WorldWideScience

Sample records for hipoacusia sensorio neural

  1. Hipoacusia neurosensorial infantil

    OpenAIRE

    Santos Santos, Saturnino

    2004-01-01

    In our setting there is a shortage of information on the importance of the risk factors involved in the onset of childhood sensorineural hearing loss and on the etiologies found. A population of 2,656 children referred to our center for hearing assessment because of risk factors was studied retrospectively. 481 children were diagnosed with unilateral or bilateral sensorineural hearing loss of any degree. The mean age at diagnosis of sensorineural hearing loss...

  2. HIPOACUSIA: TRASCENDENCIA, INCIDENCIA Y PREVALENCIA

    Directory of Open Access Journals (Sweden)

    Dra. Constanza Díaz

    2016-11-01

    Hearing loss, or hearing disability, is a prevalent condition in the population: it affects around 360 million people worldwide and causes varying degrees of disability, ranging from the physical to the social and psychological. The origin of hearing loss can be diverse, and knowing its causes and associated risk factors is essential for early diagnosis and timely treatment. The incidence and prevalence of hearing loss are expected to rise substantially in the coming years because of the demographic transition under way worldwide. It is important that the treatment and management of these patients focus not only on auditory rehabilitation but also on counseling and education to support adherence and good outcomes.

  3. HIPOACUSIA Y SISTEMA DE GARANTÍAS EXPLÍCITAS EN SALUD (GES)

    OpenAIRE

    Torrente, Dra. Mariela

    2016-01-01

    The Explicit Health Guarantees (GES) system covers three conditions related to hearing loss: hearing loss in people over 65 years of age, bilateral hearing loss of the premature infant, and treatment of moderate and severe hearing loss in children under 2 years of age. This article presents a critical analysis of the corresponding clinical guidelines, with emphasis on aspects that could be improved.

  4. Comportamiento de la hipoacusia neurosensorial en niños

    OpenAIRE

    Álvarez Amador, Héctor Eduardo; Vega Ulloa, Nuris; Castillo Toledo, Luis; Santana Álvarez, Jorge; Betancourt Camargo, María de los Ángeles; Miranda Ramos, María de los Ángeles

    2011-01-01

    Background: sensorineural hearing loss in children has serious consequences for language acquisition, an attribute that is essential for adequate learning and social performance. Objective: to study the behavior of sensorineural hearing loss in children in the province of Camagüey. Method: a descriptive study of sensorineural hearing loss in children of the province of Camagüey was carried out for the period from January 2007 to December 2009. The study universe...

  5. Aspectos éticos en el tamizaje de hipoacusia neonatal en Chile

    OpenAIRE

    Cardemil M, Felipe

    2012-01-01

    Neonatal hearing loss is one of the most frequent congenital abnormalities. Its importance lies in the fact that, if it is not detected in time, it affects language development, communication skills, and cognitive and social development. In Chile there are no accurate estimates of the population incidence of this condition in newborns, because there is no national neonatal hearing screening program. In this article...

  6. Potenciales provocados auditivos en niños con riesgo neonatal de hipoacusia

    Directory of Open Access Journals (Sweden)

    Saúl Garza Morales

    1997-02-01

    Brainstem auditory evoked potentials (BAEP) are a simple, noninvasive method for evaluating auditory function that is widely used in children for the early detection of hearing loss. Between April 1992 and May 1994, 400 Mexican children presenting at least one neonatal risk factor for hearing loss were studied. The mean age of the children studied was 6.6 months and the mean gestational age at birth was 35.1 weeks. 51% of them had been treated with amikacin. A total of 1 427 risk factors were recorded (3.5 per child), the most frequent being exposure to ototoxic drugs, hyperbilirubinemia and birthweight below 1 500 g. Peripheral hearing alterations were found in 27%, and 13% showed no response to auditory stimuli. Low birthweight, lower gestational age at birth, peak serum bilirubin concentration, sepsis, subependymal or intraventricular hemorrhage, mechanical ventilation and exposure to ototoxic drugs were significantly associated with severe or profound hearing loss.

  7. Ataxia heredo-degenerativa associada a hipoacusia

    Directory of Open Access Journals (Sweden)

    José Antonio Levy

    1964-06-01

    Three brothers, aged 16, 8 and 6 years, all male, with heredodegenerative ataxia associated, in two of them, with hearing loss, are studied. The family history includes a similar illness in a grandfather and a great-uncle. The differential diagnosis with Pierre Marie's disease, Charcot-Marie-Tooth disease, Refsum's syndrome and hypertrophic interstitial neuritis is discussed, and the resemblance of the cases studied to Friedreich's disease is emphasized. Comments are made on the association of Friedreich's disease with hearing disorders.

  8. Valoración médico legal de la hipoacusia

    OpenAIRE

    Maikel Vargas Sanabria

    2012-01-01

    This review covers the most basic aspects of sound and of the hearing process: first the physical aspects of the former and then the anatomical and physiological aspects of the latter, so that the medical examiner has at hand the elements needed to perform the clinical tests and to order whatever complementary examinations he or she deems pertinent for an adequate assessment of hearing loss of occupational origin or secondary to trauma...

  9. Perfil epidemiológico de la hipoacusia en un personal de ala rotatoria de la compañía Guaymaral (Policía Nacional De Colombia)

    OpenAIRE

    Vásquez Quintero, Rafael

    2013-01-01

    Sensorineural hearing loss is the loss of hearing caused by injury to the cochlear neurosensory elements or to the cochlear nerve due to physical or other agents, chief among which is noise. One tenth of all hearing loss is related to occupational noise exposure; however, other factors must be taken into account, such as exposure to noise outside work. In this descriptive observational...

  10. Color del iris e hipoacusia en el Síndrome de Waardenburg. Pinar del Río, Cuba Color of the iris and hypoacusis in Waardenburg Syndrome. Pinar del Rio, Cuba

    Directory of Open Access Journals (Sweden)

    Fidel Castro Pérez

    2012-06-01

    Introduction: although sensorineural hearing loss and iris pigmentary changes have both been described, the relationship between them has not been studied previously. Objectives: to describe and analyze the possible association of hearing loss, and of its depth, with iris color in a family affected by the syndrome, which would constitute a new contribution to knowledge of Waardenburg syndrome (WS). Material and method: an observational, cross-sectional, descriptive case study with some analytical aspects was carried out in people with WS from the Sandino municipality. Summary measures for qualitative variables and the chi-squared test were used to assess association at the 95% confidence level. Results: 15 individuals presented sensorineural hearing loss of varying distribution and intensity, with a predominance of bilateral brown and blue eyes. A higher frequency of hearing-impaired individuals was found among those with blue eyes, with an association between the two variables (X² = 6.47, df = 1; p = 0.01). The intensity of the hearing loss was greater among individuals with blue eyes (85.7% with severe or profound hearing loss), three times higher than for the other eye colors. Conclusions: there is a relationship between blue iris color and both the presence and the greater intensity of hearing loss in individuals with WS.
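
    As an illustration of the association test reported in this record (a chi-squared test on iris color versus hearing loss at the 95% confidence level), here is a minimal Python sketch; the counts in the 2x2 table are hypothetical placeholders, not the study's data, and SciPy is assumed to be available.

        # Hypothetical 2x2 contingency table: rows = iris color (blue, other),
        # columns = hearing status (hearing loss, no hearing loss).
        # The counts are illustrative only; they are NOT taken from the record.
        from scipy.stats import chi2_contingency

        table = [[12, 3],   # blue eyes
                 [6, 9]]    # other eye colors

        chi2, p, dof, expected = chi2_contingency(table, correction=False)
        print(f"X2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
        # At the 95% level used in the study, p < 0.05 would be read as evidence
        # of an association between iris color and hearing loss.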

  11. Evaluación de la hiperbilirrubinemia como factor de riesgo de hipoacusia neurosensorial en el programa de screening universal de hipoacusia infantil del Complejo Hospitalario Universitario Insular Materno Infantil de Gran Canaria entre los años 2007 al 2011

    OpenAIRE

    Corujo Santana, Cándido

    2014-01-01

    Doctoral program: Avances en Traumatología, Medicina del Deporte y Cuidados de Heridas. Bilirubin is a pigment that is highly toxic to biological systems, especially to the nervous system. In 1994 the Joint Committee on Infant Hearing established the list of conditions in which the incidence of hearing loss is higher than in the general population. In Spain, the CODEPEH has drawn up a list of risk indicators (updated in 2010) which, when present...

  12. Detección precoz de hipoacusia neonatal no congénita en recién nacidos sometidos a ventilación mecánica en una unidad de neonatología de junio – septiembre 2012

    OpenAIRE

    Díaz Torres, Mónica; Duque Cevallos, Sandra Marcela

    2013-01-01

    Introduction: mechanical ventilation is one of the causes of hearing loss in newborns admitted to a neonatal intensive care unit. Objective: to establish the level of risk of developing non-congenital hearing loss among neonates subjected to mechanical ventilation at the Hospital Enrique Garcés in Quito from June to September 2012. Subjects: 101 patients hospitalized in the neonatal intensive care unit were investigated, of whom 20.79%...

  13. Evaluación del riesgo de desarrollar hipoacusia en el colectivo de alumnos de conservatorios de música.

    OpenAIRE

    Santirso-Sánchez, Sara

    2013-01-01

    This work analyzes the risk of developing noise-induced hearing loss among music students as a consequence of their own practice, study and rehearsal with their instrument, given their prolonged exposure to high-intensity sound. On the one hand, the literature on hearing problems in musicians has been reviewed, together with the basic physical and biological concepts of hearing, in order to identify...

  14. Auditory evoked potentials in children at neonatal risk for hypoacusis Potenciales provocados auditivos en niños con riesgo neonatal de hipoacusia

    Directory of Open Access Journals (Sweden)

    Saúl Garza Morales

    1997-10-01

    Brainstem auditory evoked potentials provide a simple, noninvasive method of evaluating hearing function and have been widely used for early detection of hypoacusis in children. Between April 1992 and May 1994, a study was done of 400 Mexican children who presented at least one neonatal risk factor for hearing impairment. The average age of the children studied was 6.6 months and their average gestational age at birth was 35.1 weeks. Just over half of the children had been treated with amikacin. The study found 1427 risk factors (about 3.5 per child), the most common ones being exposure to ototoxic substances, hyperbilirubinemia, and birthweight below 1 500 g. In 27% of the children peripheral auditory alterations were found, and 13% showed no response to auditory stimuli. Low birthweight and lower gestational age at birth, the peak serum bilirubin concentration, sepsis, subependymal or intraventricular hemorrhage, mechanical ventilation and exposure to ototoxic substances were significantly associated with severe or profound hypoacusis.

  15. Neuropatia óptica dominante associada à hipoacusia e apresentação tardia

    Directory of Open Access Journals (Sweden)

    Eduardo Scaldini Buscacio

    2013-10-01

    Kjer optic neuropathy, or dominant optic atrophy, is the most frequent of the familial optic neuropathies. It is an optic atrophy of autosomal dominant inheritance caused by an alteration in the OPA1 gene, on chromosome 3q28, with a penetrance of 98%. Only 15% of cases have a visual acuity of 0.1 or worse, also presenting varying degrees of disc atrophy. This report aims to describe the genetic and clinical characteristics of the disease and to present measures for family counseling. To that end, a clinical case of dominant optic atrophy is reported, featuring marked loss of visual acuity, atypically late onset of manifestations, and bilateral hearing loss.

  16. Potenciales provocados auditivos en niños con riesgo neonatal de hipoacusia Auditory evoked potentials in children at neonatal risk for hypoacusis

    Directory of Open Access Journals (Sweden)

    Saúl Garza Morales

    1997-02-01

    Auditory evoked potentials of the brain stem (AEPBS) provide a simple, noninvasive method of evaluating hearing function and have been widely used for early detection of hypoacusis in children. Between April 1992 and May 1994, a study was done of 400 Mexican children who presented at least one neonatal risk factor for hearing impairment. The average age of the children studied was 6.6 months and their average gestational age at birth was 35.1 weeks. Just over half of them (51%) had been treated with amikacin. The study found 1 427 risk factors (3.5 per child), the most common ones being exposure to ototoxic substances, hyperbilirubinemia, and birthweight of less than 1 500 g. In 27% of the children, peripheral auditory changes were found, and 13% did not respond to auditory stimuli. Low birthweight and young gestational age at birth, high peak serum bilirubin concentration, sepsis, subependymal or intraventricular hemorrhage, mechanical ventilation and exposure to ototoxic substances were significantly associated with severe or profound hypoacusis.

  17. Análisis Molecular de las Mutaciones 2299delG y C759F en Individuos Colombianos con Retinitis Pigmentosa e Hipoacusia Neurosensorial

    OpenAIRE

    López, Greizy; Gelvez, Nancy Yaneth; Urrego, Luisa Fernanda; Florez, Silvia; Medina, David; Rodríguez, Vicente; Tamayo, Marta Lucía

    2014-01-01

    Objective: to determine the presence of the 2299delG and C759F mutations in 37 unrelated Colombian individuals with the association of RP and sensorineural hearing loss. Materials and methods: direct sequence analysis of exon 13 of the USH2A gene in all individuals selected for the study. Results: the 2299delG mutation was observed only in individuals with Usher syndrome type II, whereas the C759F mutation was not observed in any of the study individuals...

  18. Hipoacusia neurosensorial en un síndrome de Noonan y secuencia Poland Neurosensory hypoacusis in a Noonan's syndrome and Poland's sequence

    Directory of Open Access Journals (Sweden)

    Julianis Loraine Quintero Noa

    2010-09-01

    It is estimated that 50% of cases of profound deafness in childhood may be of genetic origin. The case is presented of a 9-year-old boy seen in the Otorhinolaryngology and Genetics services of the "William Soler" Teaching Pediatric Hospital for severe unilateral sensorineural hearing loss and congenital Mondini dysplasia of the left ear, contralateral to hypoplasia of the pectoralis major muscle, a combination consistent with Noonan syndrome and Poland sequence that is of special interest. The hearing loss was confirmed by tonal audiometry and brainstem auditory evoked potentials. Ear tomography showed cochlear hypoplasia with agenesis of the apical turn. The clinical manifestations and the importance of the otological and imaging studies in the diagnosis of the hearing loss are highlighted.

  19. Meningitis por Streptococcus suis en un paciente inmunocompetente Streptococcus suis meningitis in an immunocompetent patient

    Directory of Open Access Journals (Sweden)

    A. Nagel

    2008-09-01

    A case of Streptococcus suis meningitis in an immunocompetent patient is described. The patient presented with asthenia, generalized weakness, fever (39 °C), vomiting, impaired consciousness and temporospatial disorientation. Blood cultures (2/2) and the cerebrospinal fluid culture were positive. Preliminary identification was performed using conventional biochemical tests and was completed at the Special Bacteriology Service of INEI-ANLIS "Dr. Carlos G. Malbrán". Treatment with ampicillin and ceftriaxone was started. The isolate proved susceptible to ampicillin, cefotaxime and vancomycin. The patient progressed favorably, although mild hearing loss was noted. He was readmitted four months later with an ataxic gait, deafness in the left ear and hearing loss in the right ear, and remains under neurological and audiometric follow-up. The patient's contact with pigs was established retrospectively. The value of the clinical history in raising suspicion of this etiologic agent in cases of meningitis and bacteremia is emphasized.

  20. Beneficios económicos del implante coclear para la hipoacusia sensorineural profunda Economic benefits of the cochlear implant for treating profound sensorineural hearing loss

    Directory of Open Access Journals (Sweden)

    Augusto Peñaranda

    2012-04-01

    OBJECTIVE: To evaluate the cost-benefit (CB), cost-utility (CU) and cost-effectiveness (CE) of cochlear implantation compared with the use of hearing aids in children with profound bilateral sensorineural hearing loss. METHODS: The nonparametric propensity score matching (PSM) technique was used to assess the economic impact of the implant and to carry out the CB, CU and CE analyses. Primary information was used, taken at random from 100 patients: 62 who had undergone cochlear implant surgery (treatment group) and 38 belonging to the control group, who used hearing aids to treat profound sensorineural hearing loss. RESULTS: A difference in economic costs of close to US$ 204 000 over the life expectancy of the patients analyzed was found between the implant and the use of hearing aids, in favor of the cochlear implant; this figure represents the additional expenses that must be covered by patients using hearing aids. With this discounted value, the cost-benefit indicator shows that for every dollar invested in the cochlear implant to treat a patient, the return on the investment is US$ 2.07. CONCLUSIONS: The cochlear implant generates economic benefits for the patient. It also produces health utilities, given that a positive relationship was found for CU (gain in decibels) and CE (gain in speech discrimination).
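
    To make the economic evaluation method named in this record concrete, the following Python sketch applies nonparametric propensity score matching to a treated-versus-control cost comparison. The DataFrame `df` and its columns (age, sex, treated, lifetime_cost) are hypothetical assumptions for illustration, not the study's actual variables.

        # Minimal propensity-score-matching sketch for comparing lifetime costs
        # between a treated group (e.g. cochlear implant) and a control group
        # (e.g. hearing aids). All column names are hypothetical.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        def matched_cost_difference(df: pd.DataFrame) -> float:
            X = df[["age", "sex"]].to_numpy()      # observed covariates
            t = df["treated"].to_numpy()           # 1 = implant, 0 = hearing aid
            # 1) estimate propensity scores P(treated | covariates)
            ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
            treated = np.where(t == 1)[0]
            control = np.where(t == 0)[0]
            # 2) match each treated unit to the control with the closest score
            nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
            _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
            matched = control[idx.ravel()]
            # 3) mean cost difference over matched pairs
            costs = df["lifetime_cost"].to_numpy()
            return float(np.mean(costs[treated] - costs[matched]))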

  1. Hypoacousis prevalence in Kaiowá and Guarani indigenous children Prevalência de hipoacusia em crianças indígenas Kaiowá e Guarani

    Directory of Open Access Journals (Sweden)

    Renata Palópoli Pícoli

    2006-06-01

    OBJECTIVES: to determine the prevalence of hypoacousis in Kaiowá and Guarani indigenous children. METHODS: a cross-sectional study was performed on a sample of 126 indigenous children from zero to 59 months of age from the Caarapó Indian Reserve, Mato Grosso do Sul, Brazil. Hearing screening was performed by measuring transient evoked otoacoustic emissions, and children with altered results were retested. Cases that remained altered on retest were referred for immittance (imitanciometry) testing. RESULTS: during hearing screening, 25 (23.6%) children showed altered results; of these, 17 had normal outcomes on retest and 6 had confirmed hearing impairment and were referred for immittance testing. The prevalence of hypoacousis identified by the study reached 5.6%, with 3 (2.8%) cases suggestive of the conductive type and 3 (2.8%) of the sensorineural type; the latter were referred for complementary otorhinolaryngologic assessment to confirm the diagnosis. The hearing impairment cases identified in this study were not statistically associated with gender or age. CONCLUSIONS: the prevalence of hearing impairment found in this population suggests the need for hearing health programs to be developed together with other child health programs.

  2. Evaluation of Noise Effects in Auditory Function in Spanish Military Pilots

    Science.gov (United States)

    2005-04-01

    Matching excerpts from this report include the ENT data collected (ENT history: otitis, barotitis, hearing loss and other additional data; ENT exam: general examination, otoscopy, impedance audiometry) and cited references by Gómez Cabezas P. on aviator hearing loss and acoustic trauma in flight personnel, including "Hipoacusias del Aviador" (Rev. Aero. Astro., July 1972, Nº 380: 495-504), "Barotraumatismos óticos y sinusales" (1965, Nº 13: 50-51) and "Hipoacusias perceptivas del aviador. El trauma acústico del personal volante y de las industrias aeronáuticas".

  3. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
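
    The following Python sketch shows a rate-based ring network with a spike-frequency-adaptation (SFA) variable of the general kind discussed in this record; it is a rough illustration under assumed parameter values, not the author's exact equations.

        import numpy as np

        N, dt, steps = 64, 0.01, 2000
        theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
        # ring (translation-invariant) connectivity: local excitation, broad inhibition
        W = (np.cos(theta[:, None] - theta[None, :]) - 0.3) / N
        stim = np.exp(np.cos(theta - np.pi))       # bump-shaped external input

        r = np.zeros(N)                            # firing rates
        a = np.zeros(N)                            # adaptation variable
        tau_r, tau_a, g_a = 0.1, 1.0, 0.5
        for _ in range(steps):
            drive = W @ r + stim - g_a * a         # SFA subtracts activity-dependent inhibition
            r += dt / tau_r * (-r + np.maximum(drive, 0.0))
            a += dt / tau_a * (-a + r)             # adaptation tracks recent activity
        # comparing the equilibrium bump with g_a = 0 versus g_a > 0 illustrates
        # how SFA reshapes the influence of the external input
        print("activity peaks at angle:", theta[np.argmax(r)])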

  4. Disability due to auditory and vestibular dysfunction in a specialized care center

    OpenAIRE

    Gutiérrez-Márquez, Aralia; Jáuregui-Renaud, Kathrine; Viveros-Renteria, Leticia; Villanueva-Padrón, Laura Alejandra

    2005-01-01

    Objective: to identify the limitations in daily life experienced by patients evaluated for hearing loss or vestibular disease at an IMSS specialized care center. Method: 530 patients evaluated for the first time for hearing loss (n=252) or vestibular disease (n=278) took part, of whom 54% and 50%, respectively, were economically active. After the specialized evaluation and the administration of a questionnaire on cochleovestibular symptoms, the frequency...

  5. Evolvable synthetic neural system

    Science.gov (United States)

    Curtis, Steven A. (Inventor)

    2009-01-01

    An evolvable synthetic neural system includes an evolvable neural interface operably coupled to at least one neural basis function. Each neural basis function includes an evolvable neural interface operably coupled to a heuristic neural system to perform high-level functions and an autonomic neural system to perform low-level functions. In some embodiments, the evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy.

  6. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of central and peripheral neural system depends in part on the emergence of the correct functional connectivity in its input and output pathways. Now it is generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement for building a fully functional circuit. However, a number of experimental results obtained recently shows that the neuronal electrical activity plays an important role in the establishing of initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of theoretical description and quantitative parameters for estimation of the neuronal activity influence on growth in neural networks. In this work we propose a general framework for a theoretical description of the activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which the neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out the detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model, developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of the neuronal activity on growth processes in neural networks and may lead to a novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Dotação, habilidades sociais e bem-estar subjetivo

    Directory of Open Access Journals (Sweden)

    Maria Luiza Pontes de França-Freitas

    2017-01-01

    The aim of this study was to identify possible differences and similarities in social skills and subjective well-being associated with the domains of giftedness (general intelligence, creativity, socio-affective and sensorimotor). Participants were 269 gifted and talented children of both sexes, between eight and twelve years of age. The participants completed the Social Skills Rating System (SSRS-BR), the Multidimensional Life Satisfaction Scale for Children and the Positive and Negative Affect Scale for Children. Differences were found in some classes of social skills and in some indicators of subjective well-being for the socio-affective and sensorimotor domains and for the verbal and mathematical subdomains, the latter showing the largest differences. The results make some contributions to the area, since no studies were found comparing gifted children across different domains with respect to the variables of this study.

  8. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different than those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural network.
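
    To make the operation swap described in this record concrete, here is a small Python sketch contrasting a classical linear layer (multiply-and-sum) with a morphological layer (add-and-maximum); the weights and inputs are arbitrary illustrative values, not taken from the paper.

        import numpy as np

        def classical_layer(x: np.ndarray, W: np.ndarray) -> np.ndarray:
            """y_j = sum_i x_i * W[i, j] (linear before thresholding)."""
            return x @ W

        def morphological_layer(x: np.ndarray, W: np.ndarray) -> np.ndarray:
            """y_j = max_i (x_i + W[i, j]) (nonlinear before any thresholding)."""
            return np.max(x[:, None] + W, axis=0)

        x = np.array([0.2, 1.0, -0.5])
        W = np.random.default_rng(0).normal(size=(3, 2))
        print("classical    :", classical_layer(x, W))
        print("morphological:", morphological_layer(x, W))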

  9. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Ideally, there are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused by a set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer related processes. To assist in the advancement of this technology, Physicists use a tool kit to experiment with several neural network techniques. The goal of this research is interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. It is also important to think of the word artificial in that definition as computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  10. Usher syndrome : molecular analysis of USH2 genes and development of a next-generation sequencing platform

    OpenAIRE

    García-García, Gema

    2013-01-01

    Usher syndrome (USH) is an autosomal recessive hereditary disease characterized by the association of sensorineural hearing loss, retinitis pigmentosa and, in some cases, impaired vestibular function. Clinically, USH can be classified into three types (USH1, USH2 and USH3), mainly on the basis of the severity and progression of the hearing loss and the presence or absence of vestibular dysfunction. USH is heterogeneous both clinically and genetically and, to date, 11...

  11. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  12. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  13. Evolvable Neural Software System

    Science.gov (United States)

    Curtis, Steven A.

    2009-01-01

    The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENI s provide the basis for inter-NBF and inter-node connectivity.

  14. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approach the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than the other two neural networks. (interdisciplinary physics and related areas of science and technology)
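
    The sketch below illustrates the forward pass of a diagonal recurrent neural network of the kind compared in this record: each hidden unit has a single self-recurrent (diagonal) weight. The weights here are random placeholders; in the study they would be trained (and, for the chaotic variant, combined with a chaotic drive) to approximate the cubic symmetry map.

        import numpy as np

        rng = np.random.default_rng(1)
        n_in, n_hidden = 1, 8
        W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input weights
        w_d = rng.uniform(0.1, 0.9, size=n_hidden)            # diagonal self-recurrent weights
        W_out = rng.normal(scale=0.5, size=(1, n_hidden))     # output weights

        def drnn_step(x, h):
            """One step: h_j(t) = tanh(w_d[j] * h_j(t-1) + W_in[j] . x(t))."""
            h_new = np.tanh(w_d * h + W_in @ x)
            return W_out @ h_new, h_new

        h = np.zeros(n_hidden)
        for t, x_t in enumerate([0.3, -0.1, 0.7]):
            y, h = drnn_step(np.array([x_t]), h)
            print(f"t={t}: y={y[0]:.3f}")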

  15. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypothesis have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Prevalência de sintomas auditivos e vestibulares em trabalhadores expostos a ruído ocupacional Prevalencia de síntomas auditivos y vestibulares en trabajadores expuestos al ruido ocupacional Prevalence of auditory and vestibular symptoms among workers exposed to occupational noise

    Directory of Open Access Journals (Sweden)

    Rosalina Ogido

    2009-04-01

    The objective of the study was to estimate the prevalence of auditory and vestibular symptoms in workers exposed to occupational noise. The records of 175 workers with noise-induced hearing loss seen at an occupational health reference center in Campinas, SP, Southeastern Brazil, from 1997 to 2003 were analyzed. The variables studied were the frequency of symptoms of hearing loss, tinnitus and vertigo. Associations with age, duration of noise exposure and pure-tone hearing thresholds were analyzed using the chi-squared and Fisher's exact tests. Hearing loss was reported in 74% of cases, tinnitus in 81% and vertigo in 13.2%. Associations were found between hearing loss and age, duration of noise exposure and pure-tone thresholds, and between vertigo and duration of noise exposure; no other significant associations were found.

  17. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
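
    As a plain CPU reference for the kind of routine such a toolbox parallelizes, here is a simple threshold-based spike (peak) detector in NumPy; it is a generic detector under assumed parameters, not the EC-PC algorithm or the NPE API itself.

        import numpy as np

        def detect_spikes(x: np.ndarray, k: float = 4.0) -> np.ndarray:
            """Indices of local maxima exceeding k times a robust noise estimate."""
            sigma = np.median(np.abs(x)) / 0.6745            # robust noise standard deviation
            above = x > k * sigma
            peak = (x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:])   # larger than both neighbours
            return np.where(above[1:-1] & peak)[0] + 1

        rng = np.random.default_rng(0)
        signal = rng.normal(size=30_000)
        signal[[5_000, 12_345, 20_000]] += 8.0               # three artificial "spikes"
        print(detect_spikes(signal))  # the inserted spikes, plus any chance noise crossings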

  18. Neural chips, neural computers and application in high and superhigh energy physics experiments

    International Nuclear Information System (INIS)

    Nikityuk, N.M.; )

    2001-01-01

    The architectural peculiarities and characteristics of a series of neural chips and neural computers used in scientific instruments are considered. Trends in their development and in their use in high-energy and superhigh-energy physics experiments are described. Comparative data are given that characterize the efficient use of neural chips for useful-event selection, the classification of elementary particles, the reconstruction of charged-particle tracks and the search for hypothetical Higgs particles. The characteristics of native neural chips and accelerated neural boards are considered.

  19. Neural tissue-spheres

    DEFF Research Database (Denmark)

    Andersen, Rikke K; Johansen, Mathias; Blaabjerg, Morten

    2007-01-01

    By combining new and established protocols we have developed a procedure for isolation and propagation of neural precursor cells from the forebrain subventricular zone (SVZ) of newborn rats. Small tissue blocks of the SVZ were dissected and propagated en bloc as free-floating neural tissue...... content, thus allowing experimental studies of neural precursor cells and their niche...

  20. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  1. The Neural Border: Induction, Specification and Maturation of the territory that generates Neural Crest cells.

    Science.gov (United States)

    Pla, Patrick; Monsoro-Burq, Anne H

    2018-05-28

    The neural crest is induced at the edge between the neural plate and the nonneural ectoderm, in an area called the neural (plate) border, during gastrulation and neurulation. In recent years, many studies have explored how this domain is patterned, and how the neural crest is induced within this territory, that also participates to the prospective dorsal neural tube, the dorsalmost nonneural ectoderm, as well as placode derivatives in the anterior area. This review highlights the tissue interactions, the cell-cell signaling and the molecular mechanisms involved in this dynamic spatiotemporal patterning, resulting in the induction of the premigratory neural crest. Collectively, these studies allow building a complex neural border and early neural crest gene regulatory network, mostly composed by transcriptional regulations but also, more recently, including novel signaling interactions. Copyright © 2018. Published by Elsevier Inc.

  2. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.
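
    The following Python sketch shows a temporally asymmetric Hebbian (STDP-like) update of the sort referred to in this record; the time constants and amplitudes are illustrative assumptions, not values from the paper.

        import numpy as np

        def stdp_dw(dt: float, a_plus: float = 0.01, a_minus: float = 0.012,
                    tau: float = 20.0) -> float:
            """Weight change for a pre/post spike pair; dt = t_post - t_pre in ms."""
            if dt > 0:                          # pre fires before post -> potentiation
                return a_plus * np.exp(-dt / tau)
            return -a_minus * np.exp(dt / tau)  # post before pre -> depression

        # Because the rule is asymmetric, a reciprocal pair (w_ij, w_ji) loses its
        # balance easily: whichever direction wins early keeps being reinforced,
        # which is why the difference w_ij - w_ji can be treated as an Ising-like variable.
        for dt in (+5.0, -5.0, +20.0, -20.0):
            print(dt, round(stdp_dw(dt), 5))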

  3. Dynamics of neural cryptography.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
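
    To make the construction concrete, here is a minimal Python sketch of two tree parity machines synchronizing over a public channel, under assumed sizes (K hidden units, N inputs per unit, integer weights bounded by L); it illustrates the bidirectional Hebbian update on output agreement, not the full key-exchange protocol or an attack analysis.

        import numpy as np

        K, N, L = 3, 10, 4
        rng = np.random.default_rng(0)

        def tpm_output(w, x):
            sigma = np.sign(np.sum(w * x, axis=1))
            sigma[sigma == 0] = 1                  # break ties
            return sigma, int(np.prod(sigma))

        def hebbian_update(w, x, sigma, tau):
            for k in range(K):                     # update only units that agree with tau
                if sigma[k] == tau:
                    w[k] = np.clip(w[k] + tau * x[k], -L, L)

        wA = rng.integers(-L, L + 1, size=(K, N))
        wB = rng.integers(-L, L + 1, size=(K, N))
        for _ in range(5000):                      # public random inputs, public outputs
            x = rng.choice([-1, 1], size=(K, N))
            sA, tA = tpm_output(wA, x)
            sB, tB = tpm_output(wB, x)
            if tA == tB:                           # bidirectional case: update on agreement
                hebbian_update(wA, x, sA, tA)
                hebbian_update(wB, x, sB, tB)
        print("weights identical:", np.array_equal(wA, wB))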

  4. Dynamics of neural cryptography

    International Nuclear Information System (INIS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-01-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible

  6. Bioprinting for Neural Tissue Engineering.

    Science.gov (United States)

    Knowlton, Stephanie; Anand, Shivesh; Shah, Twisha; Tasoglu, Savas

    2018-01-01

    Bioprinting is a method by which a cell-encapsulating bioink is patterned to create complex tissue architectures. Given the potential impact of this technology on neural research, we review the current state-of-the-art approaches for bioprinting neural tissues. While 2D neural cultures are ubiquitous for studying neural cells, 3D cultures can more accurately replicate the microenvironment of neural tissues. By bioprinting neuronal constructs, one can precisely control the microenvironment by specifically formulating the bioink for neural tissues, and by spatially patterning cell types and scaffold properties in three dimensions. We review a range of bioprinted neural tissue models and discuss how they can be used to observe how neurons behave, understand disease processes, develop new therapies and, ultimately, design replacement tissues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Neural correlates and neural computations in posterior parietal cortex during perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Alexander Huk

    2012-10-01

    A recent line of work has found remarkable success in relating perceptual decision-making and the spiking activity in the macaque lateral intraparietal area (LIP). In this review, we focus on questions about the neural computations in LIP that are not answered by demonstrations of neural correlates of psychological processes. We highlight three areas of limitations in our current understanding of the precise neural computations that might underlie neural correlates of decisions: (1) empirical questions not yet answered by existing data; (2) implementation issues related to how neural circuits could actually implement the mechanisms suggested by both physiology and psychology; and (3) ecological constraints related to the use of well-controlled laboratory tasks and whether they provide an accurate window on sensorimotor computation. These issues motivate the adoption of a more general encoding-decoding framework that will be fruitful for more detailed contemplation of how neural computations in LIP relate to the formation of perceptual decisions.

  8. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most modern analyses in high energy physics use signal-versus-background classification techniques based on machine learning methods, and on neural networks in particular. Deep learning neural networks are the most promising modern technique for separating signal from background and can nowadays be widely and successfully implemented as part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.
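
    The sketch below shows, under stated assumptions, what such a signal-versus-background classifier reduces to: a small one-hidden-layer network trained by gradient descent to separate two synthetic Gaussian "event" classes. The data, network size and training schedule are illustrative and are not taken from the article.

        # Hedged sketch: tiny neural classifier for synthetic signal vs. background events.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000
        background = rng.normal(0.0, 1.0, size=(n, 4))        # 4 illustrative event features
        signal = rng.normal(0.8, 1.0, size=(n, 4))            # signal shifted relative to background
        X = np.vstack([background, signal])
        y = np.concatenate([np.zeros(n), np.ones(n)])

        # One hidden layer of tanh units, sigmoid output, batch gradient descent on cross-entropy.
        W1 = rng.normal(0, 0.5, size=(4, 8)); b1 = np.zeros(8)
        W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)
        lr = 0.1

        for epoch in range(500):
            h = np.tanh(X @ W1 + b1)                          # hidden activations
            p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))          # P(signal | event)
            grad_out = (p - y[:, None]) / len(X)              # dLoss/dz for the output unit
            dW2 = h.T @ grad_out; db2 = grad_out.sum(0)
            grad_h = (grad_out @ W2.T) * (1 - h ** 2)         # backpropagate through tanh
            dW1 = X.T @ grad_h; db1 = grad_h.sum(0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2

        p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
        print("training accuracy:", round(float(np.mean((p[:, 0] > 0.5) == y)), 3))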

  9. Neural Tube Defects

    Science.gov (United States)

    Neural tube defects are birth defects of the brain, spine, or spinal cord. They happen in the ... that she is pregnant. The two most common neural tube defects are spina bifida and anencephaly. In ...

  10. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system...... is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...
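
    A hedged sketch of the underlying idea follows: an adaptive FIR filter trained with the LMS rule learns to predict a downstream temperature signal from the upstream one, and the position of its dominant tap approximates the transport delay from which the flow can be inferred. The signal model, the 96-tap filter length and the step size are illustrative assumptions and do not reproduce the switched-current hardware described in the paper.

        # Hedged sketch: LMS adaptive filter used to estimate a transport delay.
        import numpy as np

        rng = np.random.default_rng(2)
        n_taps, true_delay, mu = 96, 17, 0.01

        upstream = rng.normal(size=20000)                      # random temperature fluctuations
        downstream = np.roll(upstream, true_delay) + 0.1 * rng.normal(size=upstream.size)

        w = np.zeros(n_taps)
        for t in range(n_taps, upstream.size):
            x = upstream[t - n_taps + 1:t + 1][::-1]           # tap-delay line, most recent sample first
            e = downstream[t] - w @ x                          # prediction error
            w += mu * e * x                                    # LMS weight update

        print("estimated delay:", int(np.argmax(np.abs(w))), "samples (true:", true_delay, ")")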

  11. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  12. NeuroMEMS: Neural Probe Microtechnologies

    Directory of Open Access Journals (Sweden)

    Sam Musallam

    2008-10-01

    Full Text Available Neural probe technologies have already had a significant positive effect on our understanding of the brain by revealing the functioning of networks of biological neurons. Probes are implanted in different areas of the brain to record and/or stimulate specific sites in the brain. Neural probes are currently used in many clinical settings for diagnosis of brain diseases such as seizures, epilepsy, migraine, Alzheimer’s, and dementia. We find these devices assisting paralyzed patients by allowing them to operate computers or robots using their neural activity. In recent years, probe technologies were assisted by rapid advancements in microfabrication and microelectronic technologies and thus are enabling highly functional and robust neural probes which are opening new and exciting avenues in neural sciences and brain machine interfaces. With a wide variety of probes that have been designed, fabricated, and tested to date, this review aims to provide an overview of the advances and recent progress in the microfabrication techniques of neural probes. In addition, we aim to highlight the challenges faced in developing and implementing ultralong multi-site recording probes that are needed to monitor neural activity from deeper regions in the brain. Finally, we review techniques that can improve the biocompatibility of the neural probes to minimize the immune response and encourage neural growth around the electrodes for long term implantation studies.

  13. Estudo da prevalência de hipoacusia em indivíduos com diabetes mellitus tipo 1 Hearing loss prevalence in patients with diabetes mellitus type 1

    Directory of Open Access Journals (Sweden)

    Diego Augusto Malucelli

    2012-06-01

    Full Text Available Diabetes mellitus (DM) is a chronic degenerative disease caused by deficient production or inadequate use of insulin. Chronic complications of DM in the auditory system can cause atrophy of the spiral ganglion, degeneration of the myelin sheath of the eighth cranial nerve, a reduction in the number of nerve fibers in the spiral lamina, and thickening of the capillary walls of the stria vascularis and of the small arteries. OBJECTIVE: To verify the hearing thresholds of individuals with type 1 DM. MATERIALS AND METHODS: Clinical study involving 60 individuals divided into a study group (diabetic) and a control group (non-diabetic); all underwent history taking, physical and otorhinolaryngological examination, and audiometric testing. RESULTS: In the study group, hearing thresholds showed statistically significant differences at 250, 500, 10,000, 11,200, 12,500, 14,000 and 16,000 Hz in both ears and in the ear averages. When the study and control groups were compared, there was a statistically significant difference, with a higher probability of hearing loss at some frequency, regardless of the ear tested, in the study group. CONCLUSIONS: The audiological findings of the study group differed significantly from those of the control group, justifying a complete audiological evaluation of type 1 diabetic patients, including high-frequency audiometry.

  14. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  15. Neurophysiology and neural engineering: a review.

    Science.gov (United States)

    Prochazka, Arthur

    2017-08-01

    Neurophysiology is the branch of physiology concerned with understanding the function of neural systems. Neural engineering (also known as neuroengineering) is a discipline within biomedical engineering that uses engineering techniques to understand, repair, replace, enhance, or otherwise exploit the properties and functions of neural systems. In most cases neural engineering involves the development of an interface between electronic devices and living neural tissue. This review describes the origins of neural engineering, the explosive development of methods and devices commencing in the late 1950s, and the present-day devices that have resulted. The barriers to interfacing electronic devices with living neural tissues are many and varied, and consequently there have been numerous stops and starts along the way. Representative examples are discussed. None of this could have happened without a basic understanding of the relevant neurophysiology. I also consider examples of how neural engineering is repaying the debt to basic neurophysiology with new knowledge and insight. Copyright © 2017 the American Physiological Society.

  16. Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); S.M. Bohte (Sander)

    2016-01-01

    textabstractBiological neurons communicate with a sparing exchange of pulses - spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using only so very few spikes to communicate. Building on

  17. Artificial neural networks in NDT

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed

    2001-01-01

    Artificial neural networks, simply known as neural networks, have attracted considerable interest in recent years largely because of a growing recognition of the potential of these computational paradigms as powerful alternative models to conventional pattern recognition or function approximation techniques. The neural networks approach is having a profound effect on almost all fields, and has been utilised in fields where experimental inter-disciplinary work is being carried out. Being a multidisciplinary subject with a broad knowledge base, Nondestructive Testing (NDT) or Nondestructive Evaluation (NDE) is no exception. This paper explains typical applications of neural networks in NDT/NDE. Three promising types of neural networks are highlighted, namely, back-propagation, binary Hopfield and Kohonen's self-organising maps. (Author)
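
    Of the three paradigms named above, the self-organising map is the easiest to sketch. The toy Python example below trains a small Kohonen map on synthetic three-cluster data standing in for inspection signals; the grid size, neighbourhood schedule and data are illustrative assumptions and are unrelated to any particular NDT application.

        # Hedged sketch of a Kohonen self-organising map on synthetic data.
        import numpy as np

        rng = np.random.default_rng(3)
        grid, dim, n_iter = (10, 10), 3, 5000
        weights = rng.random(size=grid + (dim,))
        coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij"), axis=-1)

        # Synthetic "signals": three clusters standing in for three defect classes.
        data = np.vstack([rng.normal(c, 0.05, size=(200, dim)) for c in (0.2, 0.5, 0.8)])

        for t in range(n_iter):
            lr = 0.5 * (1 - t / n_iter)                         # decaying learning rate
            sigma = 3.0 * (1 - t / n_iter) + 0.5                # decaying neighbourhood radius
            x = data[rng.integers(len(data))]
            dists = np.linalg.norm(weights - x, axis=-1)        # distance of x to every node
            bmu = np.unravel_index(np.argmin(dists), grid)      # best-matching unit
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)                   # pull BMU and neighbours towards x

        print("trained SOM weight grid shape:", weights.shape)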

  18. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  19. Cochlear anatomy: CT and MR imaging

    International Nuclear Information System (INIS)

    Martinez, Manuel; Bruno, Claudio; Martin, Eduardo; Canale, Nancy; De Luca, Laura; Spina, Juan C. h

    2002-01-01

    The authors present a brief overview of the normal cochlear anatomy with CT and MR images in order to allow a more complete identification of the pathological findings in patients with perceptive (sensorineural) hearing loss. (author)

  20. Papel funcional del óxido nítrico en el control de los movimientos oculares en el gato despierto

    OpenAIRE

    Moreno López, Bernardo

    1997-01-01

    The presence of the enzyme nNOS in certain sensory and premotor nuclei suggests that NO is involved in sensorimotor processes, although until now no direct evidence has been available. The main objective of this study was to analyse the role of NO in the control of eye movements in the alert animal. The oculomotor system was chosen for several reasons: a) the existence of nitrergic neurons in several nuclei involved in the control of eye movements...

  1. Bidirectional neural interface: Closed-loop feedback control for hybrid neural systems.

    Science.gov (United States)

    Chou, Zane; Lim, Jeffrey; Brown, Sophie; Keller, Melissa; Bugbee, Joseph; Broccard, Frédéric D; Khraiche, Massoud L; Silva, Gabriel A; Cauwenberghs, Gert

    2015-01-01

    Closed-loop neural prostheses enable bidirectional communication between the biological and artificial components of a hybrid system. However, a major challenge in this field is the limited understanding of how these components, the two separate neural networks, interact with each other. In this paper, we propose an in vitro model of a closed-loop system that allows for easy experimental testing and modification of both biological and artificial network parameters. The interface closes the system loop in real time by stimulating each network based on recorded activity of the other network, within preset parameters. As a proof of concept we demonstrate that the bidirectional interface is able to establish and control network properties, such as synchrony, in a hybrid system of two neural networks significantly more effectively than the same system without the interface or with unidirectional alternatives. This success holds promise for the application of closed-loop systems in neural prostheses, brain-machine interfaces, and drug testing.
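
    A toy simulation of the closed-loop principle described above is sketched below: the activity of each of two simulated networks is "recorded", and whenever it crosses a preset threshold a stimulation pulse is delivered to the other network. The rate dynamics, threshold and pulse amplitude are illustrative assumptions and do not model the actual in vitro interface.

        # Hedged sketch: threshold-triggered, bidirectional closed-loop stimulation between
        # two simulated activity traces.
        import numpy as np

        rng = np.random.default_rng(4)
        steps, threshold, pulse = 2000, 1.5, 1.0
        rate_bio = np.zeros(steps)        # stands in for the recorded biological network activity
        rate_art = np.zeros(steps)        # stands in for the artificial network activity

        for t in range(1, steps):
            stim_to_bio = pulse if rate_art[t - 1] > threshold else 0.0   # close the loop
            stim_to_art = pulse if rate_bio[t - 1] > threshold else 0.0
            rate_bio[t] = 0.9 * rate_bio[t - 1] + rng.normal(0.0, 0.5) + stim_to_bio
            rate_art[t] = 0.9 * rate_art[t - 1] + rng.normal(0.0, 0.5) + stim_to_art

        # A crude synchrony measure: correlation between the two activity traces.
        print("activity correlation:", round(float(np.corrcoef(rate_bio, rate_art)[0, 1]), 3))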

  2. A new perspective on behavioral inconsistency and neural noise in aging: Compensatory speeding of neural communication

    Directory of Open Access Journals (Sweden)

    S. Lee Hong

    2012-09-01

    Full Text Available This paper seeks to present a new perspective on the aging brain. Here, we make connections between two key phenomena of brain aging: (1) increased neural noise or random background activity; and (2) slowing of brain activity. Our perspective proposes the possibility that the slowing of neural processing due to decreasing nerve conduction velocities leads to a compensatory speeding of neuron firing rates. These increased firing rates lead to a broader distribution of power in the frequency spectrum of neural oscillations, which, we propose, can just as easily be interpreted as neural noise. Compensatory speeding of neural activity, as we present, is constrained by the: (A) availability of metabolic energy sources; and (B) competition for frequency bandwidth needed for neural communication. We propose that these constraints lead to the eventual inability to compensate for age-related declines in neural function that are manifested clinically as deficits in cognition, affect, and motor behavior.

  3. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  4. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
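
    To make the consensus mechanism concrete, the sketch below trains several small "stage" networks on different transforms of the same synthetic input and combines their class probabilities with weights derived from validation accuracy. The transforms, the use of scikit-learn MLPs and the accuracy-based weighting are illustrative assumptions, not the optimization procedure of the paper.

        # Hedged sketch of a consensual combination of stage networks trained on transformed inputs.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        X = rng.normal(size=(1500, 6))
        y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)          # nonlinear synthetic target
        X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

        transforms = [
            lambda Z: Z,                          # original features
            lambda Z: np.tanh(Z),                 # squashed features
            lambda Z: np.hstack([Z, Z ** 2]),     # features augmented with their squares
        ]

        stage_nets, weights = [], []
        for f in transforms:
            net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
            net.fit(f(X_train), y_train)          # each stage network sees one transform
            stage_nets.append(net)
            weights.append(net.score(f(X_val), y_val))
        weights = np.array(weights) / np.sum(weights)

        # Consensual decision: weighted average of the stage-network class probabilities.
        probs = sum(w * net.predict_proba(f(X_val)) for w, net, f in zip(weights, stage_nets, transforms))
        print("stage weights:", np.round(weights, 3),
              "consensus accuracy:", round(float(np.mean(np.argmax(probs, axis=1) == y_val)), 3))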

  5. Creative-Dynamics Approach To Neural Intelligence

    Science.gov (United States)

    Zak, Michail A.

    1992-01-01

    Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.

  6. NMDA Receptor Signaling Is Important for Neural Tube Formation and for Preventing Antiepileptic Drug-Induced Neural Tube Defects.

    Science.gov (United States)

    Sequerra, Eduardo B; Goyal, Raman; Castro, Patricio A; Levin, Jacqueline B; Borodinsky, Laura N

    2018-05-16

    Failure of neural tube closure leads to neural tube defects (NTDs), which can have serious neurological consequences or be lethal. Use of antiepileptic drugs (AEDs) during pregnancy increases the incidence of NTDs in offspring by unknown mechanisms. Here we show that during Xenopus laevis neural tube formation, neural plate cells exhibit spontaneous calcium dynamics that are partially mediated by glutamate signaling. We demonstrate that NMDA receptors are important for the formation of the neural tube and that the loss of their function induces an increase in neural plate cell proliferation and impairs neural cell migration, which result in NTDs. We present evidence that the AED valproic acid perturbs glutamate signaling, leading to NTDs that are rescued with varied efficacy by preventing DNA synthesis, activating NMDA receptors, or recruiting the NMDA receptor target ERK1/2. These findings may prompt mechanistic identification of AEDs that do not interfere with neural tube formation. SIGNIFICANCE STATEMENT Neural tube defects are one of the most common birth defects. Clinical investigations have determined that the use of antiepileptic drugs during pregnancy increases the incidence of these defects in the offspring by unknown mechanisms. This study discovers that glutamate signaling regulates neural plate cell proliferation and oriented migration and is necessary for neural tube formation. We demonstrate that the widely used antiepileptic drug valproic acid interferes with glutamate signaling and consequently induces neural tube defects, challenging the current hypotheses arguing that they are side effects of this antiepileptic drug that cause the increased incidence of these defects. Understanding the mechanisms of neurotransmitter signaling during neural tube formation may contribute to the identification and development of antiepileptic drugs that are safer during pregnancy. Copyright © 2018 the authors 0270-6474/18/384762-12$15.00/0.

  7. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  8. Neural fields theory and applications

    CERN Document Server

    Graben, Peter; Potthast, Roland; Wright, James

    2014-01-01

    With this book, the editors present the first comprehensive collection in neural field studies, authored by leading scientists in the field - among them are two of the founding-fathers of neural field theory. Up to now, research results in the field have been disseminated across a number of distinct journals from mathematics, computational neuroscience, biophysics, cognitive science and others. Starting with a tutorial for novices in neural field studies, the book comprises chapters on emergent patterns, their phase transitions and evolution, on stochastic approaches, cortical development, cognition, robotics and computation, large-scale numerical simulations, the coupling of neural fields to the electroencephalogram and phase transitions in anesthesia. The intended readership are students and scientists in applied mathematics, theoretical physics, theoretical biology, and computational neuroscience. Neural field theory and its applications have a long-standing tradition in the mathematical and computational ...

  9. Chondroitin sulfate effects on neural stem cell differentiation.

    Science.gov (United States)

    Canning, David R; Brelsford, Natalie R; Lovett, Neil W

    2016-01-01

    We have investigated the role chondroitin sulfate has on cell interactions during neural plate formation in the early chick embryo. Using tissue culture isolates from the prospective neural plate, we have measured neural gene expression profiles associated with neural stem cell differentiation. Removal of chondroitin sulfate from stage 4 neural plate tissue leads to altered associations of N-cadherin-positive neural progenitors and causes changes in the normal sequence of neural marker gene expression. Absence of chondroitin sulfate in the neural plate leads to reduced Sox2 expression and is accompanied by an increase in the expression of anterior markers of neural regionalization. Results obtained in this study suggest that the presence of chondroitin sulfate in the anterior chick embryo is instrumental in maintaining cells in the neural precursor state.

  10. Neural processing of auditory signals and modular neural control for sound tropism of walking machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Pasemann, Frank; Fischer, Joern

    2005-01-01

    and a neural preprocessing system together with a modular neural controller are used to generate a sound tropism of a four-legged walking machine. The neural preprocessing network is acting as a low-pass filter and it is followed by a network which discerns between signals coming from the left or the right....... The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired different walking patterns such that the machine walks straight, then turns towards a switched-on sound source, and then stops near to it....

  11. Enfermedad de Chagas en perros: Descripción de un caso clínico en Raza Cimarrón y su Diagnóstico Histopatológico (CHAGAS DISEASE IN DOGS: Clinic case description in a Cimarrón and Histopatologic diagnosis)

    OpenAIRE

    Dra. Heinsen, Teresita; Gonzalez, Mariana; Dra. Basmadjián, Yester; Dr. Terranova, Eduardo; Dr. De Oliveira, Victor;; Dr. Pacheco-da Silva, José P.

    2009-01-01

    Abstract: On 17 December 2008 in Médanos, Canelones, Uruguay, a 60-day-old male Cimarrón dog was presented for consultation with weakness, apathy and incoordination. Born by natural delivery in a litter of 5 puppies, it developed normally until day 53 of life, when it began to show left unilateral palpebral ptosis, accompanied by plegia of the facial and masticatory muscles and subsequent generalized muscle atrophy. On clinical examination the sensorium was normal...

  12. Elaboración y aplicación de la guía de técnicas grafo-plásticas Innovadoras“Mis Manchitas” para el desarrollo de la psicomotricidad fina de los niños y niñas del primer año de educación básica de la escuela básica “Yaruquies” de la ciudad de Riobamba, provincia de Chimborazo. Período 2013-2014.

    OpenAIRE

    Oleas Galeas, Rosa Guillermina

    2015-01-01

    Grapho-plastic techniques are a form of expression for children, since they lead to control of hand movements, the development of the pincer grasp, precision and muscle tone, indispensable skills that involve all parts of the body and establish interactions between knowledge, symbolic emotions and sensorimotor processes, so that children can express themselves in the psychosocial context and work in all areas through active techniques ...

  13. Neural overlap in processing music and speech

    Science.gov (United States)

    Peretz, Isabelle; Vuvan, Dominique; Lagrois, Marie-Élaine; Armony, Jorge L.

    2015-01-01

    Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. PMID:25646513

  14. Neural network to diagnose lining condition

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at the iron and steel works. The authors describe the neural network structure and software that are designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented. The authors note the low learning and classification errors of the proposed neural networks. To realize the proposed neural network, the specialized software has been developed.

  15. The effect of the neural activity on topological properties of growing neural networks.

    Science.gov (United States)

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure of cortical networks defines how information is transmitted and processed; it is a source of the complex spatiotemporal patterns of network development, and the creation and deletion of connections continues throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the growth of a neural network from disconnected neurons to a fully connected network. To quantify the influence of the network's activity on its topological properties, we compare it with a random growth network that does not depend on activity. Using methods from random graph theory to analyse the connection structure, we show that growth in neural networks results in the formation of a well-known "small-world" network.
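
    The kind of graph-theoretic comparison referred to above can be sketched as follows: compute the clustering coefficient and the characteristic path length for a network and for a random reference graph of the same size and density; the small-world signature is much higher clustering at a comparable path length. A Watts-Strogatz graph stands in for the activity-grown network here, which is purely an illustrative assumption.

        # Hedged sketch: small-world metrics of a stand-in "grown" network vs. a random reference.
        import networkx as nx

        n, k, p = 200, 8, 0.1
        grown = nx.watts_strogatz_graph(n, k, p, seed=1)                  # proxy for the grown network
        random_ref = nx.gnm_random_graph(n, grown.number_of_edges(), seed=1)

        for name, g in [("grown (proxy)", grown), ("random reference", random_ref)]:
            giant = g.subgraph(max(nx.connected_components(g), key=len))  # path length needs connectivity
            print(name,
                  "clustering:", round(nx.average_clustering(g), 3),
                  "path length:", round(nx.average_shortest_path_length(giant), 3))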

  16. Neural overlap in processing music and speech.

    Science.gov (United States)

    Peretz, Isabelle; Vuvan, Dominique; Lagrois, Marie-Élaine; Armony, Jorge L

    2015-03-19

    Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  17. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  18. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)
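
    As a hedged illustration of how a memristive synapse could mimic spike-timing-dependent plasticity, the sketch below applies the standard exponential STDP rule to a bounded, conductance-like weight. The time constants, amplitudes and bounds are illustrative assumptions rather than parameters of any particular memristive device.

        # Hedged sketch: exponential STDP acting on a bounded, memristor-like weight.
        import numpy as np

        a_plus, a_minus = 0.05, 0.055           # potentiation / depression amplitudes
        tau_plus, tau_minus = 20.0, 20.0        # time constants in ms
        w, w_min, w_max = 0.5, 0.0, 1.0         # conductance-like weight and its bounds

        def stdp_update(w, dt):
            """dt = t_post - t_pre in ms; positive dt means the presynaptic spike came first."""
            if dt > 0:
                dw = a_plus * np.exp(-dt / tau_plus)       # potentiation (LTP)
            else:
                dw = -a_minus * np.exp(dt / tau_minus)     # depression (LTD)
            return float(np.clip(w + dw, w_min, w_max))

        # Repeated pre-before-post pairings drive the weight up; the reverse order drives it down.
        for _ in range(50):
            w = stdp_update(w, dt=+10.0)
        print("after 50 causal pairings:  w =", round(w, 3))
        for _ in range(50):
            w = stdp_update(w, dt=-10.0)
        print("after 50 acausal pairings: w =", round(w, 3))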

  19. Neural crest contributions to the lamprey head

    Science.gov (United States)

    McCauley, David W.; Bronner-Fraser, Marianne

    2003-01-01

    The neural crest is a vertebrate-specific cell population that contributes to the facial skeleton and other derivatives. We have performed focal DiI injection into the cranial neural tube of the developing lamprey in order to follow the migratory pathways of discrete groups of cells from origin to destination and to compare neural crest migratory pathways in a basal vertebrate to those of gnathostomes. The results show that the general pathways of cranial neural crest migration are conserved throughout the vertebrates, with cells migrating in streams analogous to the mandibular and hyoid streams. Caudal branchial neural crest cells migrate ventrally as a sheet of cells from the hindbrain and super-pharyngeal region of the neural tube and form a cylinder surrounding a core of mesoderm in each pharyngeal arch, similar to that seen in zebrafish and axolotl. In addition to these similarities, we also uncovered important differences. Migration into the presumptive caudal branchial arches of the lamprey involves both rostral and caudal movements of neural crest cells that have not been described in gnathostomes, suggesting that barriers that constrain rostrocaudal movement of cranial neural crest cells may have arisen after the agnathan/gnathostome split. Accordingly, neural crest cells from a single axial level contributed to multiple arches and there was extensive mixing between populations. There was no apparent filling of neural crest derivatives in a ventral-to-dorsal order, as has been observed in higher vertebrates, nor did we find evidence of a neural crest contribution to cranial sensory ganglia. These results suggest that migratory constraints and additional neural crest derivatives arose later in gnathostome evolution.

  20. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem™ (SPW™), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  1. The Effects of GABAergic Polarity Changes on Episodic Neural Network Activity in Developing Neural Systems

    Directory of Open Access Journals (Sweden)

    Wilfredo Blanco

    2017-09-01

    Full Text Available Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in the ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these, and other, changes has on the network behavior is hard to know from experimental studies, since they all happen in parallel. One advantage of a computational approach is that one has the ability to study developmental changes in isolation. Here, we examine the effects of GABAergic synapse polarity change on the spontaneous activity of both a mean field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the “intermediate neurons.” We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral changes result from priming of these neurons. The study illustrates how even arguably the simplest of developmental changes that occur in neural systems can present non-intuitive behaviors. It also makes predictions about neural network behavioral changes

  2. Podocalyxin Is a Novel Polysialylated Neural Adhesion Protein with Multiple Roles in Neural Development and Synapse Formation

    Science.gov (United States)

    Vitureira, Nathalia; Andrés, Rosa; Pérez-Martínez, Esther; Martínez, Albert; Bribián, Ana; Blasi, Juan; Chelliah, Shierley; López-Doménech, Guillermo; De Castro, Fernando; Burgaya, Ferran; McNagny, Kelly; Soriano, Eduardo

    2010-01-01

    Neural development and plasticity are regulated by neural adhesion proteins, including the polysialylated form of NCAM (PSA-NCAM). Podocalyxin (PC) is a renal PSA-containing protein that has been reported to function as an anti-adhesin in kidney podocytes. Here we show that PC is widely expressed in neurons during neural development. Neural PC interacts with the ERM protein family, and with NHERF1/2 and RhoA/G. Experiments in vitro and phenotypic analyses of podxl-deficient mice indicate that PC is involved in neurite growth, branching and axonal fasciculation, and that PC loss-of-function reduces the number of synapses in the CNS and in the neuromuscular system. We also show that whereas some of the brain PC functions require PSA, others depend on PC per se. Our results show that PC, the second highly sialylated neural adhesion protein, plays multiple roles in neural development. PMID:20706633

  3. Neural plasticity of development and learning.

    Science.gov (United States)

    Galván, Adriana

    2010-06-01

    Development and learning are powerful agents of change across the lifespan that induce robust structural and functional plasticity in neural systems. An unresolved question in developmental cognitive neuroscience is whether development and learning share the same neural mechanisms associated with experience-related neural plasticity. In this article, I outline the conceptual and practical challenges of this question, review insights gleaned from adult studies, and describe recent strides toward examining this topic across development using neuroimaging methods. I suggest that development and learning are not two completely separate constructs and instead, that they exist on a continuum. While progressive and regressive changes are central to both, the behavioral consequences associated with these changes are closely tied to the existing neural architecture of maturity of the system. Eventually, a deeper, more mechanistic understanding of neural plasticity will shed light on behavioral changes across development and, more broadly, about the underlying neural basis of cognition. (c) 2010 Wiley-Liss, Inc.

  4. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network, Richard... ...Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to...

  5. Consciousness and neural plasticity

    DEFF Research Database (Denmark)

    changes or to abandon the strong identity thesis altogether. Were one to pursue a theory according to which consciousness is not an epiphenomenon to brain processes, consciousness may in fact affect its own neural basis. The neural correlate of consciousness is often seen as a stable structure, that is...

  6. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalman predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  7. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  8. Practical neural network recipies in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed.

  9. Rhesus monkey neural stem cell transplantation promotes neural regeneration in rats with hippocampal lesions

    Directory of Open Access Journals (Sweden)

    Li-juan Ye

    2016-01-01

    Full Text Available Rhesus monkey neural stem cells are capable of differentiating into neurons and glial cells. Therefore, neural stem cell transplantation can be used to promote functional recovery of the nervous system. Rhesus monkey neural stem cells (1 × 10⁵ cells/μL) were injected into bilateral hippocampi of rats with hippocampal lesions. Confocal laser scanning microscopy demonstrated that green fluorescent protein-labeled transplanted cells survived and grew well. Transplanted cells were detected at the lesion site, but also in the nerve fiber-rich region of the cerebral cortex and corpus callosum. Some transplanted cells differentiated into neurons and glial cells clustering along the ventricular wall, and integrated into the recipient brain. Behavioral tests revealed that spatial learning and memory ability improved, indicating that rhesus monkey neural stem cells noticeably improve spatial learning and memory abilities in rats with hippocampal lesions.

  10. Global Robust Stability of Switched Interval Neural Networks with Discrete and Distributed Time-Varying Delays of Neural Type

    Directory of Open Access Journals (Sweden)

    Huaiqin Wu

    2012-01-01

    Full Text Available By combining the theories of switched systems and interval neural networks, a mathematical model of switched interval neural networks with discrete and distributed time-varying delays of neural type is presented. A set of interval parameter-uncertain neural networks with discrete and distributed time-varying delays of neural type is used as the individual subsystems, and an arbitrary switching rule is assumed to coordinate the switching between these networks. By applying the augmented Lyapunov-Krasovskii functional approach and linear matrix inequality (LMI) techniques, a delay-dependent criterion, expressed in terms of LMIs, is obtained that ensures such switched interval neural networks are globally asymptotically robustly stable. The unknown gain matrix is determined by solving these delay-dependent LMIs. Finally, an illustrative example is given to demonstrate the validity of the theoretical results.

  11. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note addresses introduction to signal analysis and classification based on artificial feed-forward neural networks.

  12. Spike Neural Models Part II: Abstract Neural Models

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2018-02-01

    Full Text Available Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNNs), though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use less computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model, which is not biologically realistic but does quickly and easily integrate input to produce spikes. Izhikevich's model is based on the Hodgkin-Huxley model but simplified such that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in an SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness, need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating an SNN.
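
    A minimal sketch of the two models discussed follows, using forward-Euler integration: the LIF neuron with its single membrane equation and hard reset, and Izhikevich's two-equation model with the commonly quoted regular-spiking parameters (a = 0.02, b = 0.2, c = -65, d = 8). The constant input currents and the time step are illustrative assumptions.

        # Hedged sketch: leaky integrate-and-fire and Izhikevich neurons driven by constant current.
        dt, T = 0.1, 200.0                                   # ms
        steps = int(T / dt)

        # Leaky integrate-and-fire (one membrane equation plus a reset rule).
        tau_m, v_rest, v_reset, v_th, R, I = 10.0, -65.0, -65.0, -50.0, 10.0, 2.0
        v, lif_spikes = v_rest, 0
        for _ in range(steps):
            v += dt / tau_m * (-(v - v_rest) + R * I)        # membrane integration
            if v >= v_th:                                    # threshold crossing -> spike and reset
                v = v_reset
                lif_spikes += 1

        # Izhikevich model (two equations, four parameters).
        a, b, c, d, I_izh = 0.02, 0.2, -65.0, 8.0, 10.0
        v, u, izh_spikes = -65.0, 0.2 * -65.0, 0
        for _ in range(steps):
            v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + I_izh)
            u += dt * (a * (b * v - u))
            if v >= 30.0:                                    # spike cutoff and reset
                v, u = c, u + d
                izh_spikes += 1

        print("LIF spikes:", lif_spikes, " Izhikevich spikes:", izh_spikes)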

  13. Noradrenergic modulation of neural erotic stimulus perception.

    Science.gov (United States)

    Graf, Heiko; Wiegers, Maike; Metzger, Coraline Danielle; Walter, Martin; Grön, Georg; Abler, Birgit

    2017-09-01

    We recently investigated neuromodulatory effects of the noradrenergic agent reboxetine and the dopamine receptor affine amisulpride in healthy subjects on dynamic erotic stimulus processing. Whereas amisulpride left sexual functions and neural activations unimpaired, we observed detrimental activations under reboxetine within the caudate nucleus corresponding to motivational components of sexual behavior. However, broadly impaired subjective sexual functioning under reboxetine suggested effects on further neural components. We now investigated the same sample under these two agents with static erotic picture stimulation as an alternative stimulus presentation mode to potentially observe further neural treatment effects of reboxetine. Nineteen healthy males were investigated under reboxetine, amisulpride and placebo for 7 days each within a double-blind cross-over design. During fMRI, static erotic pictures were presented with preceding anticipation periods. Subjective sexual functions were assessed by a self-reported questionnaire. Neural activations were attenuated within the caudate nucleus, putamen, ventral striatum, the pregenual and anterior midcingulate cortex and in the orbitofrontal cortex under reboxetine. Subjectively diminished sexual arousal under reboxetine was correlated with attenuated neural reactivity within the posterior insula. Again, amisulpride left neural activations along with subjective sexual functioning unimpaired. Neither reboxetine nor amisulpride altered differential neural activations during anticipation of erotic stimuli. Our results verified detrimental effects of noradrenergic agents on neural motivational but also emotional and autonomic components of sexual behavior. Considering the overlap of neural network alterations with those evoked by serotonergic agents, our results suggest similar neuromodulatory effects of serotonergic and noradrenergic agents on common neural pathways relevant for sexual behavior. Copyright © 2017 Elsevier B.V. and

  14. Emerging trends in neuro engineering and neural computation

    CERN Document Server

    Lee, Kendall; Garmestani, Hamid; Lim, Chee

    2017-01-01

    This book focuses on neuro-engineering and neural computing, a multi-disciplinary field of research attracting considerable attention from engineers, neuroscientists, microbiologists and material scientists. It explores a range of topics concerning the design and development of innovative neural and brain interfacing technologies, as well as novel information acquisition and processing algorithms to make sense of the acquired data. The book also highlights emerging trends and advances regarding the applications of neuro-engineering in real-world scenarios, such as neural prostheses, diagnosis of neural degenerative diseases, deep brain stimulation, biosensors, real neural network-inspired artificial neural networks (ANNs) and the predictive modeling of information flows in neuronal networks. The book is broadly divided into three main sections including: current trends in technological developments, neural computation techniques to make sense of the neural behavioral data, and application of these technologie...

  15. Active Neural Localization

    OpenAIRE

    Chaplot, Devendra Singh; Parisotto, Emilio; Salakhutdinov, Ruslan

    2018-01-01

    Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose "Active Neural Localizer", a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of tradition...

  16. Artificial Neural Network Analysis System

    Science.gov (United States)

    2001-02-27

    Contract No. DASG60-00-M-0201. Purchase request no.: Foot in the Door-01. Title: Artificial Neural Network Analysis System. Company: Atlantic... Author: Powell, Bruce C. Report dates covered: 28-10-2000 to 27-02-2001.

  17. The equilibrium of neural firing: A mathematical theory

    Energy Technology Data Exchange (ETDEWEB)

    Lan, Sizhong, E-mail: lsz@fuyunresearch.org [Fuyun Research, Beijing, 100055 (China)

    2014-12-15

    Inspired by statistical thermodynamics, we presume that the neuron system has an equilibrium condition with respect to neural firing. We show that, even with dynamically changeable neural connections, it is inevitable for neural firing to evolve to equilibrium. To study the dynamics between neural firing and neural connections, we propose an extended communication system in which the noisy channel has a tendency towards a fixed point, implying that neural connections are always attracted into fixed points such that equilibrium can be reached. The extended communication system and its mathematics could be useful back in thermodynamics.

  18. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for parameterization of global solar radiation. The available data from twenty-one stations is used for training the neural network and the data from other ten stations is used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data was not used in the training to demonstrate the performance of the neural network in unknown stations to parameterize solar radiation. The results indicate a good agreement between the parameterized solar radiation values and actual measured values
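
    The setup described above can be sketched as a small regression network mapping (latitude, longitude, altitude, sunshine duration, period number) to a radiation value, trained on one group of stations and evaluated on held-out stations. The synthetic station data, the scikit-learn MLP and its size are illustrative assumptions standing in for the real measurements.

        # Hedged sketch: neural parameterization of solar radiation from station descriptors.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(6)

        def make_station_data(n_stations, samples_per_station=12):
            rows, targets = [], []
            for _ in range(n_stations):
                lat, lon, alt = rng.uniform(4, 14), rng.uniform(3, 15), rng.uniform(100, 1500)
                for period in range(1, samples_per_station + 1):
                    sunshine = rng.uniform(4, 10)
                    # Synthetic "measured" radiation with a mild dependence on the inputs.
                    radiation = 5 + 0.4 * sunshine + 0.001 * alt - 0.05 * lat + rng.normal(0, 0.3)
                    rows.append([lat, lon, alt, sunshine, period])
                    targets.append(radiation)
            return np.array(rows), np.array(targets)

        X_train, y_train = make_station_data(21)        # 21 stations used for training
        X_test, y_test = make_station_data(10)          # 10 unseen stations used for validation

        scaler = StandardScaler().fit(X_train)
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        model.fit(scaler.transform(X_train), y_train)

        rmse = np.sqrt(np.mean((model.predict(scaler.transform(X_test)) - y_test) ** 2))
        print("RMSE on unseen stations (synthetic units):", round(float(rmse), 3))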

  19. Neural Architectures for Control

    Science.gov (United States)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the MacIntosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on a MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
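
    For readers unfamiliar with the CMAC architecture referred to throughout, the sketch below implements its core idea on a 1-D toy problem: several offset tilings of the input space, one active weight per tiling, an output formed by summation, and training that spreads the error over the active cells. The tiling counts, learning rate and target function are illustrative assumptions, not the controllers described above.

        # Hedged sketch of a 1-D CMAC function approximator.
        import numpy as np

        n_tilings, n_cells, lr = 8, 32, 0.2
        offsets = np.linspace(0.0, 1.0 / n_cells, n_tilings, endpoint=False)
        weights = np.zeros((n_tilings, n_cells + 1))          # extra cell absorbs the tiling offset

        def active_cells(x):
            """Index of the single active cell in each tiling for an input x in [0, 1]."""
            return np.floor((x + offsets) * n_cells).astype(int)

        def predict(x):
            return weights[np.arange(n_tilings), active_cells(x)].sum()

        def train(x, target):
            idx = active_cells(x)
            error = target - weights[np.arange(n_tilings), idx].sum()
            weights[np.arange(n_tilings), idx] += lr * error / n_tilings   # share the correction

        rng = np.random.default_rng(7)
        f = lambda x: np.sin(2 * np.pi * x)                   # toy target function to learn
        for _ in range(20000):
            x = rng.random()
            train(x, f(x))

        test = np.array([0.1, 0.25, 0.6, 0.9])
        print("targets:    ", np.round(f(test), 3))
        print("predictions:", np.round([predict(x) for x in test], 3))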

  20. Recycling signals in the neural crest

    OpenAIRE

    Taneyhill, Lisa A.; Bronner-Fraser, Marianne E.

    2006-01-01

    Vertebrate neural crest cells are multipotent and differentiate into structures that include cartilage and the bones of the face, as well as much of the peripheral nervous system. Understanding how different model vertebrates utilize signaling pathways reiteratively during various stages of neural crest formation and differentiation lends insight into human disorders associated with the neural crest.

  1. Recycling signals in the neural crest.

    Science.gov (United States)

    Taneyhill, Lisa A; Bronner-Fraser, Marianne

    2005-01-01

    Vertebrate neural crest cells are multipotent and differentiate into structures that include cartilage and the bones of the face, as well as much of the peripheral nervous system. Understanding how different model vertebrates utilize signaling pathways reiteratively during various stages of neural crest formation and differentiation lends insight into human disorders associated with the neural crest.

  2. Neural control of magnetic suspension systems

    Science.gov (United States)

    Gray, W. Steven

    1993-01-01

    The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controller designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden layer feedforward networks trained via back propagation and one based on using Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.
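
    The second network paradigm mentioned, a Gaussian radial basis function network, is sketched below in its simplest form: fixed Gaussian centers with linear output weights fitted by least squares to a nonlinear map. The 1-D function being approximated and all parameter values are illustrative assumptions, not the suspension-system models used in the project.

        # Hedged sketch: Gaussian RBF network fitted to a nonlinear 1-D mapping.
        import numpy as np

        rng = np.random.default_rng(8)
        centers = np.linspace(-1.0, 1.0, 15)                  # fixed Gaussian centers
        sigma = 0.2

        def rbf_features(x):
            """Gaussian activation of every basis function for a 1-D array of inputs."""
            return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

        # Training data: a smooth nonlinear mapping plus measurement noise.
        x_train = rng.uniform(-1, 1, size=300)
        y_train = np.sin(3 * x_train) + 0.3 * x_train ** 2 + 0.05 * rng.normal(size=x_train.size)

        Phi = rbf_features(x_train)
        w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)     # linear least-squares output weights

        x_test = np.linspace(-1, 1, 5)
        print("targets:    ", np.round(np.sin(3 * x_test) + 0.3 * x_test ** 2, 3))
        print("predictions:", np.round(rbf_features(x_test) @ w, 3))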

  3. Neural components of altruistic punishment

    Directory of Open Access Journals (Sweden)

    Emily Du

    2015-02-01

    Full Text Available Altruistic punishment, which occurs when an individual incurs a cost to punish in response to unfairness or a norm violation, may play a role in perpetuating cooperation. The neural correlates underlying costly punishment have only recently begun to be explored. Here we review the current state of research on the neural basis of altruism from the perspectives of costly punishment, emphasizing the importance of characterizing elementary neural processes underlying a decision to punish. In particular, we emphasize three cognitive processes that contribute to the decision to altruistically punish in most scenarios: inequity aversion, cost-benefit calculation, and social reference frame to distinguish self from others. Overall, we argue for the importance of understanding the neural correlates of altruistic punishment with respect to the core computations necessary to achieve a decision to punish.

  4. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  5. Neural computation and the computational theory of cognition.

    Science.gov (United States)

    Piccinini, Gualtiero; Bahar, Sonya

    2013-04-01

    We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism: neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation. Copyright © 2012 Cognitive Science Society, Inc.

  6. ANT Advanced Neural Tool

    Energy Technology Data Exchange (ETDEWEB)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-07-01

    This paper describes a practical introduction to the use of Artificial Neural Networks. Artificial Neural Nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high degree of difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of Neural Net, a Multilayer Perceptron developed in the C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, together with some suggestions about hardware implementations. (Author) 15 refs.
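
    The tool described here is a multilayer perceptron written in C on OS9. As a language-neutral illustration of the same idea, rather than a rendering of ANT itself, a one-hidden-layer perceptron trained by plain backpropagation on a toy XOR task might be sketched as follows; the layer sizes, learning rate and epoch count are assumptions.

      # One-hidden-layer perceptron trained by plain backpropagation on XOR.
      # Layer sizes, learning rate and epoch count are assumptions; illustration only.
      import numpy as np

      rng = np.random.default_rng(0)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      t = np.array([[0.0], [1.0], [1.0], [0.0]])

      W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
      W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
      sig = lambda z: 1.0 / (1.0 + np.exp(-z))

      for _ in range(10000):
          h = sig(X @ W1 + b1)                 # forward pass
          y = sig(h @ W2 + b2)
          d2 = (y - t) * y * (1 - y)           # output-layer delta (squared error)
          d1 = (d2 @ W2.T) * h * (1 - h)       # hidden-layer delta
          W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
          W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

      print(np.round(y.ravel(), 2))            # should be close to [0, 1, 1, 0]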

  7. ANT Advanced Neural Tool

    International Nuclear Information System (INIS)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-01-01

    This paper describes a practical introduction to the use of Artificial Neural Networks. Artificial Neural Nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high degree of difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of Neural Net, a Multilayer Perceptron developed in the C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, together with some suggestions about hardware implementations. (Author) 15 refs

  8. Early diagnosis and treatment of unilateral or asymmetrical hearing loss in childhood: 2017 CODEPEH recommendations

    Directory of Open Access Journals (Sweden)

    Faustino Núñez Batalla

    2018-06-01

    Full Text Available Núñez, F. et al. (2018): "Diagnóstico y tratamiento precoz de la hipoacusia unilateral o asimétrica en la infancia: recomendaciones CODEPEH 2017". Revista Española de Discapacidad, 6(1): 259-280.

  9. Dynamic decomposition of spatiotemporal neural signals.

    Directory of Open Access Journals (Sweden)

    Luca Ambrogioni

    2017-05-01

    Full Text Available Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals.

  10. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning for models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all the necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  11. Implantable neurotechnologies: a review of integrated circuit neural amplifiers.

    Science.gov (United States)

    Ng, Kian Ann; Greenwald, Elliot; Xu, Yong Ping; Thakor, Nitish V

    2016-01-01

    Neural signal recording is critical in modern day neuroscience research and emerging neural prosthesis programs. Neural recording requires the use of precise, low-noise amplifier systems to acquire and condition the weak neural signals that are transduced through electrode interfaces. Neural amplifiers and amplifier-based systems are available commercially or can be designed in-house and fabricated using integrated circuit (IC) technologies, resulting in very large-scale integration or application-specific integrated circuit solutions. IC-based neural amplifiers are now used to acquire untethered/portable neural recordings, as they meet the requirements of a miniaturized form factor, light weight and low power consumption. Furthermore, such miniaturized and low-power IC neural amplifiers are now being used in emerging implantable neural prosthesis technologies. This review focuses on neural amplifier-based devices and is presented in two interrelated parts. First, neural signal recording is reviewed, and practical challenges are highlighted. Current amplifier designs with increased functionality and performance and without penalties in chip size and power are featured. Second, applications of IC-based neural amplifiers in basic science experiments (e.g., cortical studies using animal models), neural prostheses (e.g., brain/nerve machine interfaces) and treatment of neuronal diseases (e.g., DBS for treatment of epilepsy) are highlighted. The review concludes with future outlooks of this technology and important challenges with regard to neural signal amplification.

  12. Stability of a neural predictive controller scheme on a neural model

    DEFF Research Database (Denmark)

    Luther, Jim Benjamin; Sørensen, Paul Haase

    2009-01-01

    In previous works presenting various forms of neural-network-based predictive controllers, the main emphasis has been on the implementation aspects, i.e. the development of a robust optimization algorithm for the controller, which will be able to perform in real time. However, the stability issue has not been addressed specifically for these controllers. On the other hand, a number of results concerning the stability of receding horizon controllers on a nonlinear system exist. In this paper we present a proof of stability for a predictive controller controlling a neural network model. The resulting controller is tested on a nonlinear pneumatic servo system.
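
    The record concerns the stability of a neural predictive (receding horizon) controller. The sketch below shows only the generic receding-horizon loop, with a stand-in one-step model in place of a trained neural network model; the horizon, candidate input set and cost weights are assumptions and no stability result is implied.

      # Generic receding-horizon loop with a stand-in one-step model in place of a
      # trained neural network. Horizon, candidate inputs and cost weights are
      # assumptions; no stability result is implied.
      import numpy as np

      def model(x, u):
          # Stand-in for a neural one-step predictor x_{k+1} = f(x_k, u_k).
          return 0.9 * x + 0.5 * np.tanh(u)

      def mpc_action(x, ref, horizon=10, candidates=np.linspace(-2.0, 2.0, 41)):
          best_u, best_cost = 0.0, float("inf")
          for u in candidates:                 # constant input held over the horizon
              xp, cost = x, 0.0
              for _ in range(horizon):
                  xp = model(xp, u)
                  cost += (xp - ref) ** 2 + 0.01 * u ** 2
              if cost < best_cost:
                  best_u, best_cost = u, cost
          return best_u

      x, ref = 0.0, 1.0
      for _ in range(30):
          u = mpc_action(x, ref)               # optimize, apply the first move, repeat
          x = model(x, u)
      print(round(x, 2))                       # settles near the reference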

  13. Differentiation between non-neural and neural contributors to ankle joint stiffness in cerebral palsy

    NARCIS (Netherlands)

    De Gooijer-van de Groep, K.L.; De Vlugt, E.; De Groot, J.H.; Van der Heijden-Maessen, H.C.M.; Wielheesen, D.H.M.; Van Wijlen-Hempel, R.M.S.; Arendzen, J.H.; Meskers, C.G.M.

    2013-01-01

    Background: Spastic paresis in cerebral palsy (CP) is characterized by increased joint stiffness that may be of neural origin, i.e. improper muscle activation caused by e.g. hyperreflexia, or non-neural origin, i.e. altered tissue viscoelastic properties (clinically: "spasticity" vs. "contracture").

  14. Infrared neural stimulation (INS) inhibits electrically evoked neural responses in the deaf white cat

    Science.gov (United States)

    Richter, Claus-Peter; Rajguru, Suhrud M.; Robinson, Alan; Young, Hunter K.

    2014-03-01

    Infrared neural stimulation (INS) has been used in the past to evoke neural activity from hearing and partially deaf animals. All the responses were excitatory. In Aplysia californica, Duke and coworkers demonstrated that INS also inhibits neural responses [1], and similar observations were made in the vestibular system [2, 3]. In deaf white cats, which have cochleae with largely reduced spiral ganglion neuron counts and a significant degeneration of the organ of Corti, no cochlear compound action potentials could be observed during INS alone. However, combined electrical and optical stimulation demonstrated inhibitory responses during irradiation with infrared light.

  15. Neurosecurity: security and privacy for neural devices.

    Science.gov (United States)

    Denning, Tamara; Matsuoka, Yoky; Kohno, Tadayoshi

    2009-07-01

    An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. This discipline holds the potential to improve many patients' lives dramatically by offering improved, and in some cases entirely new, forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensure that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define "neurosecurity", a version of computer science security principles and methods applied to neural engineering, and discuss why neurosecurity should be a critical consideration in the design of future neural devices.

  16. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
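
    The record frames 1-D deconvolution as a matrix inversion and compares a backpropagation-based inverse with LMS and pseudo-inverse solutions. A minimal numerical sketch of the latter two baselines is given below; the kernel, signal length and step size are assumptions, and the record's neural network inverse is not reproduced.

      # 1-D deconvolution viewed as matrix inversion: pseudo-inverse versus an
      # LMS-style iterative solve. Kernel, signal and step size are assumptions;
      # the record's neural network inverse is not reproduced here.
      import numpy as np

      rng = np.random.default_rng(1)
      kernel = np.array([0.25, 0.5, 0.25])
      x_true = rng.normal(size=32)
      H = np.array([[kernel[j - i + 1] if 0 <= j - i + 1 < 3 else 0.0
                     for j in range(32)] for i in range(32)])   # convolution matrix
      y = H @ x_true

      x_pinv = np.linalg.pinv(H) @ y           # pseudo-inverse solution

      x_lms = np.zeros(32)                     # iterative gradient (LMS-style) solution
      for _ in range(5000):
          x_lms += 0.2 * H.T @ (y - H @ x_lms)

      print("pinv error:", round(np.max(np.abs(x_pinv - x_true)), 4),
            "lms error:", round(np.max(np.abs(x_lms - x_true)), 4))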

  17. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  18. Cooperating attackers in neural cryptography.

    Science.gov (United States)

    Shacham, Lanir N; Klein, Einat; Mislovaty, Rachel; Kanter, Ido; Kinzel, Wolfgang

    2004-06-01

    A successful attack strategy in neural cryptography is presented. The neural cryptosystem, based on synchronization of neural networks by mutual learning, has been recently shown to be secure under different attack strategies. The success of the advanced attacker presented here, called the "majority-flipping attacker," does not decay with the parameters of the model. This attacker's outstanding success is due to its using a group of attackers which cooperate throughout the synchronization process, unlike any other attack strategy known. An analytical description of this attack is also presented, and fits the results of simulations.
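
    The neural cryptosystem referred to here synchronizes two networks by mutual learning; the tree parity machine is the usual model for this. The sketch below shows only the two legitimate parties synchronizing under a Hebbian-style rule (K, N, L and the update rule are standard but assumed choices); the majority-flipping attack analyzed in the record is not reproduced.

      # Tree-parity-machine mutual learning between two parties (A and B).
      # K, N, L and the Hebbian-style rule are standard but assumed choices;
      # the majority-flipping attack from the record is not reproduced.
      import numpy as np

      K, N, L = 3, 10, 3
      rng = np.random.default_rng(0)
      wA = rng.integers(-L, L + 1, (K, N))
      wB = rng.integers(-L, L + 1, (K, N))

      def output(w, x):
          sigma = np.sign((w * x).sum(axis=1))
          sigma[sigma == 0] = -1               # break ties deterministically
          return sigma, int(np.prod(sigma))

      steps = 0
      while steps < 100000 and not np.array_equal(wA, wB):
          x = rng.choice([-1, 1], (K, N))      # common public input
          sA, tA = output(wA, x)
          sB, tB = output(wB, x)
          if tA == tB:                         # learn only when outputs agree
              for k in range(K):
                  if sA[k] == tA:
                      wA[k] = np.clip(wA[k] + tA * x[k], -L, L)
                  if sB[k] == tB:
                      wB[k] = np.clip(wB[k] + tB * x[k], -L, L)
          steps += 1
      print("weights identical:", np.array_equal(wA, wB), "after", steps, "rounds")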

  19. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
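
    The analysis in this record rests on the eigenvalue spectrum of the normalized Laplacian. For orientation, computing that spectrum for a small undirected graph takes only a few lines; the toy adjacency matrix below is an assumption.

      # Normalized Laplacian spectrum of a small undirected graph; the toy
      # adjacency matrix is an assumption for illustration.
      import numpy as np

      A = np.array([[0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0],
                    [1, 1, 0, 1, 0],
                    [0, 1, 1, 0, 1],
                    [0, 0, 0, 1, 0]], dtype=float)

      deg = A.sum(axis=1)
      D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
      L_norm = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian

      print(np.round(np.sort(np.linalg.eigvalsh(L_norm)), 3))  # spectrum lies in [0, 2]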

  20. Decentralized neural control application to robotics

    CERN Document Server

    Garcia-Hernandez, Ramon; Sanchez, Edgar N; Alanis, Alma y; Ruz-Hernandez, Jose A

    2017-01-01

    This book provides a decentralized approach for the identification and control of robotics systems. It also presents recent research in decentralized neural control and includes applications to robotics. Decentralized control is free from difficulties due to complexity in design, debugging, data gathering and storage requirements, making it preferable for interconnected systems. Furthermore, as opposed to the centralized approach, it can be implemented with parallel processors. This approach deals with four decentralized control schemes, which are able to identify the robot dynamics. The training of each neural network is performed on-line using an extended Kalman filter (EKF). The first indirect decentralized control scheme applies the discrete-time block control approach, to formulate a nonlinear sliding manifold. The second direct decentralized neural control scheme is based on the backstepping technique, approximated by a high order neural network. The third control scheme applies a decentralized neural i...

  1. 22nd Italian Workshop on Neural Nets

    CERN Document Server

    Bassis, Simone; Esposito, Anna; Morabito, Francesco

    2013-01-01

    This volume collects a selection of contributions which were presented at the 22nd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare (Salerno), Italy, during May 17-19, 2012. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book, as well as the workshop, is organized in three main components: two special sessions and a group of regular sessions featuring different aspects and points of view of artificial neural networks and natural intelligence, also including applications of present compelling interest.

  2. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more...

  3. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuroscience. It draws its methods in large degree from statistical physics and its potential applications lie mainly in computer science and engineering. Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives a historical presentation of neural network development and the interest in performing complex tasks. Then, an exhaustive overview of data management and network computation methods is given: supervised learning and the associative memory problem, the capacity of networks, Perceptron networks, functional link networks, Madaline (Multiple Adalines) networks, back-propagation networks, reduced coulomb energy (RCE) networks, unsupervised learning, and competitive learning and vector quantization. An example of application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors at LEAR at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  4. Neural Networks and Micromechanics

    Science.gov (United States)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  5. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is dedicated to investigators in visual pattern recognition, Artificial Neural Networking and related disciplines. The document also describes the MemBrain application environment as a powerful and easy-to-use neural network editor and simulator supporting ANN.

  6. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007
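
    The keywords of this record point to a Hopfield-like recurrent network used as an associative memory for Boolean factor analysis. Purely as background, and not the authors' algorithm, a minimal Hopfield-style store-and-recall sketch is given below; the pattern count, network size and the corrupted probe are assumptions.

      # Hopfield-style attractor network: Hebbian storage, asynchronous recall.
      # Pattern count, network size and the corrupted probe are assumptions;
      # this is background for the record's setting, not the authors' algorithm.
      import numpy as np

      rng = np.random.default_rng(2)
      patterns = rng.choice([-1, 1], size=(3, 64))        # stored binary patterns
      W = sum(np.outer(p, p) for p in patterns) / 64.0
      np.fill_diagonal(W, 0.0)                            # no self-connections

      probe = patterns[0].copy()
      probe[rng.choice(64, size=10, replace=False)] *= -1 # corrupt 10 of 64 bits

      state = probe.copy()
      for _ in range(5):                                  # asynchronous updates
          for i in rng.permutation(64):
              state[i] = 1 if W[i] @ state >= 0 else -1

      print(int((state == patterns[0]).sum()), "of 64 bits match the stored pattern")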

  7. Efecto de la sincronización rítmica en pacientes con Trastorno del Espectro Autista

    OpenAIRE

    Díaz, V.

    2017-01-01

    Research carried out in recent years emphasizes the relationship between sensory and movement alterations in autism spectrum disorders, attributing the deficit to an alteration at the cortical level and to early cerebellar dysfunction. In the field of music therapy, the sensorimotor deficit has been addressed through techniques for the compensation and/or rehabilitation of cognitive functions. The purpose of this work is to present an approach...

  8. Non-invasive neural stimulation

    Science.gov (United States)

    Tyler, William J.; Sanguinetti, Joseph L.; Fini, Maria; Hool, Nicholas

    2017-05-01

    Neurotechnologies for non-invasively interfacing with neural circuits have been evolving from those capable of sensing neural activity to those capable of restoring and enhancing human brain function. Generally referred to as non-invasive neural stimulation (NINS) methods, these neuromodulation approaches rely on electrical, magnetic, photonic, and acoustic or ultrasonic energy to influence nervous system activity, brain function, and behavior. Evidence that has been mounting for decades shows that advanced neural engineering of NINS technologies will indeed transform the way humans treat diseases, interact with information, communicate, and learn. The physics underlying the ability of various NINS methods to modulate nervous system activity can be quite different from one another depending on the energy modality used, as we briefly discuss. For members of commercial and defense industry sectors that have not traditionally engaged in neuroscience research and development, the science, engineering and technology required to advance NINS methods beyond the state-of-the-art presents tremendous opportunities. Within the past few years alone there have been large increases in global investments made by federal agencies, foundations, private investors and multinational corporations to develop advanced applications of NINS technologies. Driven by these efforts, NINS methods and devices have recently been introduced to mass markets via the consumer electronics industry. Further, NINS continues to be explored in a growing number of defense applications focused on enhancing human dimensions. The present paper provides a brief introduction to the field of non-invasive neural stimulation by highlighting some of the more common methods in use or under current development today.

  9. Neural activation in stress-related exhaustion

    DEFF Research Database (Denmark)

    Gavelin, Hanna Malmberg; Neely, Anna Stigsdotter; Andersson, Micael

    2017-01-01

    The primary purpose of this study was to investigate the association between burnout and neural activation during working memory processing in patients with stress-related exhaustion. Additionally, we investigated the neural effects of cognitive training as part of stress rehabilitation. Fifty... association between burnout level and working memory performance was found; however, our findings indicate that frontostriatal neural responses related to working memory were modulated by burnout severity. We suggest that patients with high levels of burnout need to recruit additional cognitive resources to uphold task performance. Following cognitive training, increased neural activation was observed during 3-back in working memory-related regions, including the striatum; however, the low sample size limits any firm conclusions.

  10. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  11. Neural constructivism or self-organization?

    NARCIS (Netherlands)

    van der Maas, H.L.J.; Molenaar, P.C.M.

    2000-01-01

    Comments on the article by S. R. Quartz et al (see record 1998-00749-001) which discussed the constructivist perspective of interaction between cognition and neural processes during development and consequences for theories of learning. Three arguments are given to show that neural constructivism

  12. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
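
    For reference, the generalized delta rule mentioned in this record is conventionally written as below, with η the learning rate, o_i the presynaptic activation, net_j the unit's net input, f its activation function and t_j the target; this is the textbook form, not necessarily NNETS's exact notation.

      \Delta w_{ji} \;=\; \eta\, \delta_j\, o_i ,
      \qquad
      \delta_j \;=\;
      \begin{cases}
        (t_j - o_j)\, f'(\mathrm{net}_j) & \text{if } j \text{ is an output unit},\\
        f'(\mathrm{net}_j) \sum_k \delta_k\, w_{kj} & \text{if } j \text{ is a hidden unit}.
      \end{cases}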

  13. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate the biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.

  14. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. 38 CFR 17.149 - Sensori-neural aids.

    Science.gov (United States)

    2010-07-01

    Title 38 (Pensions, Bonuses, and Veterans' Relief), Part 17, Prosthetic, Sensory, and Rehabilitative Aids, § 17.149 Sensori-neural aids (revised as of 2010-07-01): (a) Notwithstanding any other provision of this part, VA will furnish needed sensori-neural aids (i.e., eyeglasses, contact lenses...

  16. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. ... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of the calibration data set with some degree...

  17. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of the basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks have been discussed in detail through illustrative examples, methods and generic applications. The extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank.

  18. Neural networks at the Tevatron

    International Nuclear Information System (INIS)

    Badgett, W.; Burkett, K.; Campbell, M.K.; Wu, D.Y.; Bianchin, S.; DeNardi, M.; Pauletta, G.; Santi, L.; Caner, A.; Denby, B.; Haggerty, H.; Lindsey, C.S.; Wainer, N.; Dall'Agata, M.; Johns, K.; Dickson, M.; Stanco, L.; Wyss, J.L.

    1992-10-01

    This paper summarizes neural network applications at the Fermilab Tevatron, including the first online hardware application in high energy physics (muon tracking); the CDF and D0 neural network triggers; offline quark/gluon discrimination at CDF; and a new tool for top to multijets recognition at CDF.

  19. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451

  20. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.
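
    The landscape-flux decomposition described in these two records is usually summarized by splitting the deterministic driving force into a gradient part and a curl-like flux part. The standard form is given below for orientation (P_ss is the steady-state probability, J_ss the steady-state probability flux, D the diffusion matrix); it is not claimed to match the paper's exact equations.

      \mathbf{F}(\mathbf{x}) \;=\; -\,D\,\nabla U(\mathbf{x}) \;+\; \frac{\mathbf{J}_{ss}(\mathbf{x})}{P_{ss}(\mathbf{x})},
      \qquad
      U(\mathbf{x}) \;=\; -\ln P_{ss}(\mathbf{x}),
      \qquad
      \nabla\!\cdot\mathbf{J}_{ss}(\mathbf{x}) \;=\; 0 .

    The gradient term pulls the state toward the attractors of the landscape U, while the divergence-free flux term drives the rotational motion that the records associate with neural oscillations.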

  1. Application of neural network to CT

    International Nuclear Information System (INIS)

    Ma, Xiao-Feng; Takeda, Tatsuoki

    1999-01-01

    This paper presents a new method for two-dimensional image reconstruction by using a multilayer neural network. Multilayer neural networks are extensively investigated and practically applied to the solution of various problems such as inverse problems or time series prediction problems. By learning an input-output mapping from a set of examples, neural networks can be regarded as synthesizing an approximation of a multidimensional function (that is, solving the problem of hypersurface reconstruction, including smoothing and interpolation). From this viewpoint, neural networks are well suited to the solution of CT image reconstruction. Though a conventionally used objective function of a neural network is composed of a sum of squared errors of the output data, we can define an objective function composed of a sum of residues of an integral equation. By employing an appropriate line integral for this integral equation, we can construct a neural network that can be used for CT. We applied this method to some model problems and obtained satisfactory results. As it is not necessary to discretize the integral equation in this reconstruction method, its application to problems with complicated geometrical shapes is also feasible. Moreover, in neural networks interpolation is performed quite smoothly; as a result, inverse mapping can be achieved smoothly even in the presence of experimental and numerical errors. However, use of the conventional back propagation technique for optimization leads to an expensive computation cost. To overcome this drawback, 2nd order optimization methods or parallel computing will be applied in the future. (J.P.N.)
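
    One plausible way to write the objective function sketched in this record, with the network output integrated along each ray and compared with the measured projection, is the following; the notation f_NN, L_i, p_i and the squared-residue form are assumptions consistent with the abstract, not the paper's exact formula.

      E(\mathbf{w}) \;=\; \sum_{i=1}^{M} \left( \int_{L_i} f_{\mathrm{NN}}(x, y;\, \mathbf{w})\, \mathrm{d}\ell \;-\; p_i \right)^{2}

    Here f_NN(x, y; w) is the network's approximation of the image, L_i the i-th ray path, p_i the corresponding measured projection value, and M the number of measurements.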

  2. Methodology of Neural Design: Applications in Microwave Engineering

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2006-06-01

    Full Text Available In the paper, an original methodology for the automatic creation of neural models of microwave structures is proposed and verified. Following the methodology, neural models of the prescribed accuracy are built within minimal CPU time. Validity of the proposed methodology is verified by developing neural models of selected microwave structures. Functionality of the neural models is verified in a design: a neural model is joined with a genetic algorithm to find a global minimum of a formulated objective function. The objective function is minimized using different versions of genetic algorithms and their mutual combinations. The verified methodology for the automated creation of accurate neural models of microwave structures, and their association with global optimization routines, are the most important original features of the paper.

  3. Differentiation between non-neural and neural contributors to ankle joint stiffness in cerebral palsy.

    Science.gov (United States)

    de Gooijer-van de Groep, Karin L; de Vlugt, Erwin; de Groot, Jurriaan H; van der Heijden-Maessen, Hélène C M; Wielheesen, Dennis H M; van Wijlen-Hempel, Rietje M S; Arendzen, J Hans; Meskers, Carel G M

    2013-07-23

    Spastic paresis in cerebral palsy (CP) is characterized by increased joint stiffness that may be of neural origin, i.e. improper muscle activation caused by e.g. hyperreflexia, or non-neural origin, i.e. altered tissue viscoelastic properties (clinically: "spasticity" vs. "contracture"). Differentiation between these components is hard to achieve by common manual tests. We applied an assessment instrument to obtain quantitative measures of neural and non-neural contributions to ankle joint stiffness in CP. Twenty-three adolescents with CP and eleven healthy subjects were seated with their foot fixated to an electrically powered single-axis footplate. Passive ramp-and-hold rotations were applied over the full ankle range of motion (RoM) at low and high velocities. Subject-specific tissue stiffness, viscosity and reflexive torque were estimated from ankle angle, torque and triceps surae EMG activity using a neuromuscular model. In CP, triceps surae reflexive torque was on average 5.7 times larger (p = .002) and tissue stiffness 2.1 times larger (p = .018) compared to controls. High tissue stiffness was associated with reduced RoM (p < ...) ... therapy.

  4. Differentiation state determines neural effects on microvascular endothelial cells

    International Nuclear Information System (INIS)

    Muffley, Lara A.; Pan, Shin-Chen; Smith, Andria N.; Ga, Maricar; Hocking, Anne M.; Gibran, Nicole S.

    2012-01-01

    Growing evidence indicates that nerves and capillaries interact paracrinely in uninjured skin and cutaneous wounds. Although mature neurons are the predominant neural cell in the skin, neural progenitor cells have also been detected in uninjured adult skin. The aim of this study was to characterize differential paracrine effects of neural progenitor cells and mature sensory neurons on dermal microvascular endothelial cells. Our results suggest that neural progenitor cells and mature sensory neurons have unique secretory profiles and distinct effects on dermal microvascular endothelial cell proliferation, migration, and nitric oxide production. Neural progenitor cells and dorsal root ganglion neurons secrete different proteins related to angiogenesis. Specific to neural progenitor cells were dipeptidyl peptidase-4, IGFBP-2, pentraxin-3, serpin f1, TIMP-1, TIMP-4 and VEGF. In contrast, endostatin, FGF-1, MCP-1 and thrombospondin-2 were specific to dorsal root ganglion neurons. Microvascular endothelial cell proliferation was inhibited by dorsal root ganglion neurons but unaffected by neural progenitor cells. In contrast, microvascular endothelial cell migration in a scratch wound assay was inhibited by neural progenitor cells and unaffected by dorsal root ganglion neurons. In addition, nitric oxide production by microvascular endothelial cells was increased by dorsal root ganglion neurons but unaffected by neural progenitor cells. -- Highlights: ► Dorsal root ganglion neurons, not neural progenitor cells, regulate microvascular endothelial cell proliferation. ► Neural progenitor cells, not dorsal root ganglion neurons, regulate microvascular endothelial cell migration. ► Neural progenitor cells and dorsal root ganglion neurons do not effect microvascular endothelial tube formation. ► Dorsal root ganglion neurons, not neural progenitor cells, regulate microvascular endothelial cell production of nitric oxide. ► Neural progenitor cells and dorsal root

  5. Proposal of a model of mammalian neural induction

    Science.gov (United States)

    Levine, Ariel J.; Brivanlou, Ali H.

    2009-01-01

    How does the vertebrate embryo make a nervous system? This complex question has been at the center of developmental biology for many years. The earliest step in this process, the induction of neural tissue, is intimately linked to patterning of the entire early embryo, and the molecular and embryological basis of these processes is beginning to emerge. Here, we analyze classic and cutting-edge findings on neural induction in the mouse. We find that data from genetics, tissue explants, tissue grafting, and molecular marker expression support a coherent framework for mammalian neural induction. In this model, the gastrula organizer of the mouse embryo inhibits BMP signaling to allow neural tissue to form as a default fate, in the absence of instructive signals. The first neural tissue induced is anterior and subsequent neural tissue is posteriorized to form the midbrain, hindbrain, and spinal cord. The anterior visceral endoderm protects the pre-specified anterior neural fate from similar posteriorization, allowing formation of forebrain. This model is very similar to the default model of neural induction in the frog, thus bridging the evolutionary gap between amphibians and mammals. PMID:17585896

  6. Tuning Neural Phase Entrainment to Speech.

    Science.gov (United States)

    Falk, Simone; Lanzilotti, Cosima; Schön, Daniele

    2017-08-01

    Musical rhythm positively impacts on subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates neural response as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies during speech processing compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.

  7. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
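
    KTD(λ) itself involves kernel-based value expansions with eligibility traces. As a much-reduced illustration of the underlying idea, a growing kernel expansion updated by temporal-difference errors, a KTD(0)-style value estimator on a toy task could be sketched as follows; the kernel width, learning rate, discount and task are assumptions, and no sparsification is applied.

      # KTD(0)-style value estimation with a growing kernel expansion; a much
      # reduced sketch of the kernel temporal-difference idea, not the paper's
      # KTD(lambda) decoder. Kernel width, learning rate, discount and the toy
      # chain task are assumptions.
      import numpy as np

      def kernel(a, b, width=0.5):
          return np.exp(-((a - b) ** 2) / (2 * width ** 2))

      centers, alphas = [], []                 # kernel expansion of the value function

      def value(x):
          return sum(a * kernel(c, x) for c, a in zip(centers, alphas))

      gamma, lr = 0.9, 0.3
      rng = np.random.default_rng(0)
      for _ in range(80):                      # random-walk episodes toward x = 5
          x = 0.0
          while x < 5.0:
              x_next = x + rng.uniform(0.0, 1.0)
              r = 1.0 if x_next >= 5.0 else 0.0            # reward only at the goal
              td_error = r + gamma * value(x_next) * (x_next < 5.0) - value(x)
              centers.append(x)                # add a kernel unit at the visited state
              alphas.append(lr * td_error)
              x = x_next

      print(round(value(4.5), 2), round(value(0.5), 2))    # value rises toward the goal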

  8. Neural synchronization during face-to-face communication.

    Science.gov (United States)

    Jiang, Jing; Dai, Bohan; Peng, Danling; Zhu, Chaozhe; Liu, Li; Lu, Chunming

    2012-11-07

    Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.

  9. Neural principles of memory and a neural theory of analogical insight

    Science.gov (United States)

    Lawson, David I.; Lawson, Anton E.

    1993-12-01

    Grossberg's principles of neural modeling are reviewed and extended to provide a neural level theory to explain how analogies greatly increase the rate of learning and can, in fact, make learning and retention possible. In terms of memory, the key point is that the mind is able to recognize and recall when it is able to match sensory input from new objects, events, or situations with past memory records of similar objects, events, or situations. When a match occurs, an adaptive resonance is set up in which the synaptic strengths of neurons are increased; thus a long term record of the new input is formed in memory. Systems of neurons called outstars and instars are presumably the underlying units that enable this to occur. Analogies can greatly facilitate learning and retention because they activate the outstars (i.e., the cells that are sampling the to-be-learned pattern) and cause the neural activity to grow exponentially by forming feedback loops. This increased activity insures the boost in synaptic strengths of neurons, thus causing storage and retention in long-term memory (i.e., learning).

  10. Neural Network Based Load Frequency Control for Restructuring ...

    African Journals Online (AJOL)

    Neural Network Based Load Frequency Control for Restructuring Power Industry. ... an artificial neural network (ANN) application of load frequency control (LFC) of a Multi-Area power system by using a neural network controller is presented.

  11. Neural substrates of decision-making.

    Science.gov (United States)

    Broche-Pérez, Y; Herrera Jiménez, L F; Omar-Martínez, E

    2016-06-01

    Decision-making is the process of selecting a course of action from among 2 or more alternatives by considering the potential outcomes of selecting each option and estimating its consequences in the short, medium and long term. The prefrontal cortex (PFC) has traditionally been considered the key neural structure in the decision-making process. However, new studies support the hypothesis that describes a complex neural network including both cortical and subcortical structures. The aim of this review is to summarise evidence on the anatomical structures underlying the decision-making process, considering new findings that support the existence of a complex neural network that gives rise to this complex neuropsychological process. Current evidence shows that the cortical structures involved in decision-making include the orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), and dorsolateral prefrontal cortex (DLPFC). This process is assisted by subcortical structures including the amygdala, thalamus, and cerebellum. Findings to date show that both cortical and subcortical brain regions contribute to the decision-making process. The neural basis of decision-making is a complex neural network of cortico-cortical and cortico-subcortical connections which includes subareas of the PFC, limbic structures, and the cerebellum. Copyright © 2014 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  12. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, in which the neural network topology and other parameters were held fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
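
    As a toy illustration of the genetic-algorithm training route compared in this record (the original work was done in MATLAB on exchange-rate data), a GA searching the weights of a small fixed-topology network on an XOR task might look like this; the population size, mutation scale, selection scheme and task are assumptions.

      # Genetic algorithm searching the weights of a small fixed-topology network
      # on an XOR toy task. Population size, mutation scale, selection and the task
      # are assumptions; the record's own study used MATLAB and exchange-rate data.
      import numpy as np

      rng = np.random.default_rng(3)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      t = np.array([0.0, 1.0, 1.0, 0.0])

      def forward(w, x):
          W1 = w[:8].reshape(2, 4); b1 = w[8:12]
          W2 = w[12:16];            b2 = w[16]
          h = np.tanh(x @ W1 + b1)
          return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

      def fitness(w):
          return -np.mean((forward(w, X) - t) ** 2)        # negative mean squared error

      pop = rng.normal(0, 1, (60, 17))
      for _ in range(300):
          scores = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(scores)[-20:]]          # truncation selection
          children = parents[rng.integers(0, 20, 60)] + rng.normal(0, 0.15, (60, 17))
          children[0] = parents[-1]                        # elitism: keep the best
          pop = children

      best = pop[np.argmax([fitness(w) for w in pop])]
      print(np.round(forward(best, X), 2))                 # ideally close to [0, 1, 1, 0]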

  13. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed.

  14. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  15. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used to design, train and test the artificial neural network. The robust design of artificial neural networks methodology was used to design the network; this methodology ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system and using a systematic and experimental strategy. (Author)
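
    As a very loose illustration of the unfolding idea described above (count rates in, spectrum bins plus equivalent doses out), the sketch below trains one multi-output network on synthetic stand-in data. The sphere count, bin count, response matrix and dose coefficients are all invented placeholders, and the robust-design methodology of the paper is not reproduced.

```python
# Minimal sketch of the unfolding idea (not the authors' robust-design procedure):
# one multi-output network mapping Bonner-sphere count rates to spectrum bins plus
# equivalent doses, trained on purely synthetic stand-in data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

n_spheres, n_bins, n_doses = 7, 31, 13   # illustrative sizes, not the paper's
n_spectra = 187                          # number of reference spectra in the record

# Stand-in response matrix, reference spectra and dose conversion (the real ones
# come from the IAEA compilation and the Bonner sphere response functions).
response = rng.uniform(0.1, 1.0, (n_spheres, n_bins))
spectra = rng.dirichlet(np.ones(n_bins), size=n_spectra)
doses = spectra @ rng.uniform(0.5, 2.0, (n_bins, n_doses))
counts = spectra @ response.T            # simulated count rates per sphere

targets = np.hstack([spectra, doses])    # unfold spectrum and doses jointly
model = MLPRegressor(hidden_layer_sizes=(14,), max_iter=5000, random_state=0)
model.fit(counts, targets)

prediction = model.predict(counts[:1])[0]
print("first 5 unfolded bins:", prediction[:5])
print("predicted equivalent doses:", prediction[n_bins:])
```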

  16. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    International Nuclear Information System (INIS)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R.

    2006-01-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used to design, train and test the artificial neural network. The robust design of artificial neural networks methodology was used to design the network; this methodology ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system and using a systematic and experimental strategy. (Author)

  17. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. Decisions based on prediction are crucial, because prediction helps estimate the forex value at a given time in the future and can therefore reduce the risk of loss. The aim of this research was to predict forex movements using a neural network model with per-1-minute time-series data, in order to determine the prediction accuracy and thereby reduce the risk of running a forex business. The research method consisted of data collection followed by training, learning and testing using a neural network. After evaluation, the results show that the neural network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk of running a forex business. Keywords: prediction, forex, neural network.
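
    A hedged sketch of the style of evaluation reported above (a mean accuracy with a spread, obtained from one-minute windows): the code builds a windowed direction-of-movement dataset from a synthetic price series and cross-validates a small classifier. The data, window length and model are placeholders, so the resulting figure will not match the 0.431 +/- 0.096 reported in the abstract.

```python
# Hedged sketch of a windowed, cross-validated evaluation on 1-minute data.
# Prices are synthetic; the window length and classifier are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
price = 14000 + np.cumsum(rng.normal(0, 2.0, 2000))   # fake 1-minute quotes

window = 10
X = np.array([np.diff(price[i:i + window + 1]) for i in range(len(price) - window - 1)])
y = (np.diff(price)[window:] > 0).astype(int)          # next-minute direction (up = 1)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```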

  18. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  19. Distributed Recurrent Neural Forward Models with Neural Control for Complex Locomotion in Walking Robots

    DEFF Research Database (Denmark)

    Dasgupta, Sakyasingha; Goldschmidt, Dennis; Wörgötter, Florentin

    2015-01-01

    Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain... here, an artificial bio-inspired walking system is presented which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated...

  20. In-vitro differentiation induction of neural stem cells

    NARCIS (Netherlands)

    Balasubramaniyan, Veerakumar

    2006-01-01

    Neural stem cells generate the three main cell types of our nervous system. Veerakumar Balasubramaniyan investigated how neural stem cells can be induced to produce specific neural cell types. Using genetic techniques, he succeeded in obtaining oligodendrocytes:

  1. Neural entrainment to the rhythmic structure of music.

    Science.gov (United States)

    Tierney, Adam; Kraus, Nina

    2015-02-01

    The neural resonance theory of musical meter explains musical beat tracking as the result of entrainment of neural oscillations to the beat frequency and its higher harmonics. This theory has gained empirical support from experiments using simple, abstract stimuli. However, to date there has been no empirical evidence for a role of neural entrainment in the perception of the beat of ecologically valid music. Here we presented participants with a single pop song with a superimposed bassoon sound. This stimulus was either lined up with the beat of the music or shifted away from the beat by 25% of the average interbeat interval. Both conditions elicited a neural response at the beat frequency. However, although the on-the-beat condition elicited a clear response at the first harmonic of the beat, this frequency was absent in the neural response to the off-the-beat condition. These results support a role for neural entrainment in tracking the metrical structure of real music and show that neural meter tracking can be disrupted by the presentation of contradictory rhythmic cues.
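
    The analysis described above rests on reading out spectral power at the beat frequency and its first harmonic from the neural response. The sketch below shows that frequency-tagging readout on a synthetic signal; the sampling rate, beat frequency and noise level are illustrative assumptions, not parameters from the study.

```python
# Hedged sketch of the frequency-tagging idea: take the spectrum of a response
# time course and read out power at the beat frequency and its first harmonic.
# The signal is synthetic; it only stands in for an EEG-derived response.
import numpy as np

fs = 250.0                 # sampling rate (Hz), illustrative
beat = 2.0                 # beat frequency (~120 BPM), illustrative
t = np.arange(0, 60, 1 / fs)

# "On the beat" stand-in: energy at the beat and its first harmonic, plus noise.
signal = (np.sin(2 * np.pi * beat * t)
          + 0.5 * np.sin(2 * np.pi * 2 * beat * t)
          + 0.8 * np.random.default_rng(3).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f_target):
    """Power in the frequency bin closest to f_target."""
    return spectrum[np.argmin(np.abs(freqs - f_target))]

print("power at beat frequency      :", power_at(beat))
print("power at first harmonic (2x) :", power_at(2 * beat))
```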

  2. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  3. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, artificial neural networks (ANNs) have made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using a genetic algorithm (GA) to optimize the topology of the ANN, but they suffer from several limitations: (1) the computation time required to train the ANN several times in order to reach the required average weights, (2) the slowness of the GA optimization process, and (3) fitness noise appearing in the optimization of the ANN. This research suggests new approaches to overcome these limitations and to find optimal neural network architectures for learning particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case study of a real complex system. The proposed system has proved its significant performance compared to two common methods used in diagnostic applications.

  4. Diplegia facial traumatica

    Directory of Open Access Journals (Sweden)

    J. Fortes-Rego

    1975-12-01

    Full Text Available A case of incomplete bilateral facial paralysis associated with left-sided hearing loss following craniocerebral trauma, with radiologically demonstrated fractures, is reported. Some considerations are put forward in an attempt to relate these manifestations to fractures of the temporal bone.

  5. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  6. Recent Advances in Neural Recording Microsystems

    Directory of Open Access Journals (Sweden)

    Benoit Gosselin

    2011-04-01

    Full Text Available The accelerating pace of research in neuroscience has created a considerable demand for neural interfacing microsystems capable of monitoring the activity of large groups of neurons. These emerging tools have revealed a tremendous potential for the advancement of knowledge in brain research and for the development of useful clinical applications. They can extract the relevant control signals directly from the brain enabling individuals with severe disabilities to communicate their intentions to other devices, like computers or various prostheses. Such microsystems are self-contained devices composed of a neural probe attached with an integrated circuit for extracting neural signals from multiple channels, and transferring the data outside the body. The greatest challenge facing development of such emerging devices into viable clinical systems involves addressing their small form factor and low-power consumption constraints, while providing superior resolution. In this paper, we survey the recent progress in the design and the implementation of multi-channel neural recording Microsystems, with particular emphasis on the design of recording and telemetry electronics. An overview of the numerous neural signal modalities is given and the existing microsystem topologies are covered. We present energy-efficient sensory circuits to retrieve weak signals from neural probes and we compare them. We cover data management and smart power scheduling approaches, and we review advances in low-power telemetry. Finally, we conclude by summarizing the remaining challenges and by highlighting the emerging trends in the field.

  7. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  8. Neural networks in a management information systems

    Directory of Open Access Journals (Sweden)

    Jana Weinlichová

    2009-01-01

    Full Text Available Management Information Systems and Business Intelligence are used to gain an overview of all the data that are collected, analyzed and evaluated, and to predict future events. Where standard data-processing methods cannot be applied, Artificial Intelligence can be applied with benefit. This article refers to the proven abilities of Neural Networks. Neural Networks are supported by many software products designed to provide effective solutions to managerial issues, and those products are offered as primary support for solving such issues. We tried to relate products that use Neural Networks to Management Information Systems, in order to find a realistic possibility of applying Neural Networks as a direct part of a Management Information System (MIS). The article presents possibilities for applying Neural Networks to different types of tasks in MIS.

  9. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.

  10. Optimal neural computations require analog processors

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural networks. Further, the focus will be on hardware-imposed constraints. They will present recent results for three different alternatives of parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and the delay will be related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and the authors suggest the following two alternatives: (1) cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow the use of the third dimension (e.g. using optical interconnections).

  11. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.
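
    The key object in the record above is a localized RBF network that accurately approximates the unknown closed-loop dynamics along a periodic trajectory, where the partial persistence-of-excitation condition holds. The sketch below is not the paper's adaptive controller; it only fits Gaussian RBF weights by batch least squares to a scalar term sampled on a periodic orbit, and shows that the approximation is good on the orbit but degrades away from it.

```python
# Minimal illustration (not the paper's adaptive law): localized Gaussian RBFs
# fitted by least squares to an "unknown" scalar dynamics term sampled along a
# periodic orbit, the kind of local approximation deterministic learning yields.
import numpy as np

# Periodic state trajectory x(t) = (sin t, cos t) and an unknown term f along it.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
X = np.column_stack([np.sin(t), np.cos(t)])
f = np.sin(2 * t) + 0.3 * np.cos(3 * t)

# RBF centres on a regular grid covering the state space, common width sigma.
grid = np.linspace(-1.2, 1.2, 9)
centres = np.array([(a, b) for a in grid for b in grid])
sigma = 0.35

def rbf_features(points):
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

Phi = rbf_features(X)
weights, *_ = np.linalg.lstsq(Phi, f, rcond=None)   # batch stand-in for the adaptive update

# Accurate near the orbit, where the RBFs were persistently excited...
print("max error on the orbit:", np.max(np.abs(Phi @ weights - f)))
# ...but unconstrained away from it, since those RBFs never saw data.
far_point = np.array([[0.0, 0.0]])
print("prediction at (0, 0):", (rbf_features(far_point) @ weights).item())
```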

  12. Sacred or Neural?

    DEFF Research Database (Denmark)

    Runehov, Anne Leona Cesarine

    Are religious spiritual experiences merely the product of the human nervous system? Anne L.C. Runehov investigates the potential of contemporary neuroscience to explain religious experiences. Following in the footsteps of Michael Persinger, Andrew Newberg and Eugene d'Aquili, she defines... the terminological boundaries of "religious experiences" and explores the relevant criteria for the proper evaluation of scientific research, with a particular focus on the validity of reductionist models. Runehov's thesis is that the perspectives looked at do not necessarily exclude each other but can be merged... The question "sacred or neural?" becomes a statement "sacred and neural". The synergies thus produced provide manifold opportunities for interdisciplinary dialogue and research...

  13. On the neural mechanisms subserving consciousness and attention

    Directory of Open Access Journals (Sweden)

    Catherine eTallon-Baudry

    2012-01-01

    Full Text Available Consciousness, as described in the experimental literature, is a multi-faceted phenomenon, that impinges on other well-studied concepts such as attention and control. Do consciousness and attention refer to different aspects of the same core phenomenon, or do they correspond to distinct functions? One possibility to address this question is to examine the neural mechanisms underlying consciousness and attention. If consciousness and attention pertain to the same concept, they should rely on shared neural mechanisms. Conversely, if their underlying mechanisms are distinct, then consciousness and attention should be considered as distinct entities. This paper therefore reviews neurophysiological facts arguing in favor or against a tight relationship between consciousness and attention. Three neural mechanisms that have been associated with both attention and consciousness are examined (neural amplification, involvement of the fronto-parietal network, and oscillatory synchrony, to conclude that the commonalities between attention and consciousness at the neural level may have been overestimated. Last but not least, experiments in which both attention and consciousness were probed at the neural level point toward a dissociation between the two concepts. It therefore appears from this review that consciousness and attention rely on distinct neural properties, although they can interact at the behavioral level. It is proposed that a "cumulative influence model", in which attention and consciousness correspond to distinct neural mechanisms feeding a single decisional process leading to behavior, fits best with available neural and behavioral data. In this view, consciousness should not be considered as a top-level executive function but should rather be defined by its experiential properties.

  14. Adaptive nonlinear control using input normalized neural networks

    International Nuclear Information System (INIS)

    Leeghim, Henzeh; Seo, In Ho; Bang, Hyo Choong

    2008-01-01

    An adaptive feedback linearization technique combined with the neural network is addressed to control uncertain nonlinear systems. The neural network-based adaptive control theory has been widely studied. However, the stability analysis of the closed-loop system with the neural network is rather complicated and difficult to understand, and sometimes unnecessary assumptions are involved. As a result, unnecessary assumptions for stability analysis are avoided by using the neural network with input normalization technique. The ultimate boundedness of the tracking error is simply proved by the Lyapunov stability theory. A new simple update law as an adaptive nonlinear control is derived by the simplification of the input normalized neural network assuming the variation of the uncertain term is sufficiently small

  15. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  16. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  17. Efficient computation in adaptive artificial spiking neural networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); R.B.P. Nusselder (Roeland); H.S. Scholte; S.M. Bohte (Sander)

    2017-01-01

    textabstractArtificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of

  18. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and its use as a data classifier. The concept is then presented in a second part in more mathematical detail, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools for event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  19. Interpretable neural networks with BP-SOM

    NARCIS (Netherlands)

    Weijters, A.J.M.M.; Bosch, van den A.P.J.; Pobil, del A.P.; Mira, J.; Ali, M.

    1998-01-01

    Artificial Neural Networks (ANNs) are used successfully in industry and commerce. This is not surprising since neural networks are especially competitive for complex tasks for which insufficient domain-specific knowledge is available. However, interpretation of models induced by ANNs is often

  20. A high-speed analog neural processor

    NARCIS (Netherlands)

    Masa, P.; Masa, Peter; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    Targeted at high-energy physics research applications, our special-purpose analog neural processor can classify up to 70 dimensional vectors within 50 nanoseconds. The decision-making process of the implemented feedforward neural network enables this type of computation to tolerate weight

  1. Neural cryptography with feedback.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Shacham, Lanir; Kanter, Ido

    2004-04-01

    Neural cryptography is based on a competition between attractive and repulsive stochastic forces. A feedback mechanism is added to neural cryptography which increases the repulsive forces. Using numerical simulations and an analytic approach, the probability of a successful attack is calculated for different model parameters. Scaling laws are derived which show that feedback improves the security of the system. In addition, a network with feedback generates a pseudorandom bit sequence which can be used to encrypt and decrypt a secret message.
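
    For readers unfamiliar with the underlying protocol, the sketch below implements plain tree parity machine synchronization by mutual learning (Hebbian updates applied only when the two outputs agree), which is the mechanism neural cryptography builds on. The feedback mechanism analysed in the paper is deliberately not included, and K, N and L are arbitrary small values.

```python
# Minimal sketch of tree parity machine (TPM) synchronization, the basis of neural
# cryptography. The feedback mechanism studied in the paper is NOT included here.
import numpy as np

K, N, L = 3, 10, 3               # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(5)

class TPM:
    def __init__(self):
        self.w = rng.integers(-L, L + 1, (K, N))   # integer weights in [-L, L]

    def output(self, x):
        self.sigma = np.sign((self.w * x).sum(axis=1))
        self.sigma[self.sigma == 0] = -1
        return int(np.prod(self.sigma))            # tau: product of hidden outputs

    def update(self, x, tau):
        # Hebbian rule: only hidden units agreeing with the common output move.
        for k in range(K):
            if self.sigma[k] == tau:
                self.w[k] = np.clip(self.w[k] + self.sigma[k] * x[k], -L, L)

a, b = TPM(), TPM()
steps = 0
while not np.array_equal(a.w, b.w):
    x = rng.choice([-1, 1], (K, N))   # public random inputs shared by both parties
    ta, tb = a.output(x), b.output(x)
    if ta == tb:                      # mutual learning only on agreement
        a.update(x, ta)
        b.update(x, tb)
    steps += 1

print("synchronized after", steps, "exchanged inputs; shared key =", a.w.flatten())
```

    The identical final weight matrices serve as the shared secret; an attacker who can only listen cannot apply the mutual update and therefore synchronizes much more slowly, which is the security argument the feedback mechanism in the paper is designed to strengthen.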

  2. Requirement of mouse BCCIP for neural development and progenitor proliferation.

    Directory of Open Access Journals (Sweden)

    Yi-Yuan Huang

    Full Text Available Multiple DNA repair pathways are involved in the orderly development of neural systems at distinct stages. The homologous recombination (HR pathway is required to resolve stalled replication forks and critical for the proliferation of progenitor cells during neural development. BCCIP is a BRCA2 and CDKN1A interacting protein implicated in HR and inhibition of DNA replication stress. In this study, we determined the role of BCCIP in neural development using a conditional BCCIP knock-down mouse model. BCCIP deficiency impaired embryonic and postnatal neural development, causing severe ataxia, cerebral and cerebellar defects, and microcephaly. These development defects are associated with spontaneous DNA damage and subsequent cell death in the proliferative cell populations of the neural system during embryogenesis. With in vitro neural spheroid cultures, BCCIP deficiency impaired neural progenitor's self-renewal capability, and spontaneously activated p53. These data suggest that BCCIP and its anti-replication stress functions are essential for normal neural development by maintaining an orderly proliferation of neural progenitors.

  3. Neural crest cells: from developmental biology to clinical interventions.

    Science.gov (United States)

    Noisa, Parinya; Raivio, Taneli

    2014-09-01

    Neural crest cells are multipotent cells, which are specified in embryonic ectoderm in the border of neural plate and epiderm during early development by interconnection of extrinsic stimuli and intrinsic factors. Neural crest cells are capable of differentiating into various somatic cell types, including melanocytes, craniofacial cartilage and bone, smooth muscle, and peripheral nervous cells, which supports their promise for cell therapy. In this work, we provide a comprehensive review of wide aspects of neural crest cells from their developmental biology to applicability in medical research. We provide a simplified model of neural crest cell development and highlight the key external stimuli and intrinsic regulators that determine the neural crest cell fate. Defects of neural crest cell development leading to several human disorders are also mentioned, with the emphasis of using human induced pluripotent stem cells to model neurocristopathic syndromes. © 2014 Wiley Periodicals, Inc.

  4. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.

  5. A fuzzy neural network for sensor signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2000-01-01

    In this work, a fuzzy neural network is used to estimate the relevant sensor signal using other sensor signals. Noise components in input signals into the fuzzy neural network are removed through the wavelet denoising technique. Principal component analysis (PCA) is used to reduce the dimension of an input space without losing a significant amount of information. A lower dimensional input space will also usually reduce the time necessary to train a fuzzy-neural network. Also, the principal component analysis makes easy the selection of the input signals into the fuzzy neural network. The fuzzy neural network parameters are optimized by two learning methods. A genetic algorithm is used to optimize the antecedent parameters of the fuzzy neural network and a least-squares algorithm is used to solve the consequent parameters. The proposed algorithm was verified through the application to the pressurizer water level and the hot-leg flowrate measurements in pressurized water reactors
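
    The pipeline described above chains wavelet denoising, PCA dimension reduction and a fuzzy neural network tuned by a GA plus least squares. The sketch below keeps only the overall shape of that pipeline on synthetic sensor data: a moving average stands in for wavelet denoising and a plain MLP stands in for the fuzzy neural network, so it illustrates the data flow rather than the paper's method.

```python
# Hedged sketch of the signal-estimation pipeline: denoise the inputs, reduce their
# dimension with PCA, then estimate the target sensor from the others. A moving
# average and a plain MLP stand in for the paper's wavelet denoising and fuzzy
# neural network; all data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Synthetic plant: 12 correlated sensor channels plus one target sensor.
t = np.linspace(0, 50, 3000)
base = np.sin(0.3 * t) + 0.2 * np.sin(1.7 * t)
sensors = base[:, None] * rng.uniform(0.5, 1.5, 12) + rng.normal(0, 0.05, (t.size, 12))
target = 0.8 * base + rng.normal(0, 0.02, t.size)     # e.g. a water-level-like signal

def denoise(x, width=15):
    """Stand-in for wavelet denoising: centred moving average per channel."""
    kernel = np.ones(width) / width
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, x)

clean = denoise(sensors)
pca = PCA(n_components=3)                             # keep the dominant directions
features = pca.fit_transform(clean)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(features[:2000], target[:2000])
rmse = np.sqrt(np.mean((model.predict(features[2000:]) - target[2000:]) ** 2))
print("explained variance:", pca.explained_variance_ratio_.round(3), " test RMSE:", round(rmse, 4))
```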

  6. Neural networks within multi-core optic fibers.

    Science.gov (United States)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  7. Implantable Neural Interfaces for Sharks

    Science.gov (United States)

    2007-05-01

    technology for recording and stimulating from the auditory and olfactory sensory nervous systems of the awake, swimming nurse shark, G. cirratum. [Figure: overlay of the central nervous system of the nurse shark on a horizontal MR image.] ... "Neural Interfaces for Characterizing Population Responses to Odorants and Electrical Stimuli in the Nurse Shark, Ginglymostoma cirratum." AChemS Abs...

  8. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  9. Direct adaptive control using feedforward neural networks

    OpenAIRE

    Cajueiro, Daniel Oliveira; Hemerly, Elder Moreira

    2003-01-01

    ABSTRACT: This paper proposes a new scheme for direct neural adaptive control that works efficiently employing only one neural network, used for simultaneously identifying and controlling the plant. The idea behind this structure of adaptive control is to compensate the control input obtained by a conventional feedback controller. The neural network training process is carried out by using two different techniques: backpropagation and extended Kalman filter algorithm. Additionally, the conver...

  10. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    A hybrid of the two called Elman-Jordan (or Multi-recurrent) neural network is also being used. In this study, we evaluated the performance of these neural networks on three established bench mark time series prediction problems. Results from the experiments showed that Jordan neural network performed significantly ...

  11. The Challenges of Neural Mind-reading Paradigms

    Directory of Open Access Journals (Sweden)

    Oscar eVilarroya

    2013-06-01

    Full Text Available Neural mind-reading studies, based on multivariate pattern analysis (MVPA) methods, are providing exciting new studies. Some of the results obtained with these paradigms have raised high expectations, such as the possibility of creating brain reading devices. However, such hopes are based on the assumptions that: (a) the BOLD signal is a marker of neural activity; (b) the BOLD pattern identified by a MVPA is a neurally sound pattern; (c) the MVPA's feature space is a good mapping of the neural representation of a stimulus, and (d) the pattern identified by a MVPA corresponds to a representation. I examine here the challenges that still have to be met before fully accepting such assumptions.

  12. The challenges of neural mind-reading paradigms.

    Science.gov (United States)

    Vilarroya, Oscar

    2013-01-01

    Neural mind-reading studies, based on multivariate pattern analysis (MVPA) methods, are providing exciting new studies. Some of the results obtained with these paradigms have raised high expectations, such as the possibility of creating brain reading devices. However, such hopes are based on the assumptions that: (a) the BOLD signal is a marker of neural activity; (b) the BOLD pattern identified by a MVPA is a neurally sound pattern; (c) the MVPA's feature space is a good mapping of the neural representation of a stimulus, and (d) the pattern identified by a MVPA corresponds to a representation. I examine here the challenges that still have to be met before fully accepting such assumptions.

  13. Inversion of a lateral log using neural networks

    International Nuclear Information System (INIS)

    Garcia, G.; Whitman, W.W.

    1992-01-01

    In this paper a technique using neural networks is demonstrated for the inversion of a lateral log. The lateral log is simulated by a finite difference method which in turn is used as an input to a backpropagation neural network. An initial guess earth model is generated from the neural network, which is then input to a Marquardt inversion. The neural network reacts to gross and subtle data features in actual logs and produces a response inferred from the knowledge stored in the network during a training process. The neural network inversion of lateral logs is tested on synthetic and field data. Tests using field data resulted in a final earth model whose simulated lateral is in good agreement with the actual log data

  14. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost nonlinear neural networks provide us with improved reconstruction and prediction capabilities for the near wall velocity fields. Based on these results advantages and drawbacks of both approaches are discussed with an outlook toward the development of near wall models for turbulence modeling and control

  15. Convolutional over Recurrent Encoder for Neural Machine Translation

    Directory of Open Access Journals (Sweden)

    Dakwale Praveen

    2017-06-01

    Full Text Available Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English to German translation demonstrate that our approach can achieve significant improvements over a standard RNN-based baseline.
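
    A hedged PyTorch sketch of the general architecture described above: a recurrent (GRU) encoder whose per-token states are passed through additional 1-D convolutional layers to widen the context seen by the decoder. The layer sizes, kernel width and bidirectionality are illustrative choices, not the authors' configuration.

```python
# Hedged sketch (PyTorch) of a convolutional-over-recurrent encoder: GRU states are
# post-processed by 1-D convolutions over the time axis. Sizes are illustrative.
import torch
import torch.nn as nn

class ConvOverRecurrentEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=256, hidden=256, conv_channels=256, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Two convolutional layers stacked on top of the recurrent states.
        self.conv1 = nn.Conv1d(2 * hidden, conv_channels, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(conv_channels, conv_channels, kernel, padding=kernel // 2)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.embed(tokens)                      # (batch, seq_len, emb_dim)
        states, _ = self.rnn(x)                     # (batch, seq_len, 2*hidden)
        h = states.transpose(1, 2)                  # Conv1d expects (batch, channels, seq_len)
        h = torch.relu(self.conv1(h))
        h = torch.relu(self.conv2(h))
        return h.transpose(1, 2)                    # (batch, seq_len, conv_channels), fed to a decoder

if __name__ == "__main__":
    encoder = ConvOverRecurrentEncoder()
    dummy = torch.randint(0, 10000, (4, 12))        # a batch of 4 token sequences
    print(encoder(dummy).shape)                     # torch.Size([4, 12, 256])
```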

  16. Comparison of 2D and 3D neural induction methods for the generation of neural progenitor cells from human induced pluripotent stem cells

    DEFF Research Database (Denmark)

    Chandrasekaran, Abinaya; Avci, Hasan; Ochalek, Anna

    2017-01-01

    Neural progenitor cells (NPCs) from human induced pluripotent stem cells (hiPSCs) are frequently induced using 3D culture methodologies however, it is unknown whether spheroid-based (3D) neural induction is actually superior to monolayer (2D) neural induction. Our aim was to compare the efficiency......), cortical layer (TBR1, CUX1) and glial markers (SOX9, GFAP, AQP4). Electron microscopy demonstrated that both methods resulted in morphologically similar neural rosettes. However, quantification of NPCs derived from 3D neural induction exhibited an increase in the number of PAX6/NESTIN double positive cells...... the electrophysiological properties between the two induction methods. In conclusion, 3D neural induction increases the yield of PAX6+/NESTIN+ cells and gives rise to neurons with longer neurites, which might be an advantage for the production of forebrain cortical neurons, highlighting the potential of 3D neural...

  17. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high-speed performance, but the neural network cannot be used to extract knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model.

  18. Application of neural networks in CRM systems

    Directory of Open Access Journals (Sweden)

    Bojanowska Agnieszka

    2017-01-01

    Full Text Available The central aim of this study is to investigate how to apply artificial neural networks in Customer Relationship Management (CRM). The paper presents several business applications of neural networks in software systems designed to aid CRM, e.g. in deciding on the profitability of building a relationship with a given customer. Furthermore, a framework for a neural-network based CRM software tool is developed. Building beneficial relationships with customers is generating considerable interest among various businesses, and is often mentioned as one of the crucial objectives of enterprises, next to their key aim: to bring satisfactory profit. There is a growing tendency among businesses to invest in CRM systems, which together with an organisational culture of a company aid managing customer relationships. It is the sheer amount of gathered data as well as the need for constant updating and analysis of this breadth of information that may imply the suitability of neural networks for the application in question. Neural networks exhibit considerably higher computational capabilities than sequential calculations because the solution to a problem is obtained without the need for developing a special algorithm. In the majority of the CRM applications presented, neural networks serve as a managerial decision-making optimisation tool.

  19. Serotonin, neural markers and memory

    Directory of Open Access Journals (Sweden)

    Alfredo eMeneses

    2015-07-01

    Full Text Available Diverse neuropsychiatric disorders present dysfunctional memory and no effective treatment exists for them, likely as a result of the absence of neural markers associated with memory. Neurotransmitter systems and signaling pathways have been implicated in memory and dysfunctional memory; however, their role is poorly understood. Hence, neural markers and cerebral functions and dysfunctions are reviewed. To our knowledge no previous systematic works have been published addressing these issues. The interactions among behavioral tasks, control groups and molecular changes and/or pharmacological effects are mentioned. Neurotransmitter receptors and signaling pathways during normally and abnormally functioning memory are reviewed, with an emphasis on the behavioral aspects of memory. The focus is on serotonin, since it is a well-characterized neurotransmitter with multiple pharmacological tools and well-characterized downstream signaling in mammalian species. 5-HT1A, 5-HT4, 5-HT5, 5-HT6 and 5-HT7 receptors as well as SERT (serotonin transporter) seem to be useful neural markers and/or therapeutic targets. Certainly, if the mentioned evidence is replicated, then the translatability from preclinical and clinical studies to neural changes might be confirmed. Hypotheses and theories might provide appropriate limits and perspectives on the evidence.

  20. Vestibular hearing and neural synchronization.

    Science.gov (United States)

    Emami, Seyede Faranak; Daneshi, Ahmad

    2012-01-01

    Objectives. Vestibular hearing, an auditory sensitivity of the saccule in the human ear, is revealed by cervical vestibular evoked myogenic potentials (cVEMPs). The range of vestibular hearing lies in the low frequencies. Also, the amplitude of an auditory brainstem response component depends on the amount of synchronized neural activity, and the auditory nerve fibers' responses show the best synchronization at low frequencies. Thus, the aim of this study was to investigate the correlation between vestibular hearing, assessed using cVEMPs, and neural synchronization, assessed via slow wave Auditory Brainstem Responses (sABR). Study Design. This case-control survey consisted of twenty-two dizzy patients compared to twenty healthy controls. Methods. The intervention comprised Pure Tone Audiometry (PTA), acoustic impedance measurements (IA), Videonystagmography (VNG), fast wave ABR (fABR), sABR, and cVEMPs. Results. The affected ears of the dizzy patients had abnormal cVEMPs findings (insecure vestibular hearing) and abnormal sABR findings (decreased neural synchronization). Comparison of the cVEMPs at affected ears versus unaffected ears and the normal persons revealed significant differences (P < 0.05). Conclusion. Safe vestibular hearing was effective in the improvement of neural synchronization.

  1. Unjoined primary and secondary neural tubes: junctional neural tube defect, a new form of spinal dysraphism caused by disturbance of junctional neurulation.

    Science.gov (United States)

    Eibach, Sebastian; Moes, Greg; Hou, Yong Jin; Zovickian, John; Pang, Dachling

    2017-10-01

    Primary and secondary neurulation are the two known processes that form the central neuraxis of vertebrates. Human phenotypes of neural tube defects (NTDs) mostly fall into two corresponding categories consistent with the two types of developmental sequence: primary NTD features an open skin defect, an exposed, unclosed neural plate (hence an open neural tube defect, or ONTD), and an unformed or poorly formed secondary neural tube, and secondary NTD with no skin abnormality (hence a closed NTD) and a malformed conus caudal to a well-developed primary neural tube. We encountered three cases of a previously unrecorded form of spinal dysraphism in which the primary and secondary neural tubes are individually formed but are physically separated far apart and functionally disconnected from each other. One patient was operated on, in whom both the lumbosacral spinal cord from primary neurulation and the conus from secondary neurulation are each anatomically complete and endowed with functioning segmental motor roots tested by intraoperative triggered electromyography and direct spinal cord stimulation. The remarkable feature is that the two neural tubes are unjoined except by a functionally inert, probably non-neural band. The developmental error of this peculiar malformation probably occurs during the critical transition between the end of primary and the beginning of secondary neurulation, in a stage aptly called junctional neurulation. We describe the current knowledge concerning junctional neurulation and speculate on the embryogenesis of this new class of spinal dysraphism, which we call junctional neural tube defect.

  2. The neural signature of emotional memories in serial crimes.

    Science.gov (United States)

    Chassy, Philippe

    2017-10-01

    Neural plasticity is the process whereby semantic information and emotional responses are stored in neural networks. It is hypothesized that the neural networks built over time to encode the sexual fantasies that motivate serial killers to act should display a unique, detectable activation pattern. The pathological neural watermark hypothesis posits that such networks comprise activation of brain sites that reflect four cognitive components: autobiographical memory, sexual arousal, aggression, and control over aggression. The neural sites performing these cognitive functions have been successfully identified by previous research. The key findings are reviewed to hypothesise the typical pattern of activity that serial killers should display. Through the integration of biological findings into one framework, the neural approach proposed in this paper is in stark contrast with the many theories accounting for serial killers that offer non-medical taxonomies. The pathological neural watermark hypothesis offers a new framework to understand and detect deviant individuals. The technical and legal issues are briefly discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Enhancing neural-network performance via assortativity

    International Nuclear Information System (INIS)

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-01-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.
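
    The quantity manipulated in the record above is the degree-degree correlation (assortativity) of the network topology. The short sketch below only computes that coefficient for two standard topologies with networkx; it does not reproduce the attractor-network dynamics or the noise-robustness experiments.

```python
# Small sketch: measuring the degree-degree correlation (assortativity) that the
# record relates to noise robustness. This computes only the topological quantity,
# not the paper's attractor-network simulations.
import networkx as nx

scale_free = nx.barabasi_albert_graph(1000, 3, seed=0)    # hub-dominated topology
random_net = nx.gnp_random_graph(1000, 0.006, seed=0)     # Erdos-Renyi reference

for name, g in [("scale-free", scale_free), ("random", random_net)]:
    r = nx.degree_assortativity_coefficient(g)
    print(f"{name:10s}  nodes={g.number_of_nodes()}  assortativity r={r:+.3f}")

# Rewiring a graph toward positive r (assortative mixing) while preserving its
# degree sequence is one way to test the robustness effect described above.
```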

  4. Permutation parity machines for neural synchronization

    International Nuclear Information System (INIS)

    Reyes, O M; Kopitzke, I; Zimmermann, K-H

    2009-01-01

    Synchronization of neural networks has been studied in recent years as an alternative to cryptographic applications such as the realization of symmetric key exchange protocols. This paper presents a first view of the so-called permutation parity machine, an artificial neural network proposed as a binary variant of the tree parity machine. The dynamics of the synchronization process by mutual learning between permutation parity machines is analytically studied and the results are compared with those of tree parity machines. It will turn out that for neural synchronization, permutation parity machines form a viable alternative to tree parity machines

  5. Neural codes of seeing architectural styles.

    Science.gov (United States)

    Choo, Heeyoung; Nasar, Jack L; Nikrahei, Bardia; Walther, Dirk B

    2017-01-10

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.

  6. Mode Choice Modeling Using Artificial Neural Networks

    OpenAIRE

    Edara, Praveen Kumar

    2003-01-01

    Artificial intelligence techniques have produced excellent results in many diverse fields of engineering. Techniques such as neural networks and fuzzy systems have found their way into transportation engineering. In recent years, neural networks are being used instead of regression techniques for travel demand forecasting purposes. The basic reason lies in the fact that neural networks are able to capture complex relationships and learn from examples and also able to adapt when new data becom...

  7. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others]

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  8. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  9. Single-site neural tube closure in human embryos revisited.

    Science.gov (United States)

    de Bakker, Bernadette S; Driessen, Stan; Boukens, Bastiaan J D; van den Hoff, Maurice J B; Oostra, Roelof-Jan

    2017-10-01

    Since the multi-site closure theory was first proposed in 1991 as an explanation for the preferential localizations of neural tube defects, the closure of the neural tube has been debated. Although the multi-site closure theory is much cited in clinical literature, single-site closure is most apparent in literature concerning embryology. Inspired by Victor Hamburger's (1900-2001) statement that "our real teacher has been and still is the embryo, who is, incidentally, the only teacher who is always right", we decided to critically review both theories of neural tube closure. To verify the theories of closure, we studied serial histological sections of 10 mouse embryos between 8.5 and 9.5 days of gestation and 18 human embryos of the Carnegie collection between Carnegie stage 9 (19-21 days) and 13 (28-32 days). Neural tube closure was histologically defined by the neuroepithelial remodeling of the two adjoining neural fold tips in the midline. We did not observe multiple fusion sites in either mouse or human embryos. A meta-analysis of case reports on neural tube defects showed that defects can occur at any level of the neural axis. Our data indicate that the human neural tube fuses at a single site and, therefore, we propose to reinstate the single-site closure theory for neural tube closure. We showed that neural tube defects are not restricted to a specific location, thereby refuting the reasoning underlying the multi-site closure theory. Clin. Anat. 30:988-999, 2017. © 2017 Wiley Periodicals, Inc.

  10. Neural decoding of collective wisdom with multi-brain computing.

    Science.gov (United States)

    Eckstein, Miguel P; Das, Koel; Pham, Binh T; Peterson, Matthew F; Abbey, Craig K; Sy, Jocelyn L; Giesbrecht, Barry

    2012-01-02

    Group decisions and even aggregation of multiple opinions lead to greater decision accuracy, a phenomenon known as collective wisdom. Little is known about the neural basis of collective wisdom and whether its benefits arise in late decision stages or in early sensory coding. Here, we use electroencephalography and multi-brain computing with twenty humans making perceptual decisions to show that combining neural activity across brains increases decision accuracy paralleling the improvements shown by aggregating the observers' opinions. Although the largest gains result from an optimal linear combination of neural decision variables across brains, a simpler neural majority decision rule, ubiquitous in human behavior, results in substantial benefits. In contrast, an extreme neural response rule, akin to a group following the most extreme opinion, results in the least improvement with group size. Analyses controlling for number of electrodes and time-points while increasing number of brains demonstrate unique benefits arising from integrating neural activity across different brains. The benefits of multi-brain integration are present in neural activity as early as 200 ms after stimulus presentation in lateral occipital sites and no additional benefits arise in decision related neural activity. Sensory-related neural activity can predict collective choices reached by aggregating individual opinions, voting results, and decision confidence as accurately as neural activity related to decision components. Estimation of the potential for the collective to execute fast decisions by combining information across numerous brains, a strategy prevalent in many animals, shows large time-savings. Together, the findings suggest that for perceptual decisions the neural activity supporting collective wisdom and decisions arises in early sensory stages and that many properties of collective cognition are explainable by the neural coding of information across multiple brains. Finally

  11. Neural network signal understanding for instrumentation

    DEFF Research Database (Denmark)

    Pau, L. F.; Johansen, F. S.

    1990-01-01

    understanding research is surveyed, and the selected implementation and its performance in terms of correct classification rates and robustness to noise are described. Formal results on neural net training time and sensitivity to weights are given. A theory for neural control using functional link nets is given...

  12. Neural Control of the Immune System

    Science.gov (United States)

    Sundman, Eva; Olofsson, Peder S.

    2014-01-01

    Neural reflexes support homeostasis by modulating the function of organ systems. Recent advances in neuroscience and immunology have revealed that neural reflexes also regulate the immune system. Activation of the vagus nerve modulates leukocyte cytokine production and alleviates experimental shock and autoimmune disease, and recent data have…

  13. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
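
    The sketch below illustrates, under loose assumptions, the kind of count-rate-to-dose regression described above: a small feed-forward network trained by plain gradient descent on synthetic data. The number of spheres, the dose quantities and all values are placeholders; the actual work used 187 measured spectra, the UTA4 response matrix and Matlab.

```python
# Hedged sketch of a count-rate -> dose mapping with a tiny feed-forward net.
# Everything below (sizes, data, learning rate) is illustrative, not the study's setup.
import numpy as np

rng = np.random.default_rng(1)
n_spheres, n_doses = 7, 3                              # e.g. 7 Bonner spheres, 3 dose quantities
X = rng.uniform(0.1, 1.0, size=(187, n_spheres))       # surrogate count rates
true_map = rng.uniform(size=(n_spheres, n_doses))
Y = X @ true_map                                       # surrogate "doses"

W1 = rng.normal(scale=0.1, size=(n_spheres, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=(10, n_doses));  b2 = np.zeros(n_doses)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                           # hidden layer
    P = H @ W2 + b2                                    # predicted doses
    err = P - Y                                        # squared-error gradient
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = err @ W2.T * (1 - H**2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

chi2 = np.sum((P - Y) ** 2 / (Y + 1e-9))               # crude chi-square style check
print("residual chi2-like statistic:", round(float(chi2), 4))
```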

  14. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A.; Gallego, E.; Lorente, A.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  15. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  16. Neural correlates of HIV risk feelings.

    Science.gov (United States)

    Häcker, Frank E K; Schmälzle, Ralf; Renner, Britta; Schupp, Harald T

    2015-04-01

    Field studies on HIV risk perception suggest that people rely on impressions they have about the safety of their partner. The present fMRI study investigated the neural correlates of the intuitive perception of risk. First, during an implicit condition, participants viewed a series of unacquainted persons and performed a task unrelated to HIV risk. In the following explicit condition, participants evaluated the HIV risk for each presented person. Contrasting responses for high and low HIV risk revealed that risky stimuli evoked enhanced activity in the anterior insula and medial prefrontal regions, which are involved in salience processing and frequently activated by threatening and negative affect-related stimuli. Importantly, neural regions responding to explicit HIV risk judgments were also enhanced in the implicit condition, suggesting a neural mechanism for intuitive impressions of riskiness. Overall, these findings suggest the saliency network as neural correlate for the intuitive sensing of risk. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  17. Artificial neural networks for plasma spectroscopy analysis

    International Nuclear Information System (INIS)

    Morgan, W.L.; Larsen, J.T.; Goldstein, W.H.

    1992-01-01

    Artificial neural networks have been applied to a variety of signal processing and image recognition problems. Of the several common neural models the feed-forward, back-propagation network is well suited for the analysis of scientific laboratory data, which can be viewed as a pattern recognition problem. The authors present a discussion of the basic neural network concepts and illustrate its potential for analysis of experiments by applying it to the spectra of laser produced plasmas in order to obtain estimates of electron temperatures and densities. Although these are high temperature and density plasmas, the neural network technique may be of interest in the analysis of the low temperature and density plasmas characteristic of experiments and devices in gaseous electronics

  18. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
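
    A minimal sketch of a stochastic spiking neuron in the spirit described above: leaky integration plus Bernoulli firing whose probability depends on the membrane potential. The constants are illustrative and unrelated to the paper's FPGA design.

```python
# Hedged sketch of a stochastic (probabilistically firing) spiking neuron.
import numpy as np

rng = np.random.default_rng(2)
T, dt = 1000, 1.0
tau, v_reset = 20.0, 0.0
I = 0.6 + 0.2 * rng.standard_normal(T)          # noisy input current

v, spikes = 0.0, []
for t in range(T):
    v += dt * (-v / tau + I[t])                 # leaky integration
    p_fire = 1.0 / (1.0 + np.exp(-(v - 5.0)))   # firing probability grows with v
    if rng.random() < p_fire * dt * 0.05:       # Bernoulli firing
        spikes.append(t)
        v = v_reset
print("spikes per 1000 steps:", len(spikes))
```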

  19. Chitosan derived co-spheroids of neural stem cells and mesenchymal stem cells for neural regeneration.

    Science.gov (United States)

    Han, Hao-Wei; Hsu, Shan-Hui

    2017-10-01

    Chitosan has been considered as a candidate biomaterial for neural applications. Effective treatments for neurodegeneration or injury to the central nervous system (CNS) are still lacking. Adult neural stem cells (NSCs) represent a promising cell source to treat CNS diseases but they are limited in number. Here, we developed core-shell spheroids of NSCs (shell) and mesenchymal stem cells (MSCs, core) by co-culturing cells on the chitosan surface. The NSCs in chitosan-derived co-spheroids displayed a higher survival rate than those in NSC homo-spheroids. The direct interaction of NSCs with MSCs in the co-spheroids increased the Notch activity and differentiation tendency of NSCs. Meanwhile, the differentiation potential of MSCs in chitosan-derived co-spheroids was significantly enhanced toward neural lineages. Furthermore, NSC homo-spheroids and NSC/MSC co-spheroids derived on chitosan were evaluated for their in vivo efficacy by the embryonic and adult zebrafish brain injury models. The locomotion activity of zebrafish receiving chitosan-derived NSC homo-spheroids or NSC/MSC co-spheroids was partially rescued in both models. Meanwhile, a higher survival rate was observed in the group of adult zebrafish implanted with chitosan-derived NSC/MSC co-spheroids as compared to NSC homo-spheroids. These findings indicate that chitosan may provide an extracellular matrix-like environment to drive the interaction and the morphological assembly between NSCs and MSCs and promote their neural differentiation capacities, which can be used for neural regeneration. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks based on the gradient descent has a significant drawback of slow convergence. A Gauss-Newton method based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the applications of the RLS type algorithm to identification of nonlinear processes using a local recurrent neural network are also included in this paper.
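
    As a hedged illustration of the recursion such an algorithm builds on, the sketch below applies a plain recursive-least-squares (RLS) update to the linear output weights of a network with fixed hidden features; the paper's Gauss-Newton RLS with dynamic error backpropagation through recurrent terms is considerably more involved.

```python
# Hedged sketch of the basic RLS recursion on made-up data.
import numpy as np

rng = np.random.default_rng(3)
n_feat = 5
w = np.zeros(n_feat)
P = np.eye(n_feat) * 1e3                        # inverse-correlation estimate
lam = 0.99                                      # forgetting factor

w_true = rng.normal(size=n_feat)
for _ in range(500):
    h = rng.normal(size=n_feat)                 # hidden-layer feature vector
    y = w_true @ h + 0.01 * rng.standard_normal()
    k = P @ h / (lam + h @ P @ h)               # gain vector
    e = y - w @ h                               # a priori error
    w = w + k * e                               # weight update
    P = (P - np.outer(k, h @ P)) / lam          # covariance update

print("max weight error:", float(np.max(np.abs(w - w_true))))
```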

  1. The quest for a Quantum Neural Network

    OpenAIRE

    Schuld, M.; Sinayskiy, I.; Petruccione, F.

    2014-01-01

    With the overwhelming success in the field of quantum information in the last decades, the "quest" for a Quantum Neural Network (QNN) model began in order to combine quantum computing with the striking properties of neural computing. This article presents a systematic approach to QNN research, which so far consists of a conglomeration of ideas and proposals. It outlines the challenge of combining the nonlinear, dissipative dynamics of neural computing and the linear, unitary dynamics of quant...

  2. Neural Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — As part of the Electrical and Computer Engineering Department and The Institute for System Research, the Neural Systems Laboratory studies the functionality of the...

  3. The Neural Basis of and a Common Neural Circuitry in Different Types of Pro-social Behavior

    Directory of Open Access Journals (Sweden)

    Jun Luo

    2018-06-01

    Full Text Available Pro-social behaviors are voluntary behaviors that benefit other people or society as a whole, such as charitable donations, cooperation, trust, altruistic punishment, and fairness. These behaviors have been widely described through non self-interest decision-making in behavioral experimental studies and are thought to be increased by social preference motives. Importantly, recent studies using a combination of neuroimaging and brain stimulation, designed to reveal the neural mechanisms of pro-social behaviors, have found that a wide range of brain areas, specifically the prefrontal cortex, anterior insula, anterior cingulate cortex, and amygdala, are correlated or causally related with pro-social behaviors. In this review, we summarize the research on the neural basis of various kinds of pro-social behaviors and describe a common shared neural circuitry of these pro-social behaviors. We introduce several general ways in which experimental economics and neuroscience can be combined to develop important contributions to understanding social decision-making and pro-social behaviors. Future research should attempt to explore the neural circuitry between the frontal lobes and deeper brain areas.

  4. Neural crest stem cell multipotency requires Foxd3 to maintain neural potential and repress mesenchymal fates.

    Science.gov (United States)

    Mundell, Nathan A; Labosky, Patricia A

    2011-02-01

    Neural crest (NC) progenitors generate a wide array of cell types, yet molecules controlling NC multipotency and self-renewal and factors mediating cell-intrinsic distinctions between multipotent versus fate-restricted progenitors are poorly understood. Our earlier work demonstrated that Foxd3 is required for maintenance of NC progenitors in the embryo. Here, we show that Foxd3 mediates a fate restriction choice for multipotent NC progenitors with loss of Foxd3 biasing NC toward a mesenchymal fate. Neural derivatives of NC were lost in Foxd3 mutant mouse embryos, whereas abnormally fated NC-derived vascular smooth muscle cells were ectopically located in the aorta. Cranial NC defects were associated with precocious differentiation towards osteoblast and chondrocyte cell fates, and individual mutant NC from different anteroposterior regions underwent fate changes, losing neural and increasing myofibroblast potential. Our results demonstrate that neural potential can be separated from NC multipotency by the action of a single gene, and establish novel parallels between NC and other progenitor populations that depend on this functionally conserved stem cell protein to regulate self-renewal and multipotency.

  5. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  6. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 till the present. The neural network Fortran code used is available for download.
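
    A hedged sketch of the kind of fit described above: inputs (latitude, pressure, day of year, CH4 v.m.r.) mapped to N2O by a network with one hidden layer of eight nodes. The original work used Quickprop in Fortran; here scikit-learn's standard MLP stands in, and the training data are synthetic placeholders rather than HALOE observations.

```python
# Hedged sketch of a tracer-correlation fit with a small MLP on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n = 2000
lat = rng.uniform(-90, 90, n)
pressure = rng.uniform(1, 300, n)               # hPa
doy = rng.uniform(0, 365, n)                    # day of year
ch4 = rng.uniform(0.6, 1.8, n)                  # ppmv
# fabricated stand-in for the CH4-N2O relationship, for illustration only
n2o = 320 * (ch4 / 1.8) ** 1.5 + 0.02 * lat * np.sin(2 * np.pi * doy / 365)

X = np.column_stack([lat, pressure, doy, ch4])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, n2o)
print("correlation (predicted vs. target):",
      round(float(np.corrcoef(model.predict(X), n2o)[0, 1]), 4))
```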

  7. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-time prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, and the current day of the week is presented. The network developed is used for forecasting movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  8. Inverting radiometric measurements with a neural network

    Science.gov (United States)

    Measure, Edward M.; Yee, Young P.; Balding, Jeff M.; Watkins, Wendell R.

    1992-02-01

    A neural network scheme for retrieving remotely sensed vertical temperature profiles was applied to observed ground based radiometer measurements. The neural network used microwave radiance measurements and surface measurements of temperature and pressure as inputs. Because the microwave radiometer is capable of measuring 4 oxygen channels at 5 different elevation angles (9, 15, 25, 40, and 90 degs), 20 microwave measurements are potentially available. Because these measurements have considerable redundancy, a neural network was experimented with, accepting as inputs microwave measurements taken at 53.88 GHz, 40 deg; 57.45 GHz, 40 deg; and 57.45, 90 deg. The primary test site was located at White Sands Missile Range (WSMR), NM. Results are compared with measurements made simultaneously with balloon borne radiosonde instruments and with radiometric temperature retrievals made using more conventional retrieval algorithms. The neural network was trained using a Widrow-Hoff delta rule procedure. Functions of date to include season dependence in the retrieval process and functions of time to include diurnal effects were used as inputs to the neural network.
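
    A minimal sketch of a Widrow-Hoff (LMS) retrieval in the spirit of the study above: a single linear unit maps three microwave brightness temperatures plus surface temperature and pressure to a temperature value. All numbers are synthetic stand-ins, not WSMR data.

```python
# Hedged sketch of the Widrow-Hoff delta rule on synthetic radiometer-like data.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([
    rng.normal(260, 5, n),                      # 53.88 GHz, 40 deg brightness temp (fake)
    rng.normal(255, 5, n),                      # 57.45 GHz, 40 deg (fake)
    rng.normal(250, 5, n),                      # 57.45 GHz, 90 deg (fake)
    rng.normal(290, 8, n),                      # surface temperature (fake)
    rng.normal(880, 10, n),                     # surface pressure, hPa (fake)
])
w_true = np.array([0.3, 0.2, 0.1, 0.35, 0.01])
y = X @ w_true + rng.normal(0, 0.5, n)          # "true" level temperature

Xn = (X - X.mean(0)) / X.std(0)                 # normalize for stable LMS steps
yn = (y - y.mean()) / y.std()
w, lr = np.zeros(X.shape[1]), 0.01
for xi, yi in zip(Xn, yn):
    w += lr * (yi - w @ xi) * xi                # Widrow-Hoff delta rule
print("RMS error (normalized):", round(float(np.sqrt(np.mean((Xn @ w - yn) ** 2))), 3))
```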

  9. The principles of artificial neural network information processing

    International Nuclear Information System (INIS)

    Dai, Ru-Wei

    1993-01-01

    In this article, the basic structure of an artificial neuron is first introduced. In addition, principles of artificial neural network as well as several important artificial neural models such as Perceptron, Back propagation model, Hopfield net, and ART model are briefly discussed and analyzed. Finally, the application of artificial neural network for Chinese Character Recognition is also given. (author)

  10. The principles of artificial neural network information processing

    International Nuclear Information System (INIS)

    Dai, Ru-Wei

    1993-01-01

    In this article, the basic structure of an artificial neuron is first introduced. In addition, principles of artificial neural network as well as several important artificial neural models such as Perceptron, back propagation model, Hopfield net, and ART model are briefly discussed and analyzed. Finally, the application of artificial neural network for Chinese character recognition is also given. (author)

  11. Genetic learning in rule-based and neural systems

    Science.gov (United States)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production-system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.
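
    A minimal sketch of the genetic-algorithm mechanics referred to above (selection, crossover, mutation) on a toy bit-string problem; it is not the learning classifier system itself, and all parameters are invented.

```python
# Hedged sketch of a simple genetic algorithm on the "one-max" bit-string problem.
import numpy as np

rng = np.random.default_rng(6)
pop_size, n_bits, n_gen = 40, 20, 60
pop = rng.integers(0, 2, size=(pop_size, n_bits))

def fitness(pop):
    return pop.sum(axis=1)                      # count of 1 bits

for _ in range(n_gen):
    f = fitness(pop)
    # tournament selection: pick the fitter of two random individuals
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(f[idx[:, 0]] >= f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # one-point crossover between consecutive parents
    cut = rng.integers(1, n_bits, size=pop_size // 2)
    children = parents.copy()
    for i, c in enumerate(cut):
        a, b = 2 * i, 2 * i + 1
        children[a, c:], children[b, c:] = parents[b, c:], parents[a, c:]
    # bit-flip mutation
    mask = rng.random(children.shape) < 0.01
    pop = np.where(mask, 1 - children, children)

print("best fitness:", int(fitness(pop).max()), "of", n_bits)
```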

  12. Neural complexity: A graph theoretic interpretation

    Science.gov (United States)

    Barnett, L.; Buckley, C. L.; Bullock, S.

    2011-04-01

    One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end, Tononi [Proc. Natl. Acad. Sci. USA 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property based on mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information theoretic metrics developed to measure the balance between the segregation and integration of a system’s dynamics. One key question arising for such measures involves understanding how they are influenced by network topology. Sporns [Cereb. Cortex 10, 127 (2000)] employed numerical models in order to determine the dependence of neural complexity on the topological features of a network. However, a complete picture has yet to be established. While De Lucia [Phys. Rev. E 71, 016114 (2005)] made the first attempts at an analytical account of this relationship, their work utilized a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular, we explicitly establish a dependency of neural complexity on cyclic graph motifs.
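
    For reference, the neural-complexity measure referred to above is usually written as below. This is reconstructed from memory of the Tononi-Sporns-Edelman definition, so the notation is an assumption rather than a quotation from the paper: X is the n-element system, X_j^k its j-th subset of size k, H the entropy, I the integration (multi-information), and MI the mutual information between a subset and its complement.

```latex
% Hedged sketch of the neural-complexity definition (not quoted from the paper).
% Angle brackets denote the average over all subsets of size k.
\[
  I(X) \;=\; \sum_{i=1}^{n} H(x_i) \;-\; H(X),
  \qquad
  C_N(X) \;=\; \sum_{k=1}^{\lfloor n/2 \rfloor}
      \Bigl\langle \operatorname{MI}\!\bigl(X_j^{k};\, X \setminus X_j^{k}\bigr) \Bigr\rangle_{j}.
\]
```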

  13. Artificial Neural Networks and the Mass Appraisal of Real Estate

    Directory of Open Access Journals (Sweden)

    Gang Zhou

    2018-03-01

    Full Text Available With the rapid development of computer, artificial intelligence and big data technology, artificial neural networks have become one of the most powerful machine learning algorithms. In practice, most applications of artificial neural networks use the back propagation neural network and its variations. Besides the back propagation neural network, various neural networks have been developed in order to improve the performance of standard models. Though neural networks are a well-known method in real estate research, there is enormous space for future research to enhance their function. Some scholars combine genetic algorithms, geospatial information, support vector machine models and particle swarm optimization with artificial neural networks to appraise real estate, which is helpful for the existing appraisal technology. The mass appraisal of real estate in this paper includes the real estate valuation in the transaction and the tax base valuation in the real estate holding. In this study we focus on the theoretical development of artificial neural networks and mass appraisal of real estate, artificial neural network model evolution and algorithm improvement, artificial neural network practice and application, and review the existing literature about artificial neural networks and mass appraisal of real estate. Finally, we provide some suggestions for the mass appraisal of China's real estate.

  14. A theory of how active behavior stabilises neural activity: Neural gain modulation by closed-loop environmental feedback.

    Directory of Open Access Journals (Sweden)

    Christopher L Buckley

    2018-01-01

    Full Text Available During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, for perceptual processes. This closed-loop feedback is mediated by brainwide neural circuits but how the presence of feedback signals impacts on the dynamics and function of neurons is not well understood. Here we present a simple theory suggesting that closed-loop feedback between the brain/body/environment can modulate neural gain and, consequently, change endogenous neural fluctuations and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking we show that negative feedback mediated by whisking vibrissa can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this suppression provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. Moreover, this mechanism suggests a novel signal detection mechanism that selectively accentuates active, rather than passive, whisker touch signals. This mechanism is consistent with a predictive coding strategy that is sensitive to the consequences of motor actions rather than the difference between the predicted and actual sensory input. We further support the theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour in a virtual swim simulator. We show, as predicted by this theory, that the degree to which each cell contributes in linking sensory and motor signals well explains how much its neural fluctuations are suppressed by closed-loop optomotor behaviour. More generally we argue that our results

  15. A theory of how active behavior stabilises neural activity: Neural gain modulation by closed-loop environmental feedback.

    Science.gov (United States)

    Buckley, Christopher L; Toyoizumi, Taro

    2018-01-01

    During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, for perceptual processes. This closed-loop feedback is mediated by brainwide neural circuits but how the presence of feedback signals impacts on the dynamics and function of neurons is not well understood. Here we present a simple theory suggesting that closed-loop feedback between the brain/body/environment can modulate neural gain and, consequently, change endogenous neural fluctuations and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking we show that negative feedback mediated by whisking vibrissa can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this suppression provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. Moreover, this mechanism suggests a novel signal detection mechanism that selectively accentuates active, rather than passive, whisker touch signals. This mechanism is consistent with a predictive coding strategy that is sensitive to the consequences of motor actions rather than the difference between the predicted and actual sensory input. We further support the theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour in a virtual swim simulator. We show, as predicted by this theory, that the degree to which each cell contributes in linking sensory and motor signals well explains how much its neural fluctuations are suppressed by closed-loop optomotor behaviour. More generally we argue that our results demonstrate the dependence

  16. Novel paths towards neural cellular products for neurological disorders.

    Science.gov (United States)

    Daadi, Marcel M

    2011-11-01

    The prospect of using neural cells derived from stem cells or from reprogrammed adult somatic cells provides a unique opportunity in cell therapy and drug discovery for developing novel strategies for brain repair. Cell-based therapeutic approaches for treating CNS afflictions caused by disease or injury aim to promote structural repair of the injured or diseased neural tissue, an outcome currently not achieved by drug therapy. Preclinical research in animal models of various diseases or injuries report that grafts of neural cells enhance endogenous repair, provide neurotrophic support to neurons undergoing degeneration and replace lost neural cells. In recent years, the sources of neural cells for treating neurological disorders have been rapidly expanding and in addition to offering therapeutic potential, neural cell products hold promise for disease modeling and drug discovery use. Specific neural cell types have been derived from adult or fetal brain, from human embryonic stem cells, from induced pluripotent stem cells and directly transdifferentiated from adult somatic cells, such as skin cells. It is yet to be determined if the latter approach will evolve into a paradigm shift in the fields of stem cell research and regenerative medicine. These multiple sources of neural cells cover a wide spectrum of safety that needs to be balanced with efficacy to determine the viability of the cellular product. In this article, we will review novel sources of neural cells and discuss current obstacles to developing them into viable cellular products for treating neurological disorders.

  17. Dlx proteins position the neural plate border and determine adjacent cell fates.

    Science.gov (United States)

    Woda, Juliana M; Pastagia, Julie; Mercola, Mark; Artinger, Kristin Bruk

    2003-01-01

    The lateral border of the neural plate is a major source of signals that induce primary neurons, neural crest cells and cranial placodes as well as provide patterning cues to mesodermal structures such as somites and heart. Whereas secreted BMP, FGF and Wnt proteins influence the differentiation of neural and non-neural ectoderm, we show here that members of the Dlx family of transcription factors position the border between neural and non-neural ectoderm and are required for the specification of adjacent cell fates. Inhibition of endogenous Dlx activity in Xenopus embryos with an EnR-Dlx homeodomain fusion protein expands the neural plate into non-neural ectoderm tissue whereas ectopic activation of Dlx target genes inhibits neural plate differentiation. Importantly, the stereotypic pattern of border cell fates in the adjacent ectoderm is re-established only under conditions where the expanded neural plate abuts Dlx-positive non-neural ectoderm. Experiments in which presumptive neural plate was grafted to ventral ectoderm reiterate induction of neural crest and placodal lineages and also demonstrate that Dlx activity is required in non-neural ectoderm for the production of signals needed for induction of these cells. We propose that Dlx proteins regulate intercellular signaling across the interface between neural and non-neural ectoderm that is critical for inducing and patterning adjacent cell fates.

  18. Neural reactivation links unconscious thought to decision-making performance.

    Science.gov (United States)

    Creswell, John David; Bursley, James K; Satpute, Ajay B

    2013-12-01

    Brief periods of unconscious thought (UT) have been shown to improve decision making compared with making an immediate decision (ID). We reveal a neural mechanism for UT in decision making using blood oxygen level-dependent (BOLD) functional magnetic resonance imaging. Participants (N = 33) encoded information on a set of consumer products (e.g. 48 attributes describing four different cars), and we manipulated whether participants (i) consciously thought about this information (conscious thought), (ii) completed a difficult 2-back working memory task (UT) or (iii) made an immediate decision about the consumer products (ID) in a within-subjects blocked design. To differentiate UT neural activity from 2-back working memory neural activity, participants completed an independent 2-back task and this neural activity was subtracted from neural activity occurring during the UT 2-back task. Consistent with a neural reactivation account, we found that the same regions activated during the encoding of complex decision information (right dorsolateral prefrontal cortex and left intermediate visual cortex) continued to be activated during a subsequent 2-min UT period. Moreover, neural reactivation in these regions was predictive of subsequent behavioral decision-making performance after the UT period. These results provide initial evidence for post-encoding unconscious neural reactivation in facilitating decision making.

  19. Modeling Broadband Microwave Structures by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Otevrel

    2004-06-01

    Full Text Available The paper describes the exploitation of feed-forward neural networks and recurrent neural networks for replacing full-wave numerical models of microwave structures in complex microwave design tools. Building a neural model, attention is turned to the modeling accuracy and to the efficiency of building a model. Dealing with the accuracy, we describe a method of increasing it by successively completing a training set. Neural models are mutually compared in order to highlight their advantages and disadvantages. As a reference model for comparisons, approximations based on standard cubic splines are used. Neural models are used to replace both the time-domain numeric models and the frequency-domain ones.

  20. Neural and Fuzzy Adaptive Control of Induction Motor Drives

    International Nuclear Information System (INIS)

    Bensalem, Y.; Sbita, L.; Abdelkrim, M. N.

    2008-01-01

    This paper proposes an adaptive neural network speed control scheme for an induction motor (IM) drive. The proposed scheme consists of an adaptive neural network identifier (ANNI) and an adaptive neural network controller (ANNC). For learning the quoted neural networks, a back propagation algorithm was used to automatically adjust the weights of the ANNI and ANNC in order to minimize the performance functions. Here, the ANNI can quickly estimate the plant parameters and the ANNC is used to provide on-line identification of the command and to produce a control force, such that the motor speed can accurately track the reference command. By combining artificial neural network techniques with the fuzzy logic concept, a neural and fuzzy adaptive control scheme is developed. Fuzzy logic was used for the adaptation of the neural controller to improve the robustness of the generated command. The developed method is robust to load torque disturbances and speed target variations while ensuring precise trajectory tracking with the prescribed dynamics. The algorithm was verified by simulation and the results obtained demonstrate the effectiveness of the designed IM controller.

  1. Quantum neural networks: Current status and prospects for development

    Science.gov (United States)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.

  2. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W→qq̄, where W-bosons are produced in pp̄ reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using 'intelligent' variables in instances when the amount of training instances is limited. (orig.)

  3. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for Quark-Gluon discrimination using calorimeter data, but unfortunately I didn’t manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data was not good enough for a Convolutional Neural Network, which is designed for ’image’ recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  4. Identifying Emotions on the Basis of Neural Activation.

    Science.gov (United States)

    Kassam, Karim S; Markey, Amanda R; Cherkassky, Vladimir L; Loewenstein, George; Just, Marcel Adam

    2013-01-01

    We attempt to determine the discriminability and organization of neural activation corresponding to the experience of specific emotions. Method actors were asked to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame) while in an fMRI scanner. Using a Gaussian Naïve Bayes pooled variance classifier, we demonstrate the ability to identify specific emotions experienced by an individual at well over chance accuracy on the basis of: 1) neural activation of the same individual in other trials, 2) neural activation of other individuals who experienced similar trials, and 3) neural activation of the same individual to a qualitatively different type of emotion induction. Factor analysis identified valence, arousal, sociality, and lust as dimensions underlying the activation patterns. These results suggest a structure for neural representations of emotion and inform theories of emotional processing.
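
    A hedged sketch of a Gaussian Naive Bayes classifier with pooled (class-shared) variances, as named above; the "voxel" features and class structure are random placeholders, not fMRI data.

```python
# Hedged sketch: Gaussian Naive Bayes with a pooled variance per feature.
import numpy as np

rng = np.random.default_rng(7)
n_per_class, n_feat, n_class = 30, 50, 3
means = rng.normal(size=(n_class, n_feat))
X = np.vstack([rng.normal(means[c], 1.0, size=(n_per_class, n_feat)) for c in range(n_class)])
y = np.repeat(np.arange(n_class), n_per_class)

class_means = np.array([X[y == c].mean(0) for c in range(n_class)])
pooled_var = np.mean([X[y == c].var(0, ddof=1) for c in range(n_class)], axis=0)

def predict(x):
    # log-likelihood under a diagonal Gaussian with class mean and pooled variance
    ll = -0.5 * np.sum((x - class_means) ** 2 / pooled_var
                       + np.log(2 * np.pi * pooled_var), axis=1)
    return int(np.argmax(ll))

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print("training accuracy:", round(float(acc), 3))
```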

  5. Introduction to neural networks with electric power applications

    International Nuclear Information System (INIS)

    Wildberger, A.M.; Hickok, K.A.

    1990-01-01

    This is an introduction to the general field of neural networks with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in neural networks and to recognize those which might impact on electric power engineering. Beginning with a brief discussion of natural and artificial neurons, the characteristics of neural networks in general and how they learn, neural networks are compared with other modeling tools such as simulation and expert systems in order to provide guidance in selecting appropriate applications. In the power industry, possible applications include plant control, dispatching, and maintenance scheduling. In particular, neural networks are currently being investigated for enhancements to the Thermal Performance Advisor (TPA) which General Physics Corporation (GP) has developed to improve the efficiency of electric power generation

  6. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available One of the most important tasks in the machine learning area is classification, and neural networks are very important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in our paper. After summarizing the theoretical construction of the model, we conduct a simple and practical weather forecasting experiment, which shows that the recognizer accuracy reaches 100%, a promising result.

  7. The challenges of neural mind-reading paradigms

    OpenAIRE

    Vilarroya, Oscar

    2013-01-01

    Neural mind-reading studies, based on multivariate pattern analysis (MVPA) methods, are providing exciting new studies. Some of the results obtained with these paradigms have raised high expectations, such as the possibility of creating brain reading devices. However, such hopes are based on the assumptions that: (a) the BOLD signal is a marker of neural activity; (b) the BOLD pattern identified by a MVPA is a neurally sound pattern; (c) the MVPA's feature space is a good mapping of the neura...

  8. Race modulates neural activity during imitation

    Science.gov (United States)

    Losin, Elizabeth A. Reynolds; Iacoboni, Marco; Martin, Alia; Cross, Katy A.; Dapretto, Mirella

    2014-01-01

    Imitation plays a central role in the acquisition of culture. People preferentially imitate others who are self-similar, prestigious or successful. Because race can indicate a person's self-similarity or status, race influences whom people imitate. Prior studies of the neural underpinnings of imitation have not considered the effects of race. Here we measured neural activity with fMRI while European American participants imitated meaningless gestures performed by actors of their own race, and two racial outgroups, African American, and Chinese American. Participants also passively observed the actions of these actors and their portraits. Frontal, parietal and occipital areas were differentially activated while participants imitated actors of different races. More activity was present when imitating African Americans than the other racial groups, perhaps reflecting participants' reported lack of experience with and negative attitudes towards this group, or the group's lower perceived social status. This pattern of neural activity was not found when participants passively observed the gestures of the actors or simply looked at their faces. Instead, during face-viewing neural responses were overall greater for own-race individuals, consistent with prior race perception studies not involving imitation. Our findings represent a first step in elucidating neural mechanisms involved in cultural learning, a process that influences almost every aspect of our lives but has thus far received little neuroscientific study. PMID:22062193

  9. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we supervisedly train neural networks to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show a remarkable success that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
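
    A hedged sketch of the discrete winding-number computation such a network can learn, evaluated here for an SSH-like two-band chiral Hamiltonian; the model and parameters are illustrative and not taken from the paper's training set.

```python
# Hedged sketch: winding number of h(k) = (h_x(k), h_y(k)) over the Brillouin zone.
import numpy as np

def winding_number(hx, hy):
    """Accumulate wrapped angle differences of (hx, hy) around the loop."""
    phase = np.angle(hx + 1j * hy)
    dphase = np.diff(phase)
    dphase = (dphase + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return int(round(dphase.sum() / (2 * np.pi)))

k = np.linspace(-np.pi, np.pi, 501)
for t2 in (0.5, 1.5):                                 # inter-cell hopping (intra-cell = 1)
    hx = 1.0 + t2 * np.cos(k)                         # SSH-like h_x(k)
    hy = t2 * np.sin(k)
    print("t2 =", t2, "-> winding number", winding_number(hx, hy))
```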

  10. NEURAL NETWORKS FOR STOCK MARKET OPTION PRICING

    Directory of Open Access Journals (Sweden)

    Sergey A. Sannikov

    2017-03-01

    Full Text Available Introduction: The use of neural networks for non-linear models helps to understand where linear model drawbacks, caused by their specification, reveal themselves. This paper attempts to find this out. The objective of the research is to determine the meaning of “option prices calculation using neural networks”. Materials and Methods: We use two kinds of variables: endogenous variables (included in the neural network model) and variables affecting the model (permanent disturbances). Results: All data are divided into 3 sets: learning, affirming and testing. All selected variables are normalised from 0 to 1. Extreme values of income were truncated. Discussion and Conclusions: Using the 33-14-1 neural network with direct links we obtained two sets of forecasts. Optimal criteria of strategies in stock markets’ option pricing were developed.
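
    A hedged sketch of the preprocessing steps described above: extreme values are truncated and every variable is min-max scaled to [0, 1] before the learning/affirming/testing split. The series, clipping quantiles and split fractions are placeholders, not the paper's choices.

```python
# Hedged sketch of clipping, min-max scaling and a three-way data split.
import numpy as np

rng = np.random.default_rng(9)
returns = rng.standard_t(df=3, size=1000)               # heavy-tailed "income" series (fake)
lo, hi = np.quantile(returns, [0.01, 0.99])
clipped = np.clip(returns, lo, hi)                       # truncate extreme values
scaled = (clipped - clipped.min()) / (clipped.max() - clipped.min())  # 0..1

# split into learning / affirming (validation) / testing sets
n = len(scaled)
learn, affirm, test = np.split(scaled, [int(0.6 * n), int(0.8 * n)])
print("range:", float(scaled.min()), float(scaled.max()),
      "| split sizes:", len(learn), len(affirm), len(test))
```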

  11. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  12. Tensor Basis Neural Network v. 1.0 (beta)

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-28

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.
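
    A hedged sketch of the invariant-tensor-basis output structure this package implements: a network predicts scalar coefficients from invariants, and the final tensor is their linear combination with basis tensors. The basis and invariants below are toy placeholders (not the full Pope basis or this package's actual API).

```python
# Hedged sketch: prediction = sum_i g_i(invariants) * T_i over a small tensor basis.
import numpy as np

rng = np.random.default_rng(8)

def tensor_basis(S, R):
    """A few symmetric, trace-free combinations of strain S and rotation R."""
    I = np.eye(3)
    T1 = S
    T2 = S @ R - R @ S
    T3 = S @ S - np.trace(S @ S) / 3.0 * I
    return [T1, T2, T3]

def coefficients(invariants, W):
    # stand-in for the neural network: one linear layer + tanh
    return np.tanh(W @ invariants)

S = rng.normal(size=(3, 3)); S = (S + S.T) / 2; S -= np.trace(S) / 3 * np.eye(3)
R = rng.normal(size=(3, 3)); R = (R - R.T) / 2
invariants = np.array([np.trace(S @ S), np.trace(R @ R)])
W = rng.normal(scale=0.1, size=(3, 2))

g = coefficients(invariants, W)
prediction = sum(gi * Ti for gi, Ti in zip(g, tensor_basis(S, R)))
print("predicted tensor:\n", np.round(prediction, 3))
```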

  13. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:366–381, 2016 (ACML 2016). … an unprecedented scale. Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating … Keywords: Recurrent Neural Network, Recommender System, Neural Language Model, Collaborative Filtering.

  14. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  15. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  16. Neural Network Algorithm for Particle Loading

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given

  17. Memory in Neural Networks and Glasses

    NARCIS (Netherlands)

    Heerema, M.

    2000-01-01

    The thesis attempts to model a neural network in a way which, at essential points, is biologically realistic. In a biological context, the changes of the synapses of the neural network are most often described by what is called `Hebb's learning rule'. On careful analysis it is, in fact, nothing but a

  18. Introduction to spiking neural networks: Information processing, learning and applications.

    Science.gov (United States)

    Ponulak, Filip; Kasinski, Andrzej

    2011-01-01

    The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
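
    A minimal sketch of one of the plasticity rules surveyed above, pair-based spike-timing-dependent plasticity (STDP); the spike trains and constants are illustrative only.

```python
# Hedged sketch of pair-based STDP: potentiation when pre precedes post, depression otherwise.
import numpy as np

rng = np.random.default_rng(10)
pre_spikes = np.sort(rng.uniform(0, 1000, 50))          # ms (random placeholder train)
post_spikes = np.sort(rng.uniform(0, 1000, 50))

A_plus, A_minus, tau = 0.01, 0.012, 20.0
dw = 0.0
for t_pre in pre_spikes:
    for t_post in post_spikes:
        dt = t_post - t_pre
        if dt > 0:                                      # pre before post -> potentiation
            dw += A_plus * np.exp(-dt / tau)
        elif dt < 0:                                    # post before pre -> depression
            dw -= A_minus * np.exp(dt / tau)
print("net weight change from all spike pairs:", round(float(dw), 4))
```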

  19. DNA methyltransferase 3b is dispensable for mouse neural crest development.

    Directory of Open Access Journals (Sweden)

    Bridget T Jacques-Fricke

    The neural crest is a population of multipotent cells that migrates extensively throughout vertebrate embryos to form diverse structures. Mice mutant for the de novo DNA methyltransferase DNMT3b exhibit defects in two neural crest derivatives, the craniofacial skeleton and cardiac ventricular septum, suggesting that DNMT3b activity is necessary for neural crest development. Nevertheless, the requirement for DNMT3b specifically in neural crest cells, as opposed to interacting cell types, has not been determined. Using a conditional DNMT3b allele crossed to the neural crest cre drivers Wnt1-cre and Sox10-cre, neural crest DNMT3b mutants were generated. In both neural crest-specific and fully DNMT3b-mutant embryos, cranial neural crest cells exhibited only subtle migration defects, with increased numbers of dispersed cells trailing organized streams in the head. In spite of this, the resulting cranial ganglia, craniofacial skeleton, and heart developed normally when neural crest cells lacked DNMT3b. This indicates that DNMT3b is not necessary in cranial neural crest cells for their development. We conclude that defects in neural crest derivatives in DNMT3b mutant mice reflect a requirement for DNMT3b in lineages such as the branchial arch mesendoderm or the cardiac mesoderm that interact with neural crest cells during formation of these structures.

  20. Complex-valued neural networks advances and applications

    CERN Document Server

    Hirose, Akira

    2013-01-01

    Presents the latest advances in complex-valued neural networks by demonstrating the theory in a wide range of applications. Complex-valued neural networks constitute a rapidly developing neural network framework that utilizes complex arithmetic, exhibiting specific characteristics in its learning, self-organizing, and processing dynamics. They are highly suitable for processing complex amplitude, composed of amplitude and phase, which is one of the core concepts in physical systems to deal with electromagnetic, light, sonic/ultrasonic waves as well as quantum waves, namely, electron and
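
    A toy forward pass through a single complex-valued neuron is sketched below to show the complex arithmetic involved; the amplitude-phase activation used here is one common choice for such networks and is not necessarily the formulation adopted in the book, and all inputs and weights are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def amp_phase_activation(z):
            # Squash the amplitude, keep the phase (one common split activation
            # for complex-valued neurons; other choices exist).
            return np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))

        # Complex inputs carrying amplitude and phase, e.g. samples of a wave field
        x = rng.normal(size=4) * np.exp(1j * rng.uniform(0, 2 * np.pi, size=4))

        # Complex weights and bias of a single neuron
        w = rng.normal(size=4) + 1j * rng.normal(size=4)
        b = 0.1 + 0.05j

        y = amp_phase_activation(np.conj(w) @ x + b)
        print("output amplitude:", np.abs(y), "output phase (rad):", np.angle(y))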

  1. Cognitive deficits caused by prefrontal cortical and hippocampal neural disinhibition.

    Science.gov (United States)

    Bast, Tobias; Pezze, Marie; McGarrity, Stephanie

    2017-10-01

    We review recent evidence concerning the significance of inhibitory GABA transmission and of neural disinhibition, that is, deficient GABA transmission, within the prefrontal cortex and the hippocampus, for clinically relevant cognitive functions. Both regions support important cognitive functions, including attention and memory, and their dysfunction has been implicated in cognitive deficits characterizing neuropsychiatric disorders. GABAergic inhibition shapes cortico-hippocampal neural activity, and, recently, prefrontal and hippocampal neural disinhibition has emerged as a pathophysiological feature of major neuropsychiatric disorders, especially schizophrenia and age-related cognitive decline. Regional neural disinhibition, disrupting spatio-temporal control of neural activity and causing aberrant drive of projections, may disrupt processing within the disinhibited region and efferent regions. Recent studies in rats showed that prefrontal and hippocampal neural disinhibition (by local GABA antagonist microinfusion) dysregulates burst firing, which has been associated with important aspects of neural information processing. Using translational tests of clinically relevant cognitive functions, these studies showed that prefrontal and hippocampal neural disinhibition disrupts regional cognitive functions (including prefrontal attention and hippocampal memory function). Moreover, hippocampal neural disinhibition disrupted attentional performance, which does not require the hippocampus but requires prefrontal-striatal circuits modulated by the hippocampus. However, some prefrontal and hippocampal functions (including inhibitory response control) are spared by regional disinhibition. We consider conceptual implications of these findings, regarding the distinct relationships of distinct cognitive functions to prefrontal and hippocampal GABA tone and neural activity. Moreover, the findings support the proposition that prefrontal and hippocampal neural disinhibition

  2. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Science.gov (United States)

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  3. A TLD dose algorithm using artificial neural networks

    International Nuclear Information System (INIS)

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-01-01

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of the functional link network (FLN). A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input) many times. The algorithm, trained in this way, eventually becomes capable of producing its own solution to similar (but not exactly the same) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD, and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.
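
    Since the record names the functional link network concept, a hedged toy sketch of that idea follows: the raw inputs (element responses of a multi-element dosimeter) are expanded with fixed nonlinear "functional links" and a single linear output layer is fitted by least squares. The element responses, dose components and expansion functions below are invented for illustration and are not the Harshaw 8825 calibration.

        import numpy as np

        rng = np.random.default_rng(2)

        def functional_links(X):
            # Expand raw inputs with fixed nonlinear terms (illustrative choice of links)
            return np.hstack([X, X**2, np.sin(np.pi * X), np.cos(np.pi * X)])

        # Synthetic "dosimeter" data: 4 element responses -> 3 dose components
        n = 500
        X = rng.uniform(0.1, 2.0, size=(n, 4))
        true_map = rng.normal(size=(16, 3))
        Y = functional_links(X) @ true_map + 0.01 * rng.normal(size=(n, 3))

        # Train the single linear output layer of the FLN by least squares
        Phi = functional_links(X)
        W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

        # Predict dose components for a new readout
        x_new = np.array([[0.8, 1.2, 0.5, 1.7]])
        print("predicted (deep, shallow, eye) dose:", functional_links(x_new) @ W)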

  4. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks (a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.

  5. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: a few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction; implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models; the MPC algorithms based on neural multi-models (inspired by the idea of predictive control); the MPC algorithms with neural approximation with no on-line linearization; the MPC algorithms with guaranteed stability and robustness; and cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...
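
    To make the successive-linearization idea concrete, here is a heavily simplified sketch: a nonlinear scalar function stands in for a trained neural process model, it is linearized around the current operating point by finite differences, and a single control move held over the horizon is chosen by a coarse search over the resulting cost. Real suboptimal MPC as treated in the book handles constraints, full control sequences and proper neural models; none of that is shown here.

        import numpy as np

        def neural_model(y, u):
            # Stand-in for a trained one-step-ahead neural model y(k+1) = f(y(k), u(k))
            return 0.8 * y + 0.4 * np.tanh(u) + 0.05 * y * u

        def linearize(f, y0, u0, eps=1e-5):
            # Finite-difference linearization around the current operating point
            f0 = f(y0, u0)
            a = (f(y0 + eps, u0) - f0) / eps   # df/dy
            b = (f(y0, u0 + eps) - f0) / eps   # df/du
            return f0, a, b

        def mpc_step(y, u_prev, ref, horizon=10, lam=0.1):
            f0, a, b = linearize(neural_model, y, u_prev)
            best_u, best_cost = u_prev, np.inf
            for u in np.linspace(-3, 3, 241):          # coarse search over one control move
                yp, cost = y, 0.0
                for _ in range(horizon):               # predict with the *linearized* model
                    yp = f0 + a * (yp - y) + b * (u - u_prev)
                    cost += (yp - ref) ** 2
                cost += lam * (u - u_prev) ** 2        # move suppression
                if cost < best_cost:
                    best_u, best_cost = u, cost
            return best_u

        # Closed-loop simulation against the "true" nonlinear model
        y, u, ref = 0.0, 0.0, 1.0
        for k in range(30):
            u = mpc_step(y, u, ref)
            y = neural_model(y, u)
        print("final output:", round(y, 3), "setpoint:", ref)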

  6. Pax7 lineage contributions to the mammalian neural crest.

    Directory of Open Access Journals (Sweden)

    Barbara Murdoch

    Neural crest cells are vertebrate-specific multipotent cells that contribute to a variety of tissues including the peripheral nervous system, melanocytes, and craniofacial bones and cartilage. Abnormal development of the neural crest is associated with several human maladies including cleft lip/palate, aggressive cancers such as melanoma and neuroblastoma, and rare syndromes, like Waardenburg syndrome, a complex disorder involving hearing loss and pigment defects. We previously identified the transcription factor Pax7 as an early marker, and required component, for neural crest development in chick embryos. In mammals, Pax7 is also thought to play a role in neural crest development, yet the precise contribution of Pax7 progenitors to the neural crest lineage has not been determined. Here we use Cre/loxP technology in double transgenic mice to fate map the Pax7 lineage in neural crest derivatives. We find that Pax7 descendants contribute to multiple tissues including the cranial, cardiac and trunk neural crest, which in the cranial cartilage form a distinct regional pattern. The Pax7 lineage, like the Pax3 lineage, is additionally detected in some non-neural crest tissues, including a subset of the epithelial cells in specific organs. These results demonstrate a previously unappreciated widespread distribution of Pax7 descendants within and beyond the neural crest. They shed light regarding the regionally distinct phenotypes observed in Pax3 and Pax7 mutants, and provide a unique perspective into the potential roles of Pax7 during disease and development.

  7. Identifying Emotions on the Basis of Neural Activation.

    Directory of Open Access Journals (Sweden)

    Karim S Kassam

    We attempt to determine the discriminability and organization of neural activation corresponding to the experience of specific emotions. Method actors were asked to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame) while in an fMRI scanner. Using a Gaussian Naïve Bayes pooled variance classifier, we demonstrate the ability to identify specific emotions experienced by an individual at well over chance accuracy on the basis of: (1) neural activation of the same individual in other trials, (2) neural activation of other individuals who experienced similar trials, and (3) neural activation of the same individual to a qualitatively different type of emotion induction. Factor analysis identified valence, arousal, sociality, and lust as dimensions underlying the activation patterns. These results suggest a structure for neural representations of emotion and inform theories of emotional processing.
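
    The classifier named in the abstract, a Gaussian Naive Bayes classifier with variance pooled across classes, is easy to sketch. The toy features below are random stand-ins for voxel activation patterns; only the classifier logic is meant to be illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic "activation patterns": 3 classes, 40 features (stand-ins for voxels)
        n_per_class, n_feat, n_classes = 30, 40, 3
        means = rng.normal(scale=1.0, size=(n_classes, n_feat))
        X = np.vstack([means[c] + rng.normal(scale=2.0, size=(n_per_class, n_feat))
                       for c in range(n_classes)])
        y = np.repeat(np.arange(n_classes), n_per_class)

        # Gaussian Naive Bayes with a single per-feature variance pooled across classes
        class_means = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
        resid = X - class_means[y]                      # residuals about each class mean
        pooled_var = (resid ** 2).mean(axis=0) + 1e-6   # pooled over all classes
        log_prior = np.log(np.bincount(y) / len(y))

        def predict(x):
            # log p(x|c) under independent Gaussians with pooled variance, plus log prior
            ll = -0.5 * ((x - class_means) ** 2 / pooled_var).sum(axis=1) + log_prior
            return int(np.argmax(ll))

        test = means[1] + rng.normal(scale=2.0, size=n_feat)
        print("predicted class:", predict(test), "(true class: 1)")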

  8. Thermoelastic steam turbine rotor control based on neural network

    Science.gov (United States)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. Simultaneous neural networks are algorithms which can be implemented on PLC controllers. This allows for the application of neural networks to control steam turbine stress in industrial power plants.
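
    A NARX model predicts the next output from lagged outputs and lagged exogenous inputs. The sketch below fits a small feedforward network to data from an invented nonlinear difference equation standing in for the rotor thermal response; it is not the FE-based training described in the record.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)

        # Synthetic plant: y(k) depends nonlinearly on y(k-1), y(k-2) and input u(k-1)
        N = 2000
        u = rng.uniform(-1, 1, size=N)
        y = np.zeros(N)
        for k in range(2, N):
            y[k] = 0.6 * y[k-1] - 0.2 * y[k-2] + 0.5 * np.tanh(u[k-1]) + 0.01 * rng.normal()

        # Build NARX regressors: [y(k-1), y(k-2), u(k-1), u(k-2)] -> y(k)
        lags = 2
        rows = range(lags, N)
        X = np.array([[y[k-1], y[k-2], u[k-1], u[k-2]] for k in rows])
        t = np.array([y[k] for k in rows])

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X[:1500], t[:1500])
        print("one-step-ahead R^2 on held-out data:", round(model.score(X[1500:], t[1500:]), 3))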

  9. Short-term synaptic plasticity and heterogeneity in neural systems

    Science.gov (United States)

    Mejias, J. F.; Kappen, H. J.; Longtin, A.; Torres, J. J.

    2013-01-01

    We review some recent results on neural dynamics and information processing which arise when considering several biophysical factors of interest, in particular, short-term synaptic plasticity and neural heterogeneity. The inclusion of short-term synaptic plasticity leads to enhanced long-term memory capacities, a higher robustness of memory to noise, and irregularity in the duration of the so-called up cortical states. On the other hand, considering some level of neural heterogeneity in neuron models allows neural systems to optimize information transmission in rate coding and temporal coding, two strategies commonly used by neurons to encode information in many brain areas. In all these studies, analytical approximations can be made to explain the underlying dynamics of these neural systems.
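
    One widely used description of short-term synaptic plasticity is the Tsodyks-Markram depression/facilitation model; the discrete-update sketch below uses illustrative parameters and a regular presynaptic spike train, and is offered only as background to the kind of dynamics the review discusses.

        import numpy as np

        # Tsodyks-Markram style short-term plasticity (illustrative parameters)
        U = 0.2          # baseline release probability increment
        tau_d = 0.20     # recovery time constant of resources [s]
        tau_f = 0.60     # decay time constant of facilitation [s]

        spike_times = np.arange(0.05, 1.0, 0.05)   # regular 20 Hz presynaptic train

        u, x, last_t = 0.0, 1.0, 0.0
        amplitudes = []
        for t in spike_times:
            dt = t - last_t
            # Between spikes: facilitation u decays to 0, resources x recover to 1
            u = u * np.exp(-dt / tau_f)
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)
            # At the spike: facilitation jumps, then a fraction u of resources is released
            u = u + U * (1.0 - u)
            amplitudes.append(u * x)                # synaptic efficacy for this spike
            x = x - u * x
            last_t = t

        print("first vs. last PSC amplitude:", round(amplitudes[0], 3), round(amplitudes[-1], 3))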

  10. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    ... an OCR using a Neural Network classifier preceded by a set of preprocessing ... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantages and disadvantages of each technique. In [9], Khemiri ...

  11. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
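
    To show the decoding-as-classification idea in miniature, the sketch below uses a 3-bit repetition code instead of a surface code: random bit-flip errors are sampled, their syndromes computed, and a small feedforward network learns to map each syndrome to a correction. Training on sampled data makes the network favour the most likely correction per syndrome, which is the essence of the approach; the actual paper works with surface codes and far larger inputs.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(5)

        H = np.array([[1, 1, 0],
                      [0, 1, 1]])          # parity checks of the 3-bit repetition code
        p = 0.10                           # independent bit-flip probability
        n = 20000

        errors = (rng.random((n, 3)) < p).astype(int)
        syndromes = errors @ H.T % 2
        labels = errors @ np.array([4, 2, 1])      # encode each error pattern as a class id

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        clf.fit(syndromes[:15000], labels[:15000])

        # Apply the predicted correction and check whether the residual is a codeword
        pred = clf.predict(syndromes[15000:])
        corrections = (pred[:, None] >> np.array([2, 1, 0])) & 1   # class id back to bits
        residual = (errors[15000:] + corrections) % 2
        logical_ok = np.all(residual == 0, axis=1) | np.all(residual == 1, axis=1)
        print("logical success rate:", logical_ok.mean())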

  12. Artificial Neural Networks For Hadron Hadron Cross-sections

    International Nuclear Information System (INIS)

    ELMashad, M.; ELBakry, M.Y.; Tantawy, M.; Habashy, D.M.

    2011-01-01

    In recent years artificial neural networks (ANN) have emerged as a mature and viable framework with many applications in various areas. Artificial neural network theory is sometimes used to refer to a branch of computational science that uses neural networks as models to either simulate or analyze complex phenomena and/or study the principles of operation of neural networks analytically. In this work a model of hadron-hadron collisions using the ANN technique is presented; the hadron-hadron ANN model calculates the cross sections of hadron-hadron collisions. The results amply demonstrate the feasibility of this new technique in extracting the collision features and prove its effectiveness.

  13. Towards a magnetoresistive platform for neural signal recording

    Science.gov (United States)

    Sharma, P. P.; Gervasoni, G.; Albisetti, E.; D'Ercoli, F.; Monticelli, M.; Moretti, D.; Forte, N.; Rocchi, A.; Ferrari, G.; Baldelli, P.; Sampietro, M.; Benfenati, F.; Bertacco, R.; Petti, D.

    2017-05-01

    A promising strategy to get deeper insight on brain functionalities relies on the investigation of neural activities at the cellular and sub-cellular level. In this framework, methods for recording neuron electrical activity have gained interest over the years. The main technological challenges are associated with finding highly sensitive detection schemes providing considerable spatial and temporal resolution. Moreover, the possibility of performing non-invasive assays would constitute a noteworthy benefit. In this work, we present a magnetoresistive platform for the detection of the action potential propagation in neural cells. Such a platform allows, in perspective, the in vitro recording of neural signals arising from single neurons, neural networks and brain slices.

  14. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  15. Neural networks in continuous optical media

    International Nuclear Information System (INIS)

    Anderson, D.Z.

    1987-01-01

    The authors' interest is to see to what extent neural models can be implemented using continuous optical elements. Thus these optical networks represent a continuous distribution of neuronlike processors rather than a discrete collection. Most neural models have three characteristic features: interconnections; adaptivity; and nonlinearity. In their optical representation the interconnections are implemented with linear one- and two-port optical elements such as lenses and holograms. Real-time holographic media allow these interconnections to become adaptive. The nonlinearity is achieved with gain, for example, from two-beam coupling in photorefractive media or a pumped dye medium. Using these basic optical elements one can in principle construct continuous representations of a number of neural network models. The authors demonstrated two devices based on continuous optical elements: an associative memory which recalls an entire object when addressed with a partial object and a tracking novelty filter which identifies time-dependent features in an optical scene. These devices demonstrate the potential of distributed optical elements to implement more formal models of neural networks

  16. A fully implantable rodent neural stimulator

    Science.gov (United States)

    Perry, D. W. J.; Grayden, D. B.; Shepherd, R. K.; Fallon, J. B.

    2012-02-01

    The ability to electrically stimulate neural and other excitable tissues in behaving experimental animals is invaluable for both the development of neural prostheses and basic neurological research. We developed a fully implantable neural stimulator that is able to deliver two channels of intra-cochlear electrical stimulation in the rat. It is powered via a novel omni-directional inductive link and includes an on-board microcontroller with integrated radio link, programmable current sources and switching circuitry to generate charge-balanced biphasic stimulation. We tested the implant in vivo and were able to elicit both neural and behavioural responses. The implants continued to function for up to five months in vivo. While targeted to cochlear stimulation, with appropriate electrode arrays the stimulator is well suited to stimulating other neurons within the peripheral or central nervous systems. Moreover, it includes significant on-board data acquisition and processing capabilities, which could potentially make it a useful platform for telemetry applications, where there is a need to chronically monitor physiological variables in unrestrained animals.

  17. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
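
    The outcome measures quoted (area under the ROC curve, sensitivity, specificity) can be computed as below; the scores and labels here are synthetic stand-ins for the network's referable/non-referable outputs, not the Otago or Messidor data.

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(6)

        # Synthetic ground truth (1 = referable retinopathy) and network scores
        y_true = rng.integers(0, 2, size=1000)
        scores = np.clip(0.3 * rng.normal(size=1000) + 0.4 * y_true + 0.3, 0, 1)

        auc = roc_auc_score(y_true, scores)
        fpr, tpr, thresholds = roc_curve(y_true, scores)

        # Pick the operating point closest to the top-left corner of the ROC curve
        best = np.argmin(np.hypot(fpr, 1 - tpr))
        sensitivity = tpr[best]
        specificity = 1 - fpr[best]
        print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  "
              f"specificity={specificity:.3f}  threshold={thresholds[best]:.3f}")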

  18. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network is introduced as an efficient classifier among others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic cost and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost in the proposed learning method. Due to the fast convergence of the logarithmic cost function, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time-requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
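
    The general intuition behind logarithmic costs, that they often keep gradients large where squared error nearly vanishes on a saturated squashing unit, can be checked in a few lines. The single sigmoid output below is only a stand-in for the RBF-based generalized classifier network of the abstract, and the comparison illustrates the generic effect rather than the paper's exact cost.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # A badly initialized unit: target is 1 but the pre-activation is strongly negative
        z = -6.0
        p = sigmoid(z)
        target = 1.0

        # Gradient of the squared-error cost (p - t)^2 with respect to the pre-activation
        grad_squared = 2.0 * (p - target) * p * (1.0 - p)

        # Gradient of the logarithmic (cross-entropy style) cost with respect to the pre-activation
        grad_log = p - target

        print(f"output p={p:.4f}")
        print(f"squared-error gradient   = {grad_squared:.6f}   (nearly vanishes)")
        print(f"logarithmic-cost gradient = {grad_log:.6f}   (stays large)")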

  19. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the neural network attractive. A neural network is an information processing system modeled on the structure of the dynamic process. It can solve the complex/nonlinear problems quickly once trained by operating on problems using an interconnected number...

  20. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model has been widely used for the bushing model in vehicle suspension systems, it could not express the nonlinear characteristics of the bushing with respect to amplitude and frequency. An artificial neural network model was suggested to capture the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. A linear model was employed to represent the linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, where sine excitation with different frequencies and amplitudes was applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to those of the linear model under several maneuvers.
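
    A hedged sketch of the hybrid idea follows: a linear stiffness/damping term is fitted first by least squares, and a small network then learns only the remaining nonlinear residual. The synthetic force signal (a simple nonlinear term rather than true hysteresis) stands in for the rubber-bushing test data mentioned in the abstract.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)

        # Synthetic bushing test: sine displacement, force = linear part + nonlinear part
        t = np.linspace(0, 10, 4000)
        x = 0.005 * np.sin(2 * np.pi * 1.5 * t)            # displacement [m]
        v = np.gradient(x, t)                               # velocity [m/s]
        F = 2.0e5 * x + 300.0 * v + 120.0 * np.tanh(800.0 * x) + 2.0 * rng.normal(size=t.size)

        # Step 1: fit the linear stiffness/damping part by least squares
        A = np.column_stack([x, v])
        (k, c), *_ = np.linalg.lstsq(A, F, rcond=None)

        # Step 2: train a small network on the residual (the nonlinear part standing in for hysteresis)
        residual = F - (k * x + c * v)
        net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
        net.fit(A, residual)

        # Hybrid prediction = linear model + neural correction
        F_hat = k * x + c * v + net.predict(A)
        rmse = np.sqrt(np.mean((F - F_hat) ** 2))
        print(f"identified k={k:.3e} N/m, c={c:.1f} N*s/m, hybrid RMSE={rmse:.2f} N")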

  1. A wirelessly powered microspectrometer for neural probe-pin device

    Science.gov (United States)

    Choi, Sang H.; Kim, Min H.; Song, Kyo D.; Yoon, Hargsoon; Lee, Uhn

    2015-12-01

    Treatment of neurological anomalies, whether done invasively or not, places stringent demands on device functionality and size. We have developed a micro-spectrometer for use as an implantable neural probe to monitor neuro-chemistry in synapses. The micro-spectrometer, based on a NASA-invented miniature Fresnel grating, is capable of differentiating the emission spectra from various brain tissues. The micro-spectrometer meets the size requirements, and is able to probe the neuro-chemistry and suppression voltage typically associated with a neural anomaly. This neural probe-pin device (PPD) is equipped with wireless power technology (WPT) to enable operation in a continuous manner without requiring an implanted battery. The implanted neural PPD, together with a neural electronics interface and WPT, enable real-time measurement and control/feedback for remediation of neural anomalies. The design and performance of the combined PPD/WPT device for monitoring dopamine in a rat brain will be presented to demonstrate the current level of development. Future work on this device will involve the addition of an embedded expert system capable of performing semi-autonomous management of neural functions through a routine of sensing, processing, and control.

  2. Neural network models of categorical perception.

    Science.gov (United States)

    Damper, R I; Harnad, S R

    2000-05-01

    Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. In 1977, Macmillan, Kaplan, and Creelman introduced the use of signal detection theory to CP studies. Anderson and colleagues simultaneously proposed the first neural model for CP, yet this line of research has been less well explored. In this paper, we assess the ability of neural-network models of CP to predict the psychophysical performance of real observers with speech sounds and artificial/novel stimuli. We show that a variety of neural mechanisms are capable of generating the characteristics of CP. Hence, CP may not be a special mode of perception but an emergent property of any sufficiently powerful general learning system.

  3. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in a least-squares sense. The main difference from existing approaches lies in the fact that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. Considering different window sizes around a pixel simulates the multiscale operation. In the generalization phase the neural network is exposed to degraded indoor, outdoor, and satellite images, following the same steps used for the artificial circle images.

  4. Estimation of neural energy in microelectrode signals

    Science.gov (United States)

    Gaumond, R. P.; Clement, R.; Silva, R.; Sander, D.

    2004-09-01

    We considered the problem of determining the neural contribution to the signal recorded by an intracortical electrode. We developed a linear least-squares approach to determine the energy fraction of a signal attributable to an arbitrary number of autocorrelation-defined signals buried in noise. Application of the method requires estimation of the autocorrelation functions R_ap(τ) characterizing the action potential (AP) waveforms and R_n(τ) characterizing the background noise. This method was applied to the analysis of chronically implanted microelectrode signals from the motor cortex of the rat. We found that neural (AP) energy consisted of a large-signal component which grows linearly with the number of threshold-detected neural events and a small-signal component unrelated to the count of threshold-detected AP signals. The addition of pseudorandom noise to electrode signals demonstrated the algorithm's effectiveness for a wide range of noise-to-signal energy ratios (0.08 to 39). We suggest, therefore, that the method could be of use in providing a measure of neural response in situations where clearly identified spike waveforms cannot be isolated, or in providing an additional 'background' measure of microelectrode neural activity to supplement the traditional AP spike count.

  5. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied. A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  6. Comparison of 2D and 3D neural induction methods for the generation of neural progenitor cells from human induced pluripotent stem cells.

    Science.gov (United States)

    Chandrasekaran, Abinaya; Avci, Hasan X; Ochalek, Anna; Rösingh, Lone N; Molnár, Kinga; László, Lajos; Bellák, Tamás; Téglási, Annamária; Pesti, Krisztina; Mike, Arpad; Phanthong, Phetcharat; Bíró, Orsolya; Hall, Vanessa; Kitiyanant, Narisorn; Krause, Karl-Heinz; Kobolák, Julianna; Dinnyés, András

    2017-12-01

    Neural progenitor cells (NPCs) from human induced pluripotent stem cells (hiPSCs) are frequently induced using 3D culture methodologies; however, it is unknown whether spheroid-based (3D) neural induction is actually superior to monolayer (2D) neural induction. Our aim was to compare the efficiency of the 2D induction method with the 3D induction method in their ability to generate NPCs, and subsequently neurons and astrocytes. Neural differentiation was analysed at the protein level qualitatively by immunocytochemistry and quantitatively by flow cytometry for NPC (SOX1, PAX6, NESTIN), neuronal (MAP2, TUBB3), cortical layer (TBR1, CUX1) and glial markers (SOX9, GFAP, AQP4). Electron microscopy demonstrated that both methods resulted in morphologically similar neural rosettes. However, quantification of NPCs derived from 3D neural induction exhibited an increase in the number of PAX6/NESTIN double positive cells, and the derived neurons exhibited longer neurites. In contrast, 2D neural induction resulted in more SOX1 positive cells. While 2D monolayer induction resulted in slightly less mature neurons at an early stage of differentiation, the patch clamp analysis failed to reveal any significant differences in electrophysiological properties between the two induction methods. In conclusion, 3D neural induction increases the yield of PAX6+/NESTIN+ cells and gives rise to neurons with longer neurites, which might be an advantage for the production of forebrain cortical neurons, highlighting the potential of 3D neural induction, independent of the iPSCs' genetic background. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. A Possible Neural Representation of Mathematical Group Structures.

    Science.gov (United States)

    Pomi, Andrés

    2016-09-01

    Every cognitive activity has a neural representation in the brain. When humans deal with abstract mathematical structures, for instance finite groups, certain patterns of activity occur in the brain that constitute their neural representation. A formal neurocognitive theory must account for all the activities developed by our brain and provide a possible neural representation for them. Associative memories are neural network models that have a good chance of achieving a universal representation of cognitive phenomena. In this work, we present a possible neural representation of mathematical group structures based on associative memory models that store finite groups through their Cayley graphs. A context-dependent associative memory stores the transitions between elements of the group when multiplied by each generator of a given presentation of the group. Under a convenient choice of the vector basis mapping the elements of the group onto the neural activity, the input of a vector corresponding to a generator of the group collapses the context-dependent rectangular matrix into a virtual square permutation matrix that is the matrix representation of the generator. This neural representation corresponds to the regular representation of the group, in which each element is assigned a permutation matrix. This action of the generator on the memory matrix can also be seen as the dissection of the corresponding monochromatic subgraph of the Cayley graph of the group, and the adjacency matrix of this subgraph is the permutation matrix corresponding to the generator.
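
    The construction referred to here, assigning each group generator a permutation matrix via its action on the group elements (the regular representation), can be written out directly; the sketch below does this for the symmetric group S3, which is an illustrative choice rather than an example taken from the paper.

        import numpy as np
        from itertools import permutations

        # Elements of S3 as tuples; composition (g*h)(i) = g[h[i]]
        elements = list(permutations(range(3)))
        index = {g: i for i, g in enumerate(elements)}

        def compose(g, h):
            return tuple(g[h[i]] for i in range(3))

        def regular_representation(g):
            # Permutation matrix of left multiplication by g: e_h -> e_{g*h}
            M = np.zeros((len(elements), len(elements)), dtype=int)
            for j, h in enumerate(elements):
                M[index[compose(g, h)], j] = 1
            return M

        # Two standard generators: a transposition and a 3-cycle
        s = (1, 0, 2)
        r = (1, 2, 0)
        S, R = regular_representation(s), regular_representation(r)

        # The matrices respect the group law: rep(s) @ rep(r) == rep(s*r)
        assert np.array_equal(S @ R, regular_representation(compose(s, r)))
        print("permutation matrix of the 3-cycle:")
        print(R)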

  8. Neural plasticity and its initiating conditions in tinnitus.

    Science.gov (United States)

    Roberts, L E

    2018-03-01

    Deafferentation caused by cochlear pathology (which can be hidden from the audiogram) activates forms of neural plasticity in auditory pathways, generating tinnitus and its associated conditions including hyperacusis. This article discusses tinnitus mechanisms and suggests how these mechanisms may relate to those involved in normal auditory information processing. Research findings from animal models of tinnitus and from electromagnetic imaging of tinnitus patients are reviewed which pertain to the role of deafferentation and neural plasticity in tinnitus and hyperacusis. Auditory neurons compensate for deafferentation by increasing their input/output functions (gain) at multiple levels of the auditory system. Forms of homeostatic plasticity are believed to be responsible for this neural change, which increases the spontaneous and driven activity of neurons in central auditory structures in animals expressing behavioral evidence of tinnitus. Another tinnitus correlate, increased neural synchrony among the affected neurons, is forged by spike-timing-dependent neural plasticity in auditory pathways. Slow oscillations generated by bursting thalamic neurons verified in tinnitus animals appear to modulate neural plasticity in the cortex, integrating tinnitus neural activity with information in brain regions supporting memory, emotion, and consciousness which exhibit increased metabolic activity in tinnitus patients. The latter process may be induced by transient auditory events in normal processing but it persists in tinnitus, driven by phantom signals from the auditory pathway. Several tinnitus therapies attempt to suppress tinnitus through plasticity, but repeated sessions will likely be needed to prevent tinnitus activity from returning owing to deafferentation as its initiating condition.

  9. Spiking Neural P Systems with Communication on Request.

    Science.gov (United States)

    Pan, Linqiang; Păun, Gheorghe; Zhang, Gexiang; Neri, Ferrante

    2017-12-01

    Spiking Neural P Systems are neural system models characterized by the fact that each neuron mimics a biological cell and the communication between neurons is based on spikes. In the Spiking Neural P systems investigated so far, the application of evolution rules depends on the contents of a neuron (checked by means of a regular expression). In these P systems, a specified number of spikes are consumed and a specified number of spikes are produced, and then sent to each of the neurons linked by a synapse to the evolving neuron. In the present work, a novel communication strategy among neurons of Spiking Neural P Systems is proposed. In the resulting models, called Spiking Neural P Systems with Communication on Request, the spikes are requested from neighboring neurons, depending on the contents of the neuron (still checked by means of a regular expression). Unlike the traditional Spiking Neural P systems, no spikes are consumed or created: the spikes are only moved along synapses and replicated (when two or more neurons request the contents of the same neuron). The Spiking Neural P Systems with Communication on Request are proved to be computationally universal, that is, equivalent to Turing machines as long as two types of spikes are used. Following this work, further research questions are listed as open problems.

  10. Neural Network to Solve Concave Games

    OpenAIRE

    Liu, Zixin; Wang, Nengfa

    2014-01-01

    The issue of using a neural network method to solve concave games is considered. By combining variational inequalities, the Ky Fan inequality, and the projection equation, concave games are transformed into a neural network model. On the basis of Lyapunov stability theory, some stability results are also given. Finally, simulation results for two classic games are given to illustrate the theoretical results.

  11. An introduction to neural network methods for differential equations

    CERN Document Server

    Yadav, Neha; Kumar, Manoj

    2015-01-01

    This book introduces a variety of neural network methods for solving differential equations arising in science and engineering. The emphasis is placed on a deep understanding of the neural network techniques, which are presented in a mostly heuristic and intuitive manner. This approach will enable the reader to understand the working, efficiency and shortcomings of each neural network technique for solving differential equations. The objective of this book is to provide the reader with a sound understanding of the foundations of neural networks, and a comprehensive introduction to neural network methods for solving differential equations together with recent developments in the techniques and their applications. The book comprises four major sections. Section I consists of a brief overview of differential equations and the relevant physical problems arising in science and engineering. Section II illustrates the history of neural networks starting from their beginnings in the 1940s through to the renewed...

  12. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.

  13. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for?

  14. Face recognition based on improved BP neural network

    Directory of Open Access Journals (Sweden)

    Yue Gaili

    2017-01-01

    In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract the features of the histogram-equalized image and to extract the principal components of the image. The BP neural network is then trained using the training samples. An improved BP weight-adjustment method is used to train the network because the conventional BP algorithm suffers from slow convergence and easily falls into local minima during training. Finally, the trained BP neural network is used to classify and identify the test face images, and the recognition rate is obtained. Simulation experiments on the ORL face database show that the improved BP neural network face recognition method can effectively improve the recognition rate of face recognition.
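
    The pipeline in the abstract (histogram equalization, then PCA features, then a trained classifier) can be sketched as below. To stay self-contained, the scikit-learn digits images are used as a stand-in for the ORL face database, and scikit-learn's MLP plays the role of the BP network; the improved weight-adjustment rule itself is not reproduced.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        def equalize(img):
            # Simple histogram equalization: map each pixel through the empirical CDF
            flat = img.ravel()
            sorted_vals = np.sort(flat)
            cdf = np.searchsorted(sorted_vals, flat, side="right") / flat.size
            return cdf.reshape(img.shape)

        # Digits images as a stand-in for the ORL face database
        digits = load_digits()
        images = np.array([equalize(im) for im in digits.images])
        X = images.reshape(len(images), -1)
        y = digits.target

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        # PCA feature extraction followed by an MLP ("BP network") classifier
        pca = PCA(n_components=20).fit(X_tr)
        clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0)
        clf.fit(pca.transform(X_tr), y_tr)
        print("recognition rate:", round(clf.score(pca.transform(X_te), y_te), 3))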

  15. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data ...

  16. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adaptation of artificial neural networks to robust fault diagnosis schemes. It presents neural network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. A part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, as well as the importance of robustness. The book has a tutorial value and can be perceived as a good starting point for newcomers to this field. The book is also devoted to advanced schemes of description of neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  17. The response of early neural genes to FGF signaling or inhibition of BMP indicate the absence of a conserved neural induction module

    Directory of Open Access Journals (Sweden)

    Rogers Crystal D

    2011-12-01

    Background: The molecular mechanism that initiates the formation of the vertebrate central nervous system has long been debated. Studies in Xenopus and mouse demonstrate that inhibition of BMP signaling is sufficient to induce neural tissue in explants or ES cells respectively, whereas studies in chick argue that instructive FGF signaling is also required for the expression of neural genes. Although additional signals may be involved in neural induction and patterning, here we focus on the roles of BMP inhibition and FGF8a. Results: To address the question of necessity and sufficiency of BMP inhibition and FGF signaling, we compared the temporal expression of the five earliest genes expressed in the neuroectoderm and determined their requirements for induction at the onset of neural plate formation in Xenopus. Our results demonstrate that the onset and peak of expression of the genes vary and that they have different regulatory requirements and are therefore unlikely to share a conserved neural induction regulatory module. Even though all require inhibition of BMP for expression, some also require FGF signaling; expression of the early-onset pan-neural genes sox2 and foxd5α requires FGF signaling, while other early genes, sox3, geminin and zicr1, are induced by BMP inhibition alone. Conclusions: We demonstrate that BMP inhibition and FGF signaling induce neural genes independently of each other. Together our data indicate that although the spatiotemporal expression patterns of early neural genes are similar, the mechanisms involved in their expression are distinct and there are different signaling requirements for the expression of each gene.

  18. The neural subjective frame: from bodily signals to perceptual consciousness.

    Science.gov (United States)

    Park, Hyeong-Dong; Tallon-Baudry, Catherine

    2014-05-05

    The report 'I saw the stimulus' operationally defines visual consciousness, but where does the 'I' come from? To account for the subjective dimension of perceptual experience, we introduce the concept of the neural subjective frame. The neural subjective frame would be based on the constantly updated neural maps of the internal state of the body and constitute a neural referential from which first-person experience can be created. We propose to root the neural subjective frame in the neural representation of visceral information which is transmitted through multiple anatomical pathways to a number of target sites, including the posterior insula, ventral anterior cingulate cortex, amygdala and somatosensory cortex. We review existing experimental evidence showing that the processing of external stimuli can interact with visceral function. The neural subjective frame is a low-level building block of subjective experience which is not explicitly experienced by itself and which is necessary but not sufficient for perceptual experience. It could also underlie other types of subjective experiences such as self-consciousness and emotional feelings. Because the neural subjective frame is tightly linked to homeostatic regulations involved in vigilance, it could also make a link between state and content consciousness.

  19. Optimal multiple-information integration inherent in a ring neural network

    International Nuclear Information System (INIS)

    Takiyama, Ken

    2017-01-01

    Although several behavioral experiments have suggested that our neural system integrates multiple sources of information based on the certainty of each type of information in the manner of maximum-likelihood estimation, it is unclear how the maximum-likelihood estimation is implemented in our neural system. Here, I investigate the relationship between maximum-likelihood estimation and a widely used ring-type neural network model that is used as a model of visual, motor, or prefrontal cortices. Without any approximation or ansatz, I analytically demonstrate that the equilibrium of an order parameter in the neural network model exactly corresponds to the maximum-likelihood estimation when the strength of the symmetrical recurrent synaptic connectivity within a neural population is appropriately stronger than that of asymmetrical connectivity, that of local and external inputs, and that of symmetrical or asymmetrical connectivity between different neural populations. In this case, strengths of local and external inputs or those of symmetrical connectivity between different neural populations exactly correspond to the input certainty in maximum-likelihood estimation. Thus, my analysis suggests appropriately strong symmetrical recurrent connectivity as a possible candidate for implementing the maximum-likelihood estimation within our neural system. (paper)
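
    As background to the maximum-likelihood integration referred to here: for independent Gaussian cues it reduces to an inverse-variance weighted average. The short check below compares that closed form with a brute-force maximization of the joint likelihood; the cue values and uncertainties are arbitrary and unrelated to the paper's network model.

        import numpy as np

        # Two independent Gaussian cues about the same quantity (arbitrary numbers)
        mu = np.array([10.0, 12.0])      # cue estimates
        sigma = np.array([1.0, 2.0])     # cue uncertainties (standard deviations)

        # Closed-form maximum-likelihood estimate: inverse-variance weighted mean
        w = 1.0 / sigma**2
        x_ml = np.sum(w * mu) / np.sum(w)
        var_ml = 1.0 / np.sum(w)

        # Brute-force check: maximize the joint log-likelihood on a grid
        grid = np.linspace(5, 15, 100001)
        loglik = -0.5 * np.sum(((grid[:, None] - mu) / sigma) ** 2, axis=1)
        x_grid = grid[np.argmax(loglik)]

        print(f"closed form: {x_ml:.4f} (var {var_ml:.4f}), grid search: {x_grid:.4f}")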

  20. Photosensitive-polyimide based method for fabricating various neural electrode architectures

    Directory of Open Access Journals (Sweden)

    Yasuhiro X Kato

    2012-06-01

    An extensive photosensitive polyimide (PSPI)-based method for designing and fabricating various neural electrode architectures was developed. The method aims to broaden the design flexibility and expand the fabrication capability for neural electrodes to improve the quality of recorded signals and integrate other functions. After characterizing PSPI's properties for micromachining processes, we successfully designed and fabricated various neural electrodes even on a non-flat substrate, using only one PSPI as an insulation material and without time-consuming dry etching processes. The fabricated neural electrodes were an electrocorticogram electrode, a mesh intracortical electrode with a unique lattice-like mesh structure to fixate neural tissue, and a guide cannula electrode with recording microelectrodes placed on the curved surface of a guide cannula as a microdialysis probe. In vivo neural recordings using anesthetized rats demonstrated that these electrodes can be used to record neural activities repeatedly without any breakage or mechanical failures, which potentially promises stable recordings for long periods of time. These successes make us believe that this PSPI-based fabrication is a powerful method, permitting flexible design and easy optimization of electrode architectures for a variety of electrophysiological experimental research with improved neural recording performance.

  1. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.

  2. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum, anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples for improving the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although training typically has a large computational cost, simple classifiers work well when subsequently using neural network generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a directional reflectance spectrum based neural network to help us understand the data from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.

  3. AKT signaling displays multifaceted functions in neural crest development.

    Science.gov (United States)

    Sittewelle, Méghane; Monsoro-Burq, Anne H

    2018-05-31

    AKT signaling is an essential intracellular pathway controlling cell homeostasis, cell proliferation and survival, as well as cell migration and differentiation in adults. Alterations impacting the AKT pathway are involved in many pathological conditions in human disease. Similarly, during development, multiple transmembrane molecules, such as FGF receptors, PDGF receptors or integrins, activate AKT to control embryonic cell proliferation, migration, differentiation, and also cell fate decisions. While many studies in mouse embryos have clearly implicated AKT signaling in the differentiation of several neural crest derivatives, information on AKT functions during the earliest steps of neural crest development had remained relatively scarce until recently. However, recent studies on known and novel regulators of AKT signaling demonstrate that this pathway plays critical roles throughout the development of neural crest progenitors. Non-mammalian models such as fish and frog embryos have been instrumental to our understanding of AKT functions in neural crest development, both in neural crest progenitors and in the neighboring tissues. This review combines current knowledge acquired from all these different vertebrate animal models to describe the various roles of AKT signaling related to neural crest development in vivo. We first describe the importance of AKT signaling in patterning the tissues involved in neural crest induction, namely the dorsal mesoderm and the ectoderm. We then focus on AKT signaling functions in neural crest migration and differentiation. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Given the well-known disconnection theory, geriatric depression could be a useful model for studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  5. Pulsed neural networks consisting of single-flux-quantum spiking neurons

    International Nuclear Information System (INIS)

    Hirose, T.; Asai, T.; Amemiya, Y.

    2007-01-01

    An inhibitory pulsed neural network was developed for brain-like information processing, by using single-flux-quantum (SFQ) circuits. It consists of spiking neuron devices that are coupled to each other through all-to-all inhibitory connections. The network selects neural activity. The operation of the neural network was confirmed by computer simulation. SFQ neuron devices can imitate the operation of the inhibition phenomenon of neural networks

  6. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  7. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  8. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks. Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concept of meta-cognition combined with self-regulated learning is known to be one of the best human learning strategies. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  9. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    OpenAIRE

    Jerzy Balicki; Piotr Dryja; Waldemar Korłub; Piotr Przybyłek; Maciej Tyszka; Marcin Zadroga; Marcin Zakidalski

    2016-01-01

    Artificial neural networks can be used to predict share investments on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of cooperation of neural network algorithms with evolutionary methods and support vector machines. In addition, a reference is made to other methods of artificial intelligence which are used in financial prediction.

  10. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2016-06-01

    Full Text Available Artificial neural networks can be used to predict share investments on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of cooperation of neural network algorithms with evolutionary methods and support vector machines. In addition, a reference is made to other methods of artificial intelligence which are used in financial prediction.

  11. Metabolic neural mapping in neonatal rats

    International Nuclear Information System (INIS)

    DiRocco, R.J.; Hall, W.G.

    1981-01-01

    Functional neural mapping by 14C-deoxyglucose autoradiography in adult rats has shown that increases in neural metabolic rate that are coupled to increased neurophysiological activity are more evident in axon terminals and dendrites than in neuron cell bodies. Regions containing architectonically well-defined concentrations of terminals and dendrites (neuropil) have high metabolic rates when the neuropil is physiologically active. In neonatal rats, however, we find that regions containing well-defined groupings of neuron cell bodies have high metabolic rates in 14C-deoxyglucose autoradiograms. The striking difference between the morphological appearance of 14C-deoxyglucose autoradiograms obtained from neonatal and adult rats is probably related to developmental changes in morphometric features of differentiating neurons, as well as associated changes in the type and locus of neural work performed

  12. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.......The problem of parameter identification by Bayes point estimation using neural networks is investigated....

  13. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way human neurons process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are perhaps the most important factor influencing neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights to achieve better neural performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing their performance with that of other classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we examine them on the classification of four opposite binary XOR clusters and on the classification of continuous real data sets such as Iris and Ecoli.
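
    To make the idea of evolutionary weight optimization concrete, here is a minimal sketch (not the paper's algorithms) that evolves the weights of a tiny 2-4-1 feed-forward network to classify the XOR problem mentioned above. Population size, mutation scale and architecture are arbitrary illustrative choices.

      import numpy as np

      rng = np.random.default_rng(0)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0, 1, 1, 0], dtype=float)            # XOR targets

      def forward(w, X):
          # Unpack a flat vector of 17 parameters into a 2-4-1 network.
          W1, b1 = w[:8].reshape(2, 4), w[8:12]
          W2, b2 = w[12:16].reshape(4, 1), w[16]
          h = np.tanh(X @ W1 + b1)
          z = (h @ W2).ravel() + b2
          return 1.0 / (1.0 + np.exp(-z))                # sigmoid output

      def loss(w):
          return np.mean((forward(w, X) - y) ** 2)

      # Simple elitist evolution strategy: keep the 10 fittest, mutate to refill.
      pop = rng.normal(0.0, 1.0, size=(50, 17))
      for generation in range(300):
          fitness = np.array([loss(ind) for ind in pop])
          parents = pop[np.argsort(fitness)[:10]]
          children = parents[rng.integers(0, 10, 40)] + rng.normal(0.0, 0.2, (40, 17))
          pop = np.vstack([parents, children])

      best = min(pop, key=loss)
      print(np.round(forward(best, X)))                  # ideally [0. 1. 1. 0.]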

  14. Using Brain Stimulation to Disentangle Neural Correlates of Conscious Vision

    Directory of Open Access Journals (Sweden)

    Tom Alexander de Graaf

    2014-09-01

    Full Text Available Research into the neural correlates of consciousness (NCCs has blossomed, due to the advent of new and increasingly sophisticated brain research tools. Neuroimaging has uncovered a variety of brain processes that relate to conscious perception, obtained in a range of experimental paradigms. But methods such as fMRI or EEG do not always afford inference on the role these brain processes play in conscious vision. Such empirical neural correlates of consciousness could reflect neural prerequisites, neural consequences, or neural substrates of a conscious experience. Here, we take a closer look at the use of non-invasive brain stimulation (NIBS techniques in this context. We discuss and review how NIBS methodology can enlighten our understanding of brain mechanisms underlying conscious vision by disentangling the empirical neural correlates of consciousness.

  15. Pattern recognition of state variables by neural networks

    International Nuclear Information System (INIS)

    Faria, Eduardo Fernandes; Pereira, Claubia

    1996-01-01

    An artificial intelligence system based on artificial neural networks can be used to classify predefined events and emergency procedures. These systems are being used in different areas. In nuclear reactor safety, the goal is the classification of events whose data can be processed and recognized by neural networks. In this work we present a preliminary, simple system that uses neural networks for pattern recognition: the recognition of the variables which define a situation. (author)

  16. Classification of behavior using unsupervised temporal neural networks

    International Nuclear Information System (INIS)

    Adair, K.L.

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering sequential patterns as the sequences are generated. The applicability of the architecture is illustrated through a facility monitoring problem

  17. Understanding the Implications of Neural Population Activity on Behavior

    Science.gov (United States)

    Briguglio, John

    Learning how neural activity in the brain leads to the behavior we exhibit is one of the fundamental questions in Neuroscience. In this dissertation, several lines of work are presented that use principles of neural coding to understand behavior. In one line of work, we formulate the efficient coding hypothesis in a non-traditional manner in order to test human perceptual sensitivity to complex visual textures. We find a striking agreement between how variable a particular texture signal is and how sensitive humans are to its presence. This reveals that the efficient coding hypothesis is still a guiding principle for neural organization beyond the sensory periphery, and that the nature of cortical constraints differs from the peripheral counterpart. In another line of work, we relate frequency discrimination acuity to neural responses from auditory cortex in mice. It has been previously observed that optogenetic manipulation of auditory cortex, in addition to changing neural responses, evokes changes in behavioral frequency discrimination. We are able to account for changes in frequency discrimination acuity on an individual basis by examining the Fisher information from the neural population with and without optogenetic manipulation. In the third line of work, we address the question of what a neural population should encode given that its inputs are responses from another group of neurons. Drawing inspiration from techniques in machine learning, we train Deep Belief Networks on fake retinal data and show the emergence of Gabor-like filters, reminiscent of responses in primary visual cortex. In the last line of work, we model the state of a cortical excitatory-inhibitory network during complex adaptive stimuli. Using a rate model with Wilson-Cowan dynamics, we demonstrate that simple non-linearities in the signal transferred from inhibitory to excitatory neurons can account for real neural recordings taken from auditory cortex. This work establishes and tests
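
    As a hedged illustration of the Fisher-information argument mentioned above, the sketch below computes population Fisher information for frequency discrimination under a deliberately simple model: independent Poisson neurons with Gaussian tuning curves. All parameter values are assumptions for illustration; the dissertation's actual population model and data are not reproduced here.

      import numpy as np

      # Simplified population model: 60 neurons with Gaussian tuning curves over
      # frequency and independent Poisson spike counts (illustrative assumptions).
      centers = np.linspace(4.0, 38.0, 60)          # preferred frequencies
      width, peak_rate, baseline = 3.0, 20.0, 0.5   # tuning width and firing rates

      def rates(f):
          return peak_rate * np.exp(-0.5 * ((f - centers) / width) ** 2) + baseline

      def fisher(f, df=1e-3):
          drdf = (rates(f + df) - rates(f)) / df
          # For independent Poisson neurons: I_F(f) = sum_i f_i'(f)^2 / f_i(f).
          return np.sum(drdf ** 2 / rates(f))

      freqs = np.linspace(2.0, 40.0, 500)
      info = np.array([fisher(f) for f in freqs])
      # The achievable discrimination threshold scales as 1 / sqrt(I_F(f)).
      print(freqs[np.argmax(info)], 1.0 / np.sqrt(info.max()))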

  18. Neural bases of congenital amusia in tonal language speakers.

    Science.gov (United States)

    Zhang, Caicai; Peng, Gang; Shao, Jing; Wang, William S-Y

    2017-03-01

    Congenital amusia is a lifelong neurodevelopmental disorder of fine-grained pitch processing. In this fMRI study, we examined the neural bases of congenital amusia in speakers of a tonal language - Cantonese. Previous studies on non-tonal language speakers suggest that the neural deficits of congenital amusia lie in the music-selective neural circuitry in the right inferior frontal gyrus (IFG). However, it is unclear whether this finding can generalize to congenital amusics in tonal languages. Tonal language experience has been reported to shape the neural processing of pitch, which raises the question of how tonal language experience affects the neural bases of congenital amusia. To investigate this question, we examined the neural circuitries sub-serving the processing of relative pitch interval in pitch-matched Cantonese level tone and musical stimuli in 11 Cantonese-speaking amusics and 11 musically intact controls. Cantonese-speaking amusics exhibited abnormal brain activities in a widely distributed neural network during the processing of lexical tone and musical stimuli. Whereas the controls exhibited significant activation in the right superior temporal gyrus (STG) in the lexical tone condition and in the cerebellum regardless of the lexical tone and music conditions, no activation was found in the amusics in those regions, which likely reflects a dysfunctional neural mechanism of relative pitch processing in the amusics. Furthermore, the amusics showed abnormally strong activation of the right middle frontal gyrus and precuneus when the pitch stimuli were repeated, which presumably reflects deficits of attending to repeated pitch stimuli or encoding them into working memory. No significant group difference was found in the right IFG in either the whole-brain analysis or the region-of-interest analysis. These findings imply that the neural deficits in tonal language speakers might differ from those in non-tonal language speakers, and overlap partly with the

  19. Neural Cell Chip Based Electrochemical Detection of Nanotoxicity.

    Science.gov (United States)

    Kafi, Md Abdul; Cho, Hyeon-Yeol; Choi, Jeong Woo

    2015-07-02

    Development of a rapid, sensitive and cost-effective method for toxicity assessment of commonly used nanoparticles is urgently needed for the sustainable development of nanotechnology. A neural cell with high sensitivity and conductivity has become a potential candidate for a cell chip to investigate toxicity of environmental influences. A neural cell immobilized on a conductive surface has become a potential tool for the assessment of nanotoxicity based on electrochemical methods. The effective electrochemical monitoring largely depends on the adequate attachment of a neural cell on the chip surfaces. Recently, establishment of the integrin receptor specific ligand molecule arginine-glycine-aspartic acid (RGD) or its several modifications, RGD-Multi Armed Peptide terminated with cysteine (RGD-MAP-C) and C(RGD)₄, ensures firm attachment of neural cells on the electrode surfaces in either their two dimensional (dot) or three dimensional (rod or pillar) like nano-scale arrangement. A three dimensional RGD modified electrode surface has been proven to be more suitable for cell adhesion, proliferation, differentiation as well as electrochemical measurement. This review discusses fabrication as well as electrochemical measurements of the neural cell chip with particular emphasis on their use for nanotoxicity assessments sequentially since inception to date. Successful monitoring of quantum dot (QD), graphene oxide (GO) and cosmetic compound toxicity using the newly developed neural cell chip is discussed here as a case study. This review recommends that a neural cell chip established on a nanostructured ligand modified conductive surface can be a potential tool for the toxicity assessments of newly developed nanomaterials prior to their use in biology or biomedical technologies.

  20. Neural Cell Chip Based Electrochemical Detection of Nanotoxicity

    Directory of Open Access Journals (Sweden)

    Md. Abdul Kafi

    2015-07-01

    Full Text Available Development of a rapid, sensitive and cost-effective method for toxicity assessment of commonly used nanoparticles is urgently needed for the sustainable development of nanotechnology. A neural cell with high sensitivity and conductivity has become a potential candidate for a cell chip to investigate toxicity of environmental influences. A neural cell immobilized on a conductive surface has become a potential tool for the assessment of nanotoxicity based on electrochemical methods. The effective electrochemical monitoring largely depends on the adequate attachment of a neural cell on the chip surfaces. Recently, establishment of the integrin receptor specific ligand molecule arginine-glycine-aspartic acid (RGD) or its several modifications, RGD-Multi Armed Peptide terminated with cysteine (RGD-MAP-C) and C(RGD)₄, ensures firm attachment of neural cells on the electrode surfaces in either their two dimensional (dot) or three dimensional (rod or pillar) like nano-scale arrangement. A three dimensional RGD modified electrode surface has been proven to be more suitable for cell adhesion, proliferation, differentiation as well as electrochemical measurement. This review discusses fabrication as well as electrochemical measurements of the neural cell chip with particular emphasis on their use for nanotoxicity assessments sequentially since inception to date. Successful monitoring of quantum dot (QD), graphene oxide (GO) and cosmetic compound toxicity using the newly developed neural cell chip is discussed here as a case study. This review recommends that a neural cell chip established on a nanostructured ligand modified conductive surface can be a potential tool for the toxicity assessments of newly developed nanomaterials prior to their use in biology or biomedical technologies.

  1. A Chip for an Implantable Neural Stimulator

    DEFF Research Database (Denmark)

    Gudnason, Gunnar; Bruun, Erik; Haugland, Morten

    2000-01-01

    This paper describes a chip for a multichannel neural stimulator for functional electrical stimulation (FES). The purpose of FES is to restore muscular control in disabled patients. The chip performs all the signal processing required in an implanted neural stimulator. The power and digital data...

  2. The neural network approach to parton fitting

    International Nuclear Information System (INIS)

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-01-01

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits

  3. Windowed active sampling for reliable neural learning

    NARCIS (Netherlands)

    Barakova, E.I; Spaanenburg, L

    The composition of the example set has a major impact on the quality of neural learning. The popular approach is focused on extensive pre-processing to bridge the representation gap between process measurement and neural presentation. In contrast, windowed active sampling attempts to solve these

  4. Neural networks in signal processing

    International Nuclear Information System (INIS)

    Govil, R.

    2000-01-01

    Nuclear Engineering has matured during the last decade. In research and design, control, supervision, maintenance and production, mathematical models and theories are used extensively. In all such applications signal processing is embedded in the process. Artificial Neural Networks (ANNs), because of their nonlinear, adaptive nature, are well suited to such applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques, which can model an underlying process from example data. They can also adapt their model parameters to statistical changes over time. Algorithms in the framework of neural networks in signal processing have found new application potential in the field of Nuclear Engineering. This paper reviews the fundamentals of neural networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigenstructure-based processing and generalization structures. (orig.)

  5. Principles of neural information processing

    CERN Document Server

    Seelen, Werner v

    2016-01-01

    In this fundamental book the authors devise a framework that describes the working of the brain as a whole. It presents a comprehensive introduction to the principles of Neural Information Processing as well as recent and authoritative research. The book's guiding principles are the main purpose of neural activity, namely, to organize behavior to ensure survival, as well as the understanding of the evolutionary genesis of the brain. Among the developed principles and strategies belong self-organization of neural systems, flexibility, the active interpretation of the world by means of construction and prediction as well as their embedding into the world, all of which form the framework of the presented description. Since, in brains, their partial self-organization, the lifelong adaptation and their use of various methods of processing incoming information are all interconnected, the authors have chosen not only neurobiology and evolution theory as a basis for the elaboration of such a framework, but also syst...

  6. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
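
    The general idea of estimating a conditional quantile with a neural network can be sketched with the standard pinball (check) loss, whose minimizer is the conditional quantile; note that this is a generic sketch and not the kernel-based construction developed in the paper. The synthetic data, architecture and quantile level are illustrative assumptions.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      # Synthetic heteroscedastic data: y = sin(x) + noise whose spread grows with x.
      x = torch.rand(512, 1) * 6.0
      y = torch.sin(x) + torch.randn_like(x) * 0.1 * x

      tau = 0.9                                    # quantile level to estimate
      net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-2)

      def pinball(pred, target, tau):
          # "Check" loss: its minimizer is the conditional tau-quantile.
          err = target - pred
          return torch.mean(torch.maximum(tau * err, (tau - 1.0) * err))

      for step in range(2000):
          opt.zero_grad()
          loss = pinball(net(x), y, tau)
          loss.backward()
          opt.step()

      print(pinball(net(x), y, tau).item())        # final training pinball loss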

  7. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, as the objective function for the training process of the neural network, we employed the residuals of the integral equation or of the differential equations. This is different from conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems both methods gave satisfactory results, and the methods are considered promising for some kinds of problems. (author)
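
    The residual-as-objective idea can be illustrated on a toy ordinary differential equation, u'(x) + u(x) = 0 with u(0) = 1, whose solution is exp(-x): the network is trained to minimize the squared equation residual plus a boundary-condition penalty instead of a supervised output error. The equation, architecture and optimizer settings below are illustrative assumptions, not those used in the paper.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      # Toy problem: learn u(x) with u'(x) + u(x) = 0 and u(0) = 1 (solution: exp(-x)).
      net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)
      x = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)

      for step in range(3000):
          opt.zero_grad()
          u = net(x)
          # Equation residual obtained by automatic differentiation.
          du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                   create_graph=True)[0]
          residual = du + u
          # Objective = squared residual + boundary-condition error, not a
          # supervised error on known output values.
          bc = (net(torch.zeros(1, 1)) - 1.0) ** 2
          loss = (residual ** 2).mean() + bc.mean()
          loss.backward()
          opt.step()

      print(net(torch.tensor([[1.0]])).item())     # should approach exp(-1) ~ 0.368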

  8. Multistability in bidirectional associative memory neural networks

    International Nuclear Information System (INIS)

    Huang Gan; Cao Jinde

    2008-01-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of these equilibria are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results

  9. Multistability in bidirectional associative memory neural networks

    Science.gov (United States)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of these equilibria are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  10. A neural network approach to burst detection.

    Science.gov (United States)

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the issue of detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.
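
    A minimal sketch of the time-series classification idea follows: sliding windows of a simulated flow signal are fed to a small feed-forward network that labels each window as burst or no-burst. The synthetic signal, window length and network size are assumptions for illustration; the paper's sensor data and ANN architecture are not reproduced.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)

      # Synthetic hourly flow signal: daily demand pattern plus noise, with a
      # simulated burst (sustained extra flow) injected in the second half.
      t = np.arange(24 * 28)                          # four weeks of hourly samples
      flow = 10 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)
      burst = t >= t.size // 2
      flow[burst] += 2.0                              # the leak adds a constant offset

      # Sliding 24-hour windows as inputs; label = whether a burst is present.
      window = 24
      X = np.stack([flow[i:i + window] for i in range(t.size - window)])
      y = burst[window:].astype(int)

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X[::2], y[::2])                         # train on alternate windows
      print(clf.score(X[1::2], y[1::2]))              # accuracy on held-out windows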

  11. Aneurisma de aorta abdominal y fistula aorto-cava

    Directory of Open Access Journals (Sweden)

    Juan Jose Gonzalez Soler

    2012-03-01

    Full Text Available A 67-year-old patient presented to the emergency department with intense dorsolumbar pain of 4 hours' evolution. The patient had severe sensorineural hearing loss, which made history-taking difficult. Cardiovascular risk factors were unknown, as there was no record of regular medical follow-up.

  12. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    Science.gov (United States)

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization processes, and communication time delays is investigated. The information communication channel between the master chaotic neural network and the slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of the output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy-saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, the allowable length of sampling intervals, and the upper bound of network-induced delays are derived to ensure the quantized synchronization of the master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  13. Large-scale multielectrode recording and stimulation of neural activity

    International Nuclear Information System (INIS)

    Sher, A.; Chichilnisky, E.J.; Dabrowski, W.; Grillo, A.A.; Grivich, M.; Gunning, D.; Hottowy, P.; Kachiguine, S.; Litke, A.M.; Mathieson, K.; Petrusca, D.

    2007-01-01

    Large circuits of neurons are employed by the brain to encode and process information. How this encoding and processing is carried out is one of the central questions in neuroscience. Since individual neurons communicate with each other through electrical signals (action potentials), the recording of neural activity with arrays of extracellular electrodes is uniquely suited for the investigation of this question. Such recordings provide the combination of the best spatial (individual neurons) and temporal (individual action-potentials) resolutions compared to other large-scale imaging methods. Electrical stimulation of neural activity in turn has two very important applications: it enhances our understanding of neural circuits by allowing active interactions with them, and it is a basis for a large variety of neural prosthetic devices. Until recently, the state-of-the-art in neural activity recording systems consisted of several dozen electrodes with inter-electrode spacing ranging from tens to hundreds of microns. Using silicon microstrip detector expertise acquired in the field of high-energy physics, we created a unique neural activity readout and stimulation framework that consists of high-density electrode arrays, multi-channel custom-designed integrated circuits, a data acquisition system, and data-processing software. Using this framework we developed a number of neural readout and stimulation systems: (1) a 512-electrode system for recording the simultaneous activity of as many as hundreds of neurons, (2) a 61-electrode system for electrical stimulation and readout of neural activity in retinas and brain-tissue slices, and (3) a system with telemetry capabilities for recording neural activity in the intact brain of awake, naturally behaving animals. We will report on these systems, their various applications to the field of neurobiology, and novel scientific results obtained with some of them. We will also outline future directions

  14. Burst firing enhances neural output correlation

    Directory of Open Access Journals (Sweden)

    Ho Ka Chan

    2016-05-01

    Full Text Available Neurons communicate and transmit information predominantly through spikes. Given that experimentally observed neural spike trains in a variety of brain areas can be highly correlated, it is important to investigate how neurons process correlated inputs. Most previous work in this area studied the problem of correlation transfer analytically by making significant simplifications on neural dynamics. Temporal correlation between inputs that arises from synaptic filtering, for instance, is often ignored when assuming that an input spike can at most generate one output spike. Through numerical simulations of a pair of leaky integrate-and-fire (LIF neurons receiving correlated inputs, we demonstrate that neurons in the presence of synaptic filtering by slow synapses exhibit strong output correlations. We then show that burst firing plays a central role in enhancing output correlations, which can explain the above-mentioned observation because synaptic filtering induces bursting. The observed changes of correlations are mostly on a long time scale. Our results suggest that other features affecting the prevalence of neural burst firing in biological neurons, e.g., adaptive spiking mechanisms, may play an important role in modulating the overall level of correlations in neural networks.
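
    A stripped-down version of this kind of simulation is sketched below: two leaky integrate-and-fire neurons receive partially shared Gaussian white-noise drive, and the output correlation is measured on spike counts. For simplicity the synaptic filtering emphasized in the abstract is omitted, and all parameter values are illustrative assumptions rather than those used in the study.

      import numpy as np

      rng = np.random.default_rng(0)
      dt, n_steps = 1e-4, 200_000               # 0.1 ms steps, 20 s of simulation
      tau, v_th, v_reset = 0.02, 1.0, 0.0       # membrane constant, threshold, reset
      mu, sigma, c = 0.8, 0.6, 0.5              # drive mean, noise strength, input correlation

      v = np.zeros(2)
      spikes = np.zeros((2, n_steps), dtype=bool)
      for i in range(n_steps):
          shared = rng.normal()
          for k in range(2):
              noise = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.normal()
              # Euler-Maruyama step of the leaky integrate-and-fire dynamics.
              v[k] += dt / tau * (mu - v[k]) + sigma * np.sqrt(dt / tau) * noise
              if v[k] >= v_th:
                  v[k] = v_reset
                  spikes[k, i] = True

      # Output correlation: spike counts of the two neurons in 100 ms windows.
      bin_steps = 1000                          # 100 ms at dt = 0.1 ms
      counts = spikes.reshape(2, -1, bin_steps).sum(axis=2)
      print(np.corrcoef(counts)[0, 1])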

  15. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating. Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN. We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours, the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  16. Therapeutic physical exercise in neural injury: friend or foe?

    Science.gov (United States)

    Park, Kanghui; Lee, Seunghoon; Hong, Yunkyung; Park, Sookyoung; Choi, Jeonghyun; Chang, Kyu-Tae; Kim, Joo-Heon; Hong, Yonggeun

    2015-12-01

    [Purpose] The intensity of therapeutic physical exercise is complex and sometimes controversial in patients with neural injuries. This review assessed whether therapeutic physical exercise is beneficial according to the intensity of the physical exercise. [Methods] The authors identified clinically or scientifically relevant articles from PubMed that met the inclusion criteria. [Results] Exercise training can improve body strength and lead to the physiological adaptation of skeletal muscles and the nervous system after neural injuries. Furthermore, neurophysiological and neuropathological studies show differences in the beneficial effects of forced therapeutic exercise in patients with severe or mild neural injuries. Forced exercise alters the distribution of muscle fiber types in patients with neural injuries. Based on several animal studies, forced exercise may promote functional recovery following cerebral ischemia via signaling molecules in ischemic brain regions. [Conclusions] This review describes several types of therapeutic forced exercise and the controversy regarding the therapeutic effects in experimental animals versus humans with neural injuries. This review also provides a therapeutic strategy for physical therapists that grades the intensity of forced exercise according to the level of neural injury.

  17. Periodicity and stability for variable-time impulsive neural networks.

    Science.gov (United States)

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

    The paper considers a general neural network model with variable-time impulses. It is shown that each solution of the system intersects with every discontinuous surface exactly once under several newly proposed assumptions. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to the corresponding neural networks with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solutions of variable-time impulsive neural networks, and to illustrate the same stability properties between variable-time impulsive neural networks and the fixed-time ones. The new criteria are established by applying Schaefer's fixed point theorem combined with the use of inequality techniques. Finally, a numerical example is presented to show the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Region stability analysis and tracking control of memristive recurrent neural network.

    Science.gov (United States)

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Firstly, simulations show that the memristive recurrent neural network has richer dynamics than the traditional recurrent neural network. Then it is derived that the n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. At last, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Methods: Artificial neural network (ANN) models, including general regression neural network (GRNN) and multi-layer ...

  20. Neural Crossroads in the Hematopoietic Stem Cell Niche.

    Science.gov (United States)

    Agarwala, Sobhika; Tamplin, Owen J

    2018-05-29

    The hematopoietic stem cell (HSC) niche supports steady-state hematopoiesis and responds to changing needs during stress and disease. The nervous system is an important regulator of the niche, and its influence is established early in development when stem cells are specified. Most research has focused on direct innervation of the niche, however recent findings show there are different modes of neural control, including globally by the central nervous system (CNS) and hormone release, locally by neural crest-derived mesenchymal stem cells, and intrinsically by hematopoietic cells that express neural receptors and neurotransmitters. Dysregulation between neural and hematopoietic systems can contribute to disease, however new therapeutic opportunities may be found among neuroregulator drugs repurposed to support hematopoiesis. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. A neural network model for credit risk evaluation.

    Science.gov (United States)

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back-propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval datasets. A comparison of the system's performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.
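
    A hedged sketch of such a back-propagation-trained credit classifier is shown below, using synthetic stand-in data because the Australian credit approval data is not reproduced here; the features, labels and network size are assumptions for illustration only.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Synthetic stand-in for a credit dataset: numeric applicant features and an
      # approve/reject label loosely tied to a few of them.
      n = 1000
      X = rng.normal(size=(n, 6))
      score = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
      y = (score + rng.normal(0, 1, n) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # One hidden layer, trained with back-propagation via stochastic gradient descent.
      clf = MLPClassifier(hidden_layer_sizes=(10,), solver='sgd',
                          learning_rate_init=0.01, max_iter=3000, random_state=0)
      clf.fit(X_tr, y_tr)
      print("approve/reject accuracy:", clf.score(X_te, y_te))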

  2. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  3. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    secure and dependable protection for power transformers. Owing to its superior learning and generalization capabilities, Artificial Neural Network (ANN) can considerably enhance the scope of the WI method. The ANN approach is faster, more robust and easier to implement than the conventional waveform approach. The use of neural ...

  4. Deciphering the Cognitive and Neural Mechanisms Underlying ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Deciphering the Cognitive and Neural Mechanisms Underlying Auditory Learning. This project seeks to understand the brain mechanisms necessary for people to learn to perceive sounds. Neural circuits and learning. The research team will test people with and without musical training to evaluate their capacity to learn ...

  5. Weaving and neural complexity in symmetric quantum states

    Science.gov (United States)

    Susa, Cristian E.; Girolami, Davide

    2018-04-01

    We study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.

  6. Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language

    Science.gov (United States)

    Tanadi, Theo

    2018-03-01

    Part-of-speech tagging (POS tagging) is an important part of natural language processing. Many methods have been used to do this task, including neural networks. This paper models a neural network that attempts to do POS tagging. A time series neural network is modelled to solve the problems that a basic neural network faces when attempting to do POS tagging. In order to enable the neural network to take text data as input, the text data is first clustered using Brown Clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tag, suffix, and affix of previous words are also fed to the neural network.
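
    A toy, hedged sketch of the windowed set-up follows: each word is represented only by a hypothetical Brown-cluster id (real Brown clustering, suffix and affix features are not implemented here), and the network sees the current word plus a short history of previous words, mimicking the time-series input described above.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # Hypothetical toy corpus: one Brown-cluster id and one POS-tag id per word.
      cluster_ids = np.array([0, 3, 1, 0, 3, 2, 0, 3, 1, 2, 0, 3, 2, 1, 0, 3, 1, 2])
      pos_tags    = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 2, 0, 1, 2, 2, 0, 1, 2, 2])

      window = 2   # number of previous words whose features accompany the current word
      rows = np.stack([cluster_ids[i - window:i + 1]
                       for i in range(window, len(cluster_ids))])
      X = np.eye(4)[rows].reshape(len(rows), -1)   # one-hot encode each cluster id
      y = pos_tags[window:]

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
      clf.fit(X, y)
      print(clf.score(X, y))   # training accuracy on the toy corpus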

  7. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  8. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  9. Bio-inspired spiking neural network for nonlinear systems control.

    Science.gov (United States)

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNNs) are the third generation of artificial neural networks. SNNs are the closest approximation to biological neural networks. SNNs make use of temporal spike trains to encode inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is not satisfactory or is difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. SNNs' inherent binary and temporal way of information codification facilitates their hardware implementation compared to analog neurons. Biological neural networks often require a lower number of neurons compared to other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to perform the control of nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNN exhibits superior performance compared to other approaches based on Neural Networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  11. An Overview of Bayesian Methods for Neural Spike Train Analysis

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2013-01-01

    Full Text Available Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed.

  12. A review of organic and inorganic biomaterials for neural interfaces.

    Science.gov (United States)

    Fattahi, Pouria; Yang, Guang; Kim, Gloria; Abidian, Mohammad Reza

    2014-03-26

    Recent advances in nanotechnology have generated wide interest in applying nanomaterials for neural prostheses. An ideal neural interface should create seamless integration into the nervous system and perform reliably for long periods of time. As a result, many nanoscale materials not originally developed for neural interfaces have become attractive candidates to detect neural signals and stimulate neurons. In this comprehensive review, an overview of state-of-the-art microelectrode technologies is provided first, with a focus on the material properties of these microdevices. The advancements in electroactive nanomaterials are then reviewed, including conducting polymers, carbon nanotubes, graphene, silicon nanowires, and hybrid organic-inorganic nanomaterials, for neural recording, stimulation, and growth. Finally, technical and scientific challenges are discussed regarding biocompatibility, mechanical mismatch, and electrical properties faced by these nanomaterials for the development of long-lasting functional neural interfaces.

  13. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

    In experimental particle physics, pattern recognition problems, specifically for neural network methods, occur frequently in track finding or feature extraction. Track finding is a combinatorial optimization problem. Given a set of points in Euclidean space, one tries the reconstruction of particle trajectories, subject to smoothness constraints. The basic ingredients in a neural network are the N binary neurons and the synaptic strengths connecting them. In our case the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates for each neuron the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments involved in the track reconstruction. An algorithm based on a Hopfield neural network has been developed and tested on the track coordinates measured by a silicon microstrip tracking system

  14. δ-Protocadherins: Organizers of neural circuit assembly.

    Science.gov (United States)

    Light, Sarah E W; Jontes, James D

    2017-09-01

    The δ-protocadherins comprise a small family of homophilic cell adhesion molecules within the larger cadherin superfamily. They are essential for neural development as mutations in these molecules give rise to human neurodevelopmental disorders, such as schizophrenia and epilepsy, and result in behavioral defects in animal models. Despite their importance to neural development, a detailed understanding of their mechanisms and the ways in which their loss leads to changes in neural function is lacking. However, recent results have begun to reveal roles for the δ-protocadherins in both regulation of neurogenesis and lineage-dependent circuit assembly, as well as in contact-dependent motility and selective axon fasciculation. These evolutionarily conserved mechanisms could have a profound impact on the robust assembly of the vertebrate nervous system. Future work should be focused on unraveling the molecular mechanisms of the δ-protocadherins and understanding how this family functions broadly to regulate neural development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Rodent Zic Genes in Neural Network Wiring.

    Science.gov (United States)

    Herrera, Eloísa

    2018-01-01

    The formation of the nervous system is a multistep process that yields a mature brain. Failure in any of the steps of this process may cause brain malfunction. In the early stages of embryonic development, neural progenitors quickly proliferate and then, at a specific moment, differentiate into neurons or glia. Once they become postmitotic neurons, they migrate to their final destinations and begin to extend their axons to connect with other neurons, sometimes located in quite distant regions, to establish different neural circuits. During the last decade, it has become evident that Zic genes, in addition to playing important roles in early development (e.g., gastrulation and neural tube closure), are involved in different processes of late brain development, such as neuronal migration, axon guidance, and refinement of axon terminals. ZIC proteins are therefore essential for the proper wiring and connectivity of the brain. In this chapter, we review our current knowledge of the role of Zic genes in the late stages of neural circuit formation.

  16. Motor activation SPECT for the neurosurgical diseases. Examination protocol and basic study

    Energy Technology Data Exchange (ETDEWEB)

    Noguchi, Hiroshi; Kawaguchi, Shoichiro; Sakaki, Toshisuke; Imai, Teruhiko; Ohishi, Hajime [Nara Medical Univ., Kashihara (Japan)

    1999-07-01

    We examined and analyzed the region activated by the unilateral finger opposition task using motor activation single photon emission computed tomography (M-SPECT). M-SPECT studies were carried out on 11 cases, all of whom were normal volunteers (mean age: 49.4 years), none of whom showed any abnormal findings on magnetic resonance images (MRIs) or any neurological abnormalities. The SPECT images for each case were superimposed on the MRIs using Image Fusion Software. The result of the M-SPECT study was expressed as positive or negative. The cases with a marked increase of blood flow in the sensori-motor cortex during the finger opposition task were categorized as positive, and those cases showing no marked increase of blood flow were categorized as negative. Among 11 patients, 10 cases (90.9%) showed positive M-SPECT findings, and the eleventh case showed negative M-SPECT findings. The asymmetry index (AI) was calculated on the sensorio-motor cortex in the SPECT images before and after motor activation, with the 10 cases with positive M-SPECT having an AI before motor activation of 0.99{+-}0.06 (mean{+-}standard deviation) and an AI after motor activation of 1.14{+-}0.07. This change was statistically significant (p<0.05). In the single case categorized as negative, the AI before motor activation was 1.04, and the AI after motor activation was 1.01. There was no significant difference of AI values between the resting and motor activation stages. The positive M-SPECT was seen in 90.9% of the normal volunteer series using a visual inspection method. In these cases, the blood flow in the sensorio-motor cortex significantly increased after application of the finger opposition task using the semi-quantitative method. (author)

  17. Artificial neural networks applied to forecasting time series.

    Science.gov (United States)

    Montaño Moreno, Juan J; Palmer Pol, Alfonso; Muñoz Gracia, Pilar

    2011-04-01

    This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved to be useful in time series forecasting, and also a standard procedure for the practical application of ANN in this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by the four neural network models analyzed is less than 10%. In accordance with the interpretation criteria of this performance, it can be concluded that the neural network models show a close fit regarding their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP. The GRNN model is the one with the worst performance. Finally, we analyze the advantages and limitations of ANN, the possible solutions to these limitations, and provide an orientation towards future research.
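
    Purely as an illustration of the evaluation set-up described here (not of the four ANN architectures themselves), the snippet below builds lagged input windows from a univariate series and measures one-step-ahead forecasting error as a percentage; a linear least-squares autoregression stands in for whichever model is being compared, and the synthetic series is made up.

        import numpy as np

        def make_windows(series, n_lags):
            # Rows are sliding windows of past values, targets are the next point.
            X = np.stack([series[i:i + n_lags] for i in range(len(series) - n_lags)])
            y = series[n_lags:]
            return X, y

        rng = np.random.default_rng(1)
        t = np.arange(244)                                   # same series length as in the study
        series = 10 + np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(t.size)

        X, y = make_windows(series, n_lags=12)
        split = int(0.8 * len(y))                            # simple train/test split
        X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

        # Stand-in predictor: linear autoregression fitted by least squares.
        A = np.hstack([X_tr, np.ones((len(X_tr), 1))])
        coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
        pred = np.hstack([X_te, np.ones((len(X_te), 1))]) @ coef

        mape = 100 * np.mean(np.abs((y_te - pred) / y_te))   # mean absolute percentage error
        print(f"one-step-ahead MAPE: {mape:.2f}%")           # the study reports < 10% for all ANNs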

  18. Advanced Applications of Neural Networks and Artificial Intelligence: A Review

    OpenAIRE

    Koushal Kumar; Gour Sundar Mitra Thakur

    2012-01-01

    Artificial Neural Networks are a branch of Artificial Intelligence and have been accepted as a new computing technology in computer science fields. This paper reviews the field of Artificial Intelligence, focusing on recent applications which use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretation ability of data. Artificial Neural Networks is c...

  19. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.
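
    A generic way to probe this kind of robustness, sketched below under the assumption that the trained network is available as a plain prediction function, is to perturb its inputs with increasing noise and track how often its discrete decision changes. Nothing here is the actual ATLAS implementation; the classifier and data are toy stand-ins.

        import numpy as np

        def decision_stability(predict_fn, X, noise_scales, n_trials=20, seed=0):
            """Fraction of inputs whose predicted label is unchanged under input noise."""
            rng = np.random.default_rng(seed)
            baseline = predict_fn(X)
            stability = {}
            for scale in noise_scales:
                agree = []
                for _ in range(n_trials):
                    perturbed = X + rng.normal(0.0, scale, X.shape)
                    agree.append(np.mean(predict_fn(perturbed) == baseline))
                stability[scale] = float(np.mean(agree))
            return stability

        # Toy stand-in for a trained cluster-splitting network: a fixed linear classifier.
        w = np.array([0.8, -0.5, 0.3])
        predict = lambda X: (X @ w > 0).astype(int)

        X_demo = np.random.default_rng(1).normal(size=(500, 3))
        print(decision_stability(predict, X_demo, noise_scales=[0.01, 0.1, 0.5]))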

  20. The neural crest migrating into the 21st century

    Science.gov (United States)

    Bronner, Marianne E.; Simões-Costa, Marcos

    2016-01-01

    From the initial discovery of the neural crest over 150 years ago to the seminal studies of Le Douarin and colleagues in the latter part of the 20th century, understanding of the neural crest has moved from the descriptive to the experimental. Now, in the 21st century, neural crest research has migrated into the genomic age. Here we reflect upon the major advances in neural crest biology and the open questions that will continue to make research on this incredible vertebrate cell type an important subject in developmental biology for the century to come. PMID:26970616

  1. Neural Correlates of Intolerance of Uncertainty in Clinical Disorders.

    Science.gov (United States)

    Wever, Mirjam; Smeets, Paul; Sternheim, Lot

    2015-01-01

    Intolerance of uncertainty is a key contributor to anxiety-related disorders. Recent studies highlight its importance in other clinical disorders. The link between its clinical presentation and the underlying neural correlates remains unclear. This review summarizes the emerging literature on the neural correlates of intolerance of uncertainty. In conclusion, studies focusing on the neural correlates of this construct are sparse, and findings are inconsistent across disorders. Future research should identify neural correlates of intolerance of uncertainty in more detail. This may unravel the neurobiology of a wide variety of clinical disorders and pave the way for novel therapeutic targets.

  2. Template measurement for plutonium pit based on neural networks

    International Nuclear Information System (INIS)

    Zhang Changfan; Gong Jian; Liu Suping; Hu Guangchun; Xiang Yongchun

    2012-01-01

    Template measurement for a plutonium pit extracts characteristic data from the γ-ray spectrum and the neutron counts emitted by the plutonium. The characteristic data of the suspicious object are compared with the data of the declared plutonium pit to verify whether they are of the same type. In this paper, neural networks are adopted as the comparison algorithm for template measurement of plutonium pits. Two kinds of neural networks are created, i.e. BP and LVQ neural networks. They are applied to different aspects of the template measurement and identification. The BP neural network is used for classification of different types of plutonium pits, which is often needed for the management of nuclear materials. The LVQ neural network is used for comparison of an inspected object to the declared one, which is usually applied in the field of nuclear disarmament and verification. (authors)
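
    For the LVQ side of such a comparison, a minimal LVQ1 sketch is given below: prototype feature vectors (one per declared item type) are refined on labelled training data, and an inspected object is then assigned to the nearest prototype. The four-dimensional features and the data are hypothetical, not the actual template-measurement observables.

        import numpy as np

        def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
            rng = np.random.default_rng(seed)
            P = prototypes.copy()
            for _ in range(epochs):
                for i in rng.permutation(len(X)):
                    k = np.argmin(np.linalg.norm(P - X[i], axis=1))   # winning prototype
                    sign = 1.0 if proto_labels[k] == y[i] else -1.0    # attract or repel
                    P[k] += sign * lr * (X[i] - P[k])
            return P

        def classify(x, P, proto_labels):
            return proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))]

        rng = np.random.default_rng(2)
        # Hypothetical 4-dimensional feature vectors for two declared item types.
        X = np.vstack([rng.normal(0, 0.3, (50, 4)), rng.normal(1, 0.3, (50, 4))])
        y = np.array([0] * 50 + [1] * 50)
        P0 = np.vstack([X[y == 0].mean(0), X[y == 1].mean(0)])        # one prototype per type
        labels = np.array([0, 1])

        P = train_lvq1(X, y, P0, labels)
        inspected = rng.normal(1, 0.3, 4)            # features of an inspected (suspicious) object
        print("assigned type:", classify(inspected, P, labels))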

  3. High level cognitive information processing in neural networks

    Science.gov (United States)

    Barnden, John A.; Fields, Christopher A.

    1992-01-01

    Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically-realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.

  4. Controlling the dynamics of multi-state neural networks

    International Nuclear Information System (INIS)

    Jin, Tao; Zhao, Hong

    2008-01-01

    In this paper, we first analyze the distribution of local fields (DLF) which is induced by the memory patterns in the Q-Ising model. It is found that the structure of the DLF is closely correlated with the network dynamics and the system performance. However, the design rule adopted in the Q-Ising model, like the other rules adopted for multi-state neural networks with associative memories, cannot be applied to directly control the DLF for a given set of memory patterns, and thus cannot be applied to further study the relationships between the structure of the DLF and the dynamics of the network. We then extend a design rule, which was presented recently for designing binary-state neural networks, to make it suitable for designing general multi-state neural networks. This rule is able to control the structure of the DLF as expected. We show that controlling the DLF not only can affect the dynamic behaviors of the multi-state neural networks for a given set of memory patterns, but also can improve the storage capacity. With the change of the DLF, the network shows very rich dynamic behaviors, such as the 'chaos phase', the 'memory phase', and the 'mixture phase'. These dynamic behaviors are also observed in the binary-state neural networks; therefore, our results imply that they may be the universal behaviors of feedback neural networks

  5. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, which can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  6. Implementation of neural networks on 'Connection Machine'

    International Nuclear Information System (INIS)

    Belmonte, Ghislain

    1990-12-01

    This report is a first approach to the notion of neural networks and their possible applications within the framework of the artificial intelligence activities of the Department of Applied Mathematics of the Limeil-Valenton Research Center. The first part is an introduction to the field of neural networks; the main neural network models are described in this section. The applications of neural networks to classification have mainly been studied because they could help to solve some of the decision-support problems dealt with by the C.E.A. As neural networks perform a large number of parallel operations, it was logical to use a parallel-architecture computer: the Connection Machine (which uses 16384 processors and is located at E.T.C.A. Arcueil). The second part presents some generalities on parallelism and the Connection Machine, and two implementations of neural networks on the Connection Machine. The first of these implementations concerns one of the most widely used algorithms for neural network learning: the gradient back-propagation algorithm. The second, less common, concerns a network of neurons intended mainly for pattern recognition: the Fukushima Neocognitron. The latter is studied by the C.E.A. of Bruyeres-le-Chatel in order to build an embedded system (including hardened circuits) for fast pattern recognition.

  7. Analysis of some meteorological parameters using artificial neural ...

    African Journals Online (AJOL)

    Analysis of some meteorological parameters using artificial neural network method for ... The mean daily data for sunshine hours, maximum temperature, cloud cover and ... The study used artificial neural networks (ANN) for the estimation.

  8. Nano-topography Enhances Communication in Neural Cells Networks

    KAUST Repository

    Onesto, V.; Cancedda, L.; Coluccio, M. L.; Nanni, M.; Pesce, M.; Malara, N.; Cesarelli, M.; Di Fabrizio, Enzo M.; Amato, F.; Gentile, F.

    2017-01-01

    Neural cells are the smallest building blocks of the central and peripheral nervous systems. Information in neural networks and cell-substrate interactions have been heretofore studied separately. Understanding whether surface nano-topography can

  9. Robust adaptive fuzzy neural tracking control for a class of unknown ...

    Indian Academy of Sciences (India)

    In this paper, an adaptive fuzzy neural controller (AFNC) for a class of unknown chaotic systems is proposed. The proposed AFNC is comprised of a fuzzy neural controller and a robust controller. The fuzzy neural controller including a fuzzy neural network identifier (FNNI) is the principal controller. The FNNI is used for ...

  10. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1 -norm minimization problems.

  11. Insights into neural crest development from studies of avian embryos

    OpenAIRE

    Gandhi, Shashank; Bronner, Marianne E.

    2018-01-01

    The neural crest is a multipotent and highly migratory cell type that contributes to many of the defining features of vertebrates, including the skeleton of the head and most of the peripheral nervous system. 150 years after the discovery of the neural crest, avian embryos remain one of the most important model organisms for studying neural crest development. In this review, we describe aspects of neural crest induction, migration and axial level differences, highlighting what is known about ...

  12. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan; Naous, Rawan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2015-01-01

    Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards

  13. Design of efficient and safe neural stimulators a multidisciplinary approach

    CERN Document Server

    van Dongen, Marijn

    2016-01-01

    This book discusses the design of neural stimulator systems which are used for the treatment of a wide variety of brain disorders such as Parkinson’s, depression and tinnitus. Whereas many existing books treating neural stimulation focus on one particular design aspect, such as the electrical design of the stimulator, this book uses a multidisciplinary approach: by combining the fields of neuroscience, electrophysiology and electrical engineering a thorough understanding of the complete neural stimulation chain is created (from the stimulation IC down to the neural cell). This multidisciplinary approach enables readers to gain new insights into stimulator design, while context is provided by presenting innovative design examples. Provides a single-source, multidisciplinary reference to the field of neural stimulation, bridging an important knowledge gap among the fields of bioelectricity, neuroscience, neuroengineering and microelectronics;Uses a top-down approach to understanding the neural activation proc...

  14. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

    Full Text Available In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes go into saturation. The early creation of such hidden nodes may impair generalisation. Hence, an entropy approach is proposed to dampen the early creation of such nodes. Entropy learning also helps to increase the importance of relevant nodes while dampening the less important nodes. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
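
    One plausible way to realize such an entropy term (a sketch, not necessarily the exact formulation used in the paper) is to penalize low binary entropy of the sigmoid hidden activations, so that nodes sitting near 0 or 1 (saturated) contribute a large penalty while unsaturated nodes near 0.5 contribute little. The weighting λ and the loss below are arbitrary illustrative choices.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def entropy_penalty(h, eps=1e-12):
            """Sum of (ln 2 - binary entropy) over hidden activations h in (0, 1).
            Saturated nodes (h near 0 or 1) are penalized, unsaturated nodes are not."""
            h = np.clip(h, eps, 1.0 - eps)
            H = -(h * np.log(h) + (1.0 - h) * np.log(1.0 - h))
            return np.sum(np.log(2.0) - H)

        def regularized_loss(y_true, y_pred, hidden, lam=1e-3):
            # The entropy term simply adds to the usual error term during learning.
            mse = 0.5 * np.mean((y_true - y_pred) ** 2)
            return mse + lam * entropy_penalty(hidden)

        # A mildly saturated hidden layer costs more than an unsaturated one.
        h_unsaturated = sigmoid(np.array([0.1, -0.2, 0.3]))
        h_saturated = sigmoid(np.array([8.0, -9.0, 7.5]))
        print(entropy_penalty(h_unsaturated), entropy_penalty(h_saturated))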

  15. The neural cell adhesion molecule

    DEFF Research Database (Denmark)

    Berezin, V; Bock, E; Poulsen, F M

    2000-01-01

    During the past year, the understanding of the structure and function of neural cell adhesion has advanced considerably. The three-dimensional structures of several of the individual modules of the neural cell adhesion molecule (NCAM) have been determined, as well as the structure of the complex...... between two identical fragments of the NCAM. Also during the past year, a link between homophilic cell adhesion and several signal transduction pathways has been proposed, connecting the event of cell surface adhesion to cellular responses such as neurite outgrowth. Finally, the stimulation of neurite...

  16. Representation of neutron noise data using neural networks

    International Nuclear Information System (INIS)

    Korsah, K.; Damiano, B.; Wood, R.T.

    1992-01-01

    This paper describes a neural network-based method of representing neutron noise spectra using a model developed at the Oak Ridge National Laboratory (ORNL). The backpropagation neural network learned to represent neutron noise data in terms of four descriptors, and the network response matched calculated values to within 3.5 percent. These preliminary results are encouraging, and further research is directed towards the application of neural networks in a diagnostics system for the identification of the causes of changes in structural spectral resonances. This work is part of our current investigation of advanced technologies such as expert systems and neural networks for neutron noise data reduction, analysis, and interpretation. The objective is to improve the state-of-the-art of noise analysis as a diagnostic tool for nuclear power plants and other mechanical systems

  17. Modular Neural Tile Architecture for Compact Embedded Hardware Spiking Neural Network

    NARCIS (Netherlands)

    Pande, Sandeep; Morgan, Fearghal; Cawley, Seamus; Bruintjes, Tom; Smit, Gerardus Johannes Maria; McGinley, Brian; Carrillo, Snaider; Harkin, Jim; McDaid, Liam

    2013-01-01

    Biologically-inspired packet switched network on chip (NoC) based hardware spiking neural network (SNN) architectures have been proposed as an embedded computing platform for classification, estimation and control applications. Storage of large synaptic connectivity (SNN topology) information in

  18. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  19. Radioactive fallout and neural tube defects

    Directory of Open Access Journals (Sweden)

    Nejat Akar

    2015-10-01

    Full Text Available A possible link between radioactivity and the occurrence of neural tube defects has been debated since the Chernobyl nuclear fallout in 1986. A recent report on the incidence of neural tube defects on the west coast of the USA following the Fukushima disaster brought further evidence for an effect of radioactive fallout on the occurrence of NTDs. Here, a literature review focusing on this specific subject was performed.

  20. Neural network monitoring of resistive welding

    International Nuclear Information System (INIS)

    Quero, J.M.; Millan, R.L.; Franquelo, L.G.; Canas, J.

    1994-01-01

    Supervision of welding processes is one of the most important and complicated tasks in production lines. Artificial Neural Networks have been applied for modeling and control of physical processes. In our paper we propose the use of a neural network classifier for on-line non-destructive testing. This system has been developed and installed in a resistive welding station. Results confirm the validity of this novel approach. (Author) 6 refs

  1. Neural networks. A new analytical tool, applicable also in nuclear technology

    International Nuclear Information System (INIS)

    Stritar, A.

    1992-01-01

    The basic concept of neural networks and the back-propagation learning algorithm are described. The behaviour of a typical neural network is demonstrated on a simple graphical case. A short literature survey of the application of neural networks in nuclear science and engineering is given. The application of a neural network to probability density calculation is shown. (author)

  2. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    Full Text Available Short-term load forecasting model based on quantum Elman neural networks was constructed in this paper. The quantum computation and Elman feedback mechanism were integrated into quantum Elman neural networks. Quantum computation can effectively improve the approximation capability and the information processing ability of the neural networks. Quantum Elman neural networks have not only the feedforward connection but also the feedback connection. The feedback connection between the hidden nodes and the context nodes belongs to the state feedback in the internal system, which has formed specific dynamic memory performance. Phase space reconstruction theory is the theoretical basis of constructing the forecasting model. The training samples are formed by means of K-nearest neighbor approach. Through the example simulation, the testing results show that the model based on quantum Elman neural networks is better than the model based on the quantum feedforward neural network, the model based on the conventional Elman neural network, and the model based on the conventional feedforward neural network. So the proposed model can effectively improve the prediction accuracy. The research in the paper makes a theoretical foundation for the practical engineering application of the short-term load forecasting model based on quantum Elman neural networks.
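
    Two of the ingredients named here, phase-space reconstruction and K-nearest-neighbour formation of training samples, can be sketched independently of the quantum Elman model itself. The snippet below builds a delay embedding of a load series and picks the K historical states closest to the current state as training samples; the embedding dimension, delay, K and the synthetic load series are arbitrary illustrative choices.

        import numpy as np

        def delay_embed(series, dim, tau):
            """Phase-space reconstruction: rows are [x(t), x(t-tau), ..., x(t-(dim-1)tau)]."""
            n = len(series) - (dim - 1) * tau
            return np.stack([series[i + np.arange(dim) * tau][::-1] for i in range(n)])

        def knn_training_set(embedded, targets, query, k):
            """Select the k reconstructed states closest to the query state."""
            d = np.linalg.norm(embedded - query, axis=1)
            idx = np.argsort(d)[:k]
            return embedded[idx], targets[idx]

        rng = np.random.default_rng(3)
        hours = np.arange(24 * 60)                       # hypothetical hourly load history
        load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + 2 * rng.standard_normal(hours.size)

        dim, tau = 4, 1
        E = delay_embed(load, dim, tau)                  # reconstructed states
        y = load[(dim - 1) * tau + 1:]                   # next-hour load for each state
        E = E[:-1]                                       # align states with their targets

        query = E[-1]                                    # current state of the system
        X_train, y_train = knn_training_set(E[:-1], y[:-1], query, k=50)
        print(X_train.shape, y_train.shape)              # samples that feed the forecasting network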

  3. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    Science.gov (United States)

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called avoidant neural network).

  4. Aging differentially affects male and female neural stem cell neurogenic properties

    Directory of Open Access Journals (Sweden)

    Jay Waldron

    2010-09-01

    Full Text Available Purpose: Neural stem cell transplantation as a brain repair strategy is a very promising technology. However, despite many attempts, the clinical success remains very disappointing. Despite clear evidence that sexual dimorphism rules many aspects of human biology, the occurrence of a sex difference in neural stem cell biology is largely understudied. Herein, we propose to determine whether gender is a dimension that drives the fate of neural stem cells through aging. Should it occur, we believe that neural stem cell sexual dimorphism and its variation during aging should be taken into account to refine clinical approaches of brain repair strategies. Methods: Neural stem cells were isolated from the subventricular zone of three- and 20-month-old male and female Long-Evans rats. Expression of the estrogen receptors, ERα and ERβ, progesterone receptor, androgen receptor, and glucocorticoid receptor was analyzed and quantified by Western blotting on undifferentiated neural stem cells. A second set of neural stem cells was treated with retinoic acid to trigger differentiation, and the expression of neuronal, astroglial, and oligodendroglial markers was determined using Western blotting. Conclusion: We provided in vitro evidence that the fate of neural stem cells is affected by sex and aging. Indeed, young male neural stem cells mainly expressed markers of neuronal and oligodendroglial fate, whereas young female neural stem cells underwent differentiation towards an astroglial phenotype. Aging resulted in a lessened capacity to express neuron and astrocyte markers. Undifferentiated neural stem cells displayed sexual dimorphism in the expression of steroid receptors, in particular ERα and ERβ, and the expression level of several steroid receptors increased

  5. Issues in the use of neural networks in information retrieval

    CERN Document Server

    Iatan, Iuliana F

    2017-01-01

    This book highlights the ability of neural networks (NNs) to be excellent pattern matchers and their importance in information retrieval (IR), which is based on index term matching. The book defines a new NN-based method for learning image similarity and describes how to use fuzzy Gaussian neural networks to predict personality. It introduces the fuzzy Clifford Gaussian network, and two concurrent neural models: (1) concurrent fuzzy nonlinear perceptron modules, and (2) concurrent fuzzy Gaussian neural network modules. Furthermore, it explains the design of a new model of fuzzy nonlinear perceptron based on alpha level sets and describes a recurrent fuzzy neural network model with a learning algorithm based on the improved particle swarm optimization method.

  6. A study of reactor monitoring method with neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nabeshima, Kunihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The purpose of this study is to investigate the methodology of Nuclear Power Plant (NPP) monitoring with neural networks, which create the plant models by learning past normal operation patterns. The concept of this method is to detect the symptoms of small anomalies by monitoring the deviations between the process signals measured from an actual plant and the corresponding output signals from the neural network model, which might not be equal if abnormal operational patterns are presented to the input of the neural network. An auto-associative network, which has the same outputs as inputs, can detect any kind of anomaly condition by using normal operation data only. The monitoring tests of the feedforward neural network with adaptive learning were performed using a PWR plant simulator, by which many kinds of anomaly conditions can be easily simulated. The adaptively trained feedforward network could follow the actual plant dynamics and the changes of plant condition, and then find most of the anomalies much earlier than the conventional alarm system during steady-state and transient operations. The off-line and on-line test results during one year of operation at an actual NPP (PWR) showed that the neural network could detect several small anomalies which neither the operators nor the conventional alarm system noticed. Furthermore, the sensitivity analysis suggests that the plant models built by the neural networks are appropriate. Finally, the simulation results show that the recurrent neural network with feedback connections could successfully model the slow behavior of the reactor dynamics without adaptive learning. Therefore, the recurrent neural network with adaptive learning will be the best choice for an actual reactor monitoring system. (author)
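
    The core monitoring idea, flagging anomalies from the residual between measured signals and the model's reconstruction of them, can be sketched without the plant data. Below, a linear PCA reconstruction stands in for the auto-associative network (it is not the adaptive network described in the report), and an alarm is raised when the residual exceeds a threshold learned from normal operation; all signals are synthetic.

        import numpy as np

        class AutoAssociativeMonitor:
            """Linear (PCA) stand-in for an auto-associative NN trained on normal data."""
            def fit(self, X_normal, n_components=3, sigma=4.0):
                self.mean = X_normal.mean(axis=0)
                U, S, Vt = np.linalg.svd(X_normal - self.mean, full_matrices=False)
                self.V = Vt[:n_components].T                       # principal subspace
                r = self._residual(X_normal)
                self.threshold = r.mean() + sigma * r.std()         # alarm level from normal data
                return self

            def _residual(self, X):
                Z = (X - self.mean) @ self.V                        # encode
                X_hat = Z @ self.V.T + self.mean                    # reconstruct ("auto-associate")
                return np.linalg.norm(X - X_hat, axis=1)

            def check(self, X):
                return self._residual(X) > self.threshold           # True = possible anomaly

        rng = np.random.default_rng(4)
        # Hypothetical correlated process signals (e.g., flows, temperatures) under normal operation.
        latent = rng.normal(size=(2000, 3))
        mix = rng.normal(size=(3, 8))
        X_normal = latent @ mix + 0.05 * rng.normal(size=(2000, 8))

        monitor = AutoAssociativeMonitor().fit(X_normal)
        x_new = X_normal[-1].copy()
        x_new[2] += 1.5                                             # small drift in one sensor
        print(monitor.check(np.vstack([X_normal[-1], x_new])))      # [False, True] expected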

  7. Neural Networks for Modeling and Control of Particle Accelerators

    Science.gov (United States)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  8. Topology influences performance in the associative memory neural networks

    International Nuclear Information System (INIS)

    Lu Jianquan; He Juan; Cao Jinde; Gao Zhiqiang

    2006-01-01

    To explore how topology affects performance within Hopfield-type associative memory neural networks (AMNNs), we studied the computational performance of neural networks with regular lattice, random, small-world, and scale-free structures. In this Letter, we found that the memory performance of neural networks obtained through asynchronous updating from 'larger' nodes to 'smaller' nodes is better than with asynchronous updating in random order, especially for the scale-free topology. The computational performance of associative memory neural networks linked by the above-mentioned network topologies with the same numbers of nodes (neurons) and edges (synapses) was studied respectively. As topologies become more random and less locally disordered, the performance of the associative memory neural network improves considerably. By comparison, we show that the regular lattice and the random network form two extremes in terms of pattern stability and retrievability. For a network, pattern stability and retrievability can be largely enhanced by adding a random component or some shortcuts to its structured component. According to the conclusions of this Letter, we can design associative memory neural networks with high performance and minimal interconnect requirements.
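
    A minimal sketch of the setting studied here, assuming standard Hebbian Hopfield dynamics: binary patterns are stored with a Hebbian rule, the weight matrix is masked by an adjacency matrix (here a random graph; a lattice, small-world or scale-free graph could be substituted), and asynchronous updates are applied from high-degree to low-degree nodes. The network size, pattern count and connection density are arbitrary.

        import numpy as np

        def hebbian_weights(patterns, adjacency):
            W = patterns.T @ patterns / patterns.shape[1]   # Hebbian rule
            np.fill_diagonal(W, 0.0)
            return W * adjacency                            # keep only existing synapses

        def recall(W, state, order, steps=10):
            s = state.copy()
            for _ in range(steps):
                for i in order:                             # asynchronous, fixed update order
                    s[i] = 1 if W[i] @ s >= 0 else -1
            return s

        rng = np.random.default_rng(5)
        n, p = 100, 3
        patterns = rng.choice([-1, 1], size=(p, n))

        A = (rng.random((n, n)) < 0.3).astype(float)        # random topology (stand-in)
        A = np.triu(A, 1); A = A + A.T                      # symmetric, no self-connections

        W = hebbian_weights(patterns, A)
        degree_order = np.argsort(-A.sum(axis=1))           # 'larger' nodes first, as in the Letter

        probe = patterns[0].copy()
        flip = rng.choice(n, 10, replace=False)
        probe[flip] *= -1                                    # corrupt 10% of the bits
        recovered = recall(W, probe, degree_order)
        print("overlap with stored pattern:", (recovered @ patterns[0]) / n)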

  9. Sejarah, Penerapan, dan Analisis Resiko dari Neural Network: Sebuah Tinjauan Pustaka

    Directory of Open Access Journals (Sweden)

    Cristina Cristina

    2018-05-01

    Full Text Available A neural network is a form of artificial intelligence that has the ability to learn, grow, and adapt in a dynamic environment. Neural networks date back to 1890, when the great American psychologist William James wrote the book "Principles of Psychology". James was the first to publish a number of facts related to the structure and function of the brain. The history of neural network development is divided into four epochs: the Camelot era, the Depression, the Renaissance, and the Neoconnectionism era. Neural networks used today are not 100 percent accurate. However, they are still used because of their better performance compared with alternative computing models. Uses of neural networks include pattern recognition, signal analysis, robotics, and expert systems. Risk analysis of a neural network is first performed using hazard and operability studies (HAZOPS). Determining the neural network requirements properly helps in determining its contribution to system hazards and in validating the control or mitigation of any hazards. After the first stage (HAZOPS) is completed and the second stage determines the requirements, the next stage is design. Neural networks undergo repeated design-train-test development. At the design stage, the hazard analysis should consider the design aspects of the development, which include neural network architecture, size, intended use, and so on. This continues through the implementation stage, test phase, installation and inspection phase, and operation phase, and ends at the maintenance stage.

  10. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In the present study, an artificial neural network method has been applied to predict the stability of berm breakwaters. Four neural network models are constructed based on the parameters which influence the stability of breakwater. Training...

  11. High school music classes enhance the neural processing of speech.

    Science.gov (United States)

    Tierney, Adam; Krizman, Jennifer; Skoe, Erika; Johnston, Kathleen; Kraus, Nina

    2013-01-01

    Should music be a priority in public education? One argument for teaching music in school is that private music instruction relates to enhanced language abilities and neural function. However, the directionality of this relationship is unclear and it is unknown whether school-based music training can produce these enhancements. Here we show that 2 years of group music classes in high school enhance the neural encoding of speech. To tease apart the relationships between music and neural function, we tested high school students participating in either music or fitness-based training. These groups were matched at the onset of training on neural timing, reading ability, and IQ. Auditory brainstem responses were collected to a synthesized speech sound presented in background noise. After 2 years of training, the neural responses of the music training group were earlier than at pre-training, while the neural timing of students in the fitness training group was unchanged. These results represent the strongest evidence to date that in-school music education can cause enhanced speech encoding. The neural benefits of musical training are, therefore, not limited to expensive private instruction early in childhood but can be elicited by cost-effective group instruction during adolescence.

  12. Use of neural networks to monitor power plant components

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Tsoukalas, L.H.

    1992-01-01

    A new methodology is presented for nondestructive evaluation (NDE) of check valve performance and degradation. Artificial neural network (ANN) technology is utilized for processing frequency domain signatures of check valves operating in a nuclear power plant (NPP). Acoustic signatures obtained from different locations on a check valve are transformed from the time domain to the frequency domain and then used as input to a pretrained neural network. The neural network has been trained with data sets corresponding to normal operation, therefore establishing a basis for check valve satisfactory performance. Results obtained from the proposed methodology demonstrate the ability of neural networks to perform accurate and quick evaluations of check valve performance
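
    A sketch of the signal-processing front end implied here, with synthetic data standing in for the acoustic signatures: a time-domain segment is windowed and transformed with an FFT, reduced to a band-averaged spectral feature vector, and compared against the envelope of features collected during normal valve operation. A pretrained neural network classifier would replace the simple distance test; all frequencies, amplitudes and thresholds below are invented.

        import numpy as np

        def spectral_features(signal, n_bands=16):
            """Band-averaged magnitude spectrum of one acoustic segment."""
            spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
            bands = np.array_split(spec, n_bands)
            return np.array([b.mean() for b in bands])

        rng = np.random.default_rng(6)
        fs, n = 10_000, 4096
        t = np.arange(n) / fs

        def valve_signature(extra_tone=0.0):
            # Hypothetical acoustic signature: broadband noise plus a structural resonance.
            return (0.5 * rng.standard_normal(n)
                    + np.sin(2 * np.pi * 800 * t)
                    + extra_tone * np.sin(2 * np.pi * 2500 * t))

        normal = np.stack([spectral_features(valve_signature()) for _ in range(50)])
        mu, sd = normal.mean(0), normal.std(0)

        test = spectral_features(valve_signature(extra_tone=1.5))   # degraded valve (new tone)
        z = np.abs(test - mu) / sd
        print("max band z-score:", z.max(),
              "-> flag for inspection" if z.max() > 5 else "-> normal")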

  13. Drosophila olfactory memory: single genes to complex neural circuits.

    Science.gov (United States)

    Keene, Alex C; Waddell, Scott

    2007-05-01

    A central goal of neuroscience is to understand how neural circuits encode memory and guide behaviour. Studying simple, genetically tractable organisms, such as Drosophila melanogaster, can illuminate principles of neural circuit organization and function. Early genetic dissection of D. melanogaster olfactory memory focused on individual genes and molecules. These molecular tags subsequently revealed key neural circuits for memory. Recent advances in genetic technology have allowed us to manipulate and observe activity in these circuits, and even individual neurons, in live animals. The studies have transformed D. melanogaster from a useful organism for gene discovery to an ideal model to understand neural circuit function in memory.

  14. A novel recurrent neural network with finite-time convergence for linear programming.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.

  15. Deep Learning Neural Networks in Cybersecurity - Managing Malware with AI

    OpenAIRE

    Rayle, Keith

    2017-01-01

    There’s a lot of talk about the benefits of deep learning (neural networks) and how it’s the new electricity that will power us into the future. Medical diagnosis, computer vision and speech recognition are all examples of use-cases where neural networks are being applied in our everyday business environment. This begs the question…what are the uses of neural-network applications for cyber security? How does the AI process work when applying neural networks to detect malicious software bombar...

  16. Application of artificial neural network in radiographic diagnosis

    International Nuclear Information System (INIS)

    Piraino, D.; Amartur, S.; Richmond, B.; Schils, J.; Belhobek, G.

    1990-01-01

    This paper reports on an artificial neural network trained to rate the likelihood of different bone neoplasms when given a standard description of a radiograph. A three-layer back propagation algorithm was trained with descriptions of examples of bone neoplasms obtained from standard radiographic textbooks. Fifteen bone neoplasms obtained from clinical material were used as unknowns to test the trained artificial neural network. The artificial neural network correctly rated the pathologic diagnosis as the most likely diagnosis in 10 of the 15 unknown cases

  17. Noise suppress or express exponential growth for hybrid Hopfield neural networks

    International Nuclear Information System (INIS)

    Zhu Song; Shen Yi; Chen Guici

    2010-01-01

    In this Letter, we show that noise can make a given hybrid Hopfield neural network whose solution may grow exponentially become a new stochastic hybrid Hopfield neural network whose solution grows at most polynomially. On the other hand, we also show that noise can make a given hybrid Hopfield neural network whose solution grows at most polynomially become a new stochastic hybrid Hopfield neural network whose solution grows exponentially. In other words, we reveal that noise can suppress or express exponential growth for hybrid Hopfield neural networks.

  18. Efeito da timpanoplastia no zumbido de pacientes com hipoacusia condutiva: seguimento de seis meses The effect of timpanoplasty on tinnitus in patients with conductive hearing loss: a six month follow-up

    Directory of Open Access Journals (Sweden)

    Adriana da Silva Lima

    2007-06-01

    Full Text Available Tympanoplasty is done to eradicate middle ear disease and to restore the conductive hearing mechanism (eardrum and ossicles). Some patients, however, are bothered by tinnitus and often ask the physician about the results of surgery with respect to tinnitus. AIM: to evaluate the progression of tinnitus in patients with conductive hearing loss after tympanoplasty. STUDY DESIGN: a prospective cohort study. MATERIALS AND METHODS: 23 patients with a complaint of tinnitus and a diagnosis of simple chronic otitis media with surgical indication were evaluated. The patients underwent a medical and audiological tinnitus investigation protocol before and at 30 and 180 days after tympanoplasty. RESULTS: 82.6% of the patients showed improvement or abolition of tinnitus. Tinnitus annoyance improved significantly from the preoperative score (5.26) to the postoperative scores at 30 and 180 days (1.91), as did annoyance from the hearing loss (6.56 preoperatively versus 3.65 and 2.91 postoperatively). Audiometry showed improved tonal thresholds at all frequencies except 8 kHz, with closure or a maximum air-bone gap of 10 dB HL in 61% of cases. Complete graft take was achieved in 78% of cases. CONCLUSION: Besides improving the hearing loss, tympanoplasty also provides good results in the control of tinnitus.

  19. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, and the difficulty in building and training the neural network is also reduced. Simulation results for the Logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.

  20. Neural Networks for Modeling and Control of Particle Accelerators

    CERN Document Server

    Edelen, A.L.; Chase, B.E.; Edstrom, D.; Milton, S.V.; Stabile, P.

    2016-01-01

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  1. The gamma model : a new neural network for temporal processing

    NARCIS (Netherlands)

    Vries, de B.

    1992-01-01

    In this paper we develop the gamma neural model, a new neural net architecture for processing of temporal patterns. Time varying patterns are normally segmented into a sequence of static patterns that are successively presented to a neural net. In the approach presented here segmentation is avoided.
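
    The gamma memory at the heart of this model can be sketched in a few lines: the input is passed through a cascade of identical leaky integrators, so the network sees a set of progressively smoothed and delayed versions of the signal instead of a hard segmentation into static windows. This is a generic sketch of a gamma memory structure, with arbitrary choices of depth and of the μ parameter.

        import numpy as np

        def gamma_memory(u, order, mu):
            """Cascade of leaky integrators: x_0[n] = u[n],
            x_k[n] = (1 - mu) * x_k[n-1] + mu * x_{k-1}[n-1]  for k = 1..order."""
            x = np.zeros((order + 1, len(u)))
            x[0] = u
            for n in range(1, len(u)):
                for k in range(1, order + 1):
                    x[k, n] = (1 - mu) * x[k, n - 1] + mu * x[k - 1, n - 1]
            return x.T        # row n holds the memory taps fed to the neural net at time n

        u = np.zeros(50); u[5] = 1.0                      # impulse input
        taps = gamma_memory(u, order=4, mu=0.3)
        print(np.round(taps[5:15, :], 3))                 # dispersed, delayed traces of the impulse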

  2. Neutron spectrometry with artificial neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A.; Iniguez de la Torre Bayo, M.P.; Barquero, R.; Arteaga A, T.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources, reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions, as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of Artificial Neural Networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
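
    The folding step and the verification metrics mentioned here are easy to make concrete. The sketch below uses a small synthetic response matrix to fold a spectrum into sphere count rates (the quantity fed to the network) and then compares an "unfolded" spectrum against the original with a χ²-style statistic and the total-fluence ratio. The response matrix, the spectrum and the pseudo-inverse stand-in for the trained network are all made up for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        n_spheres, n_groups = 7, 31                # Bonner spheres x energy groups (31 as in the study)

        R = np.abs(rng.normal(1.0, 0.3, (n_spheres, n_groups)))           # hypothetical response matrix
        phi_true = np.abs(np.sin(np.linspace(0, np.pi, n_groups))) + 0.1  # hypothetical spectrum

        counts = R @ phi_true                      # expected sphere count rates (network input)

        # Stand-in for the trained ANN: a regularized pseudo-inverse "unfolding".
        phi_unfolded = np.clip(np.linalg.pinv(R, rcond=1e-2) @ counts, 0, None)

        chi2 = np.sum((phi_unfolded - phi_true) ** 2 / (phi_true + 1e-12))
        fluence_ratio = phi_unfolded.sum() / phi_true.sum()
        print(f"chi2-like statistic: {chi2:.3f}, total fluence ratio: {fluence_ratio:.3f}")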

  3. hmmr mediates anterior neural tube closure and morphogenesis in the frog Xenopus.

    Science.gov (United States)

    Prager, Angela; Hagenlocher, Cathrin; Ott, Tim; Schambony, Alexandra; Feistel, Kerstin

    2017-10-01

    Development of the central nervous system requires orchestration of morphogenetic processes which drive elevation and apposition of the neural folds and their fusion into a neural tube. The newly formed tube gives rise to the brain in anterior regions and continues to develop into the spinal cord posteriorly. Conspicuous differences between the anterior and posterior neural tube become visible already during neural tube closure (NTC). Planar cell polarity (PCP)-mediated convergent extension (CE) movements are restricted to the posterior neural plate, i.e. hindbrain and spinal cord, where they propagate neural fold apposition. The lack of CE in the anterior neural plate correlates with a much slower mode of neural fold apposition anteriorly. The morphogenetic processes driving anterior NTC have not been addressed in detail. Here, we report a novel role for the breast cancer susceptibility gene and microtubule (MT) binding protein Hmmr (Hyaluronan-mediated motility receptor, RHAMM) in anterior neurulation and forebrain development in Xenopus laevis. Loss of hmmr function resulted in a lack of telencephalic hemisphere separation, arising from defective roof plate formation, which in turn was caused by impaired neural tissue narrowing. hmmr regulated polarization of neural cells, a function which was dependent on the MT binding domains. hmmr cooperated with the core PCP component vangl2 in regulating cell polarity and neural morphogenesis. Disrupted cell polarization and elongation in hmmr and vangl2 morphants prevented radial intercalation (RI), a cell behavior essential for neural morphogenesis. Our results pinpoint a novel role of hmmr in anterior neural development and support the notion that RI is a major driving force for anterior neurulation and forebrain morphogenesis. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets.

    Science.gov (United States)

    Sengupta, Abhronil; Shim, Yong; Roy, Kaushik

    2016-12-01

    Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused on mimicking either the neuron or the synapse functionality. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit and system level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings of ∼100× in comparison to a corresponding digital/analog CMOS neuron implementation.

  5. Central neural pathways for thermoregulation

    Science.gov (United States)

    Morrison, Shaun F.; Nakamura, Kazuhiro

    2010-01-01

    Central neural circuits orchestrate a homeostatic repertoire to maintain body temperature during environmental temperature challenges and to alter body temperature during the inflammatory response. This review summarizes the functional organization of the neural pathways through which cutaneous thermal receptors alter thermoregulatory effectors: the cutaneous circulation for heat loss, the brown adipose tissue, skeletal muscle and heart for thermogenesis and species-dependent mechanisms (sweating, panting and saliva spreading) for evaporative heat loss. These effectors are regulated by parallel but distinct, effector-specific neural pathways that share a common peripheral thermal sensory input. The thermal afferent circuits include cutaneous thermal receptors, spinal dorsal horn neurons and lateral parabrachial nucleus neurons projecting to the preoptic area to influence warm-sensitive, inhibitory output neurons which control thermogenesis-promoting neurons in the dorsomedial hypothalamus that project to premotor neurons in the rostral ventromedial medulla, including the raphe pallidus, that descend to provide the excitation necessary to drive thermogenic thermal effectors. A distinct population of warm-sensitive preoptic neurons controls heat loss through an inhibitory input to raphe pallidus neurons controlling cutaneous vasoconstriction. PMID:21196160

  6. Neural Representations of Physics Concepts.

    Science.gov (United States)

    Mason, Robert A; Just, Marcel Adam

    2016-06-01

    We used functional MRI (fMRI) to assess neural representations of physics concepts (momentum, energy, etc.) in juniors, seniors, and graduate students majoring in physics or engineering. Our goal was to identify the underlying neural dimensions of these representations. Using factor analysis to reduce the number of dimensions of activation, we obtained four physics-related factors that were mapped to sets of voxels. The four factors were interpretable as causal motion visualization, periodicity, algebraic form, and energy flow. The individual concepts were identifiable from their fMRI signatures with a mean rank accuracy of .75 using a machine-learning (multivoxel) classifier. Furthermore, there was commonality in participants' neural representation of physics; a classifier trained on data from all but one participant identified the concepts in the left-out participant (mean accuracy = .71 across all nine participant samples). The findings indicate that abstract scientific concepts acquired in an educational setting evoke activation patterns that are identifiable and common, indicating that science education builds abstract knowledge using inherent, repurposed brain systems. © The Author(s) 2016.

  7. Racial bias in neural empathic responses to pain.

    Directory of Open Access Journals (Sweden)

    Luis Sebastian Contreras-Huerta

    Full Text Available Recent studies have shown that perceiving the pain of others activates brain regions in the observer associated with both somatosensory and affective-motivational aspects of pain, principally involving regions of the anterior cingulate and anterior insula cortex. The degree of these empathic neural responses is modulated by racial bias, such that stronger neural activation is elicited by observing pain in people of the same racial group compared with people of another racial group. The aim of the present study was to examine whether a more general social group category, other than race, could similarly modulate neural empathic responses and perhaps account for the apparent racial bias reported in previous studies. Using a minimal group paradigm, we assigned participants to one of two mixed-race teams. We use the term race to refer to the Chinese or Caucasian appearance of faces and whether the ethnic group represented was the same or different from the appearance of the participant's own face. Using fMRI, we measured neural empathic responses as participants observed members of their own group or other group, and members of their own race or other race, receiving either painful or non-painful touch. Participants showed clear group biases, with no significant effect of race, on behavioral measures of implicit (affective priming) and explicit group identification. Neural responses to observed pain in the anterior cingulate cortex, insula cortex, and somatosensory areas showed significantly greater activation when observing pain in own-race compared with other-race individuals, with no significant effect of minimal groups. These results suggest that racial bias in neural empathic responses is not influenced by minimal forms of group categorization, despite the clear association participants showed with in-group more than out-group members. We suggest that race may be an automatic and unconscious mechanism that drives the initial neural responses to

  8. Racial Bias in Neural Empathic Responses to Pain

    Science.gov (United States)

    Contreras-Huerta, Luis Sebastian; Baker, Katharine S.; Reynolds, Katherine J.; Batalha, Luisa; Cunnington, Ross

    2013-01-01

    Recent studies have shown that perceiving the pain of others activates brain regions in the observer associated with both somatosensory and affective-motivational aspects of pain, principally involving regions of the anterior cingulate and anterior insula cortex. The degree of these empathic neural responses is modulated by racial bias, such that stronger neural activation is elicited by observing pain in people of the same racial group compared with people of another racial group. The aim of the present study was to examine whether a more general social group category, other than race, could similarly modulate neural empathic responses and perhaps account for the apparent racial bias reported in previous studies. Using a minimal group paradigm, we assigned participants to one of two mixed-race teams. We use the term race to refer to the Chinese or Caucasian appearance of faces and whether the ethnic group represented was the same or different from the appearance of the participant's own face. Using fMRI, we measured neural empathic responses as participants observed members of their own group or other group, and members of their own race or other race, receiving either painful or non-painful touch. Participants showed clear group biases, with no significant effect of race, on behavioral measures of implicit (affective priming) and explicit group identification. Neural responses to observed pain in the anterior cingulate cortex, insula cortex, and somatosensory areas showed significantly greater activation when observing pain in own-race compared with other-race individuals, with no significant effect of minimal groups. These results suggest that racial bias in neural empathic responses is not influenced by minimal forms of group categorization, despite the clear association participants showed with in-group more than out-group members. We suggest that race may be an automatic and unconscious mechanism that drives the initial neural responses to observed pain in

  9. What the success of brain imaging implies about the neural code.

    Science.gov (United States)

    Guest, Olivia; Love, Bradley C

    2017-01-19

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

  10. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable to the noise-constrained estimate. Because the noise-constrained estimate has a robust performance against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, having a low-dimensional model feature, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm can produce good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
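
    A minimal sketch of the Kalman-filter half of such a scheme is shown below. The AR parameters are estimated here with ordinary least squares as a simple stand-in for the paper's recurrent-network estimator, and the noise variances are assumed known; this is illustrative only.

    # Hedged sketch of Kalman-filter speech enhancement with an AR signal model.
    import numpy as np

    def fit_ar(x, p):
        """Estimate AR(p) coefficients a such that x[k] ~ sum_i a[i] * x[k-1-i]."""
        X = np.column_stack([x[p - 1 - i:len(x) - 1 - i] for i in range(p)])
        a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        return a

    def kalman_ar_denoise(y, a, q, r):
        """Kalman filter with a companion-form AR state model; returns the filtered signal."""
        p = len(a)
        F = np.vstack([a, np.eye(p)[:-1]])          # companion transition matrix
        H = np.zeros(p); H[0] = 1.0                 # we observe the current sample only
        Q = np.zeros((p, p)); Q[0, 0] = q           # process noise drives s[k] only
        x, P = np.zeros(p), np.eye(p)
        out = np.empty_like(y)
        for k, yk in enumerate(y):
            x = F @ x
            P = F @ P @ F.T + Q
            S = H @ P @ H + r                       # innovation variance (scalar)
            K = P @ H / S                           # Kalman gain
            x = x + K * (yk - H @ x)
            P = P - np.outer(K, H @ P)
            out[k] = x[0]
        return out

    # Toy usage: a noisy AR(2) "speech-like" signal.
    rng = np.random.default_rng(1)
    n = 2000
    s = np.zeros(n)
    for k in range(2, n):
        s[k] = 1.5 * s[k - 1] - 0.7 * s[k - 2] + rng.normal(scale=0.1)
    noisy = s + rng.normal(scale=0.3, size=n)
    a = fit_ar(noisy, p=2)
    clean = kalman_ar_denoise(noisy, a, q=0.1 ** 2, r=0.3 ** 2)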

  11. Multiple simultaneous fault diagnosis via hierarchical and single artificial neural networks

    International Nuclear Information System (INIS)

    Eslamloueyan, R.; Shahrokhi, M.; Bozorgmehri, R.

    2003-01-01

    Process fault diagnosis involves interpreting the current status of the plant given sensor readings and process knowledge. There has been considerable work done in this area, with a variety of approaches being proposed for process fault diagnosis. Neural networks have been used to solve process fault diagnosis problems in chemical processes, as they are well suited for recognizing multi-dimensional nonlinear patterns. In this work, the use of Hierarchical Artificial Neural Networks in diagnosing multiple faults of a chemical process is discussed and compared with that of Single Artificial Neural Networks. The lower efficiency of Hierarchical Artificial Neural Networks, in comparison to Single Artificial Neural Networks, in process fault diagnosis is elaborated and analyzed. Also, the concept of a multi-level selection switch is presented and developed to improve the performance of hierarchical artificial neural networks. Simulation results indicate that application of the multi-level selection switch increases the performance of the hierarchical artificial neural networks considerably.

  12. Effect of ionizing radiation on the differentiation of neural stem cells

    International Nuclear Information System (INIS)

    Liu Ping; Tu Yu

    2010-01-01

    In order to investigate the effect of ionizing radiation on neural stem cell differentiation, we cultured neural stem cells from newborn rats in serum-free media containing EGF or bFGF. The neural stem cells were divided into 4 groups, which were irradiated by γ-rays with doses of 0, 0.5, 1, and 2 Gy. The irradiated cells were cultured under the same conditions for 7 days, and the nestin content of the neural stem cells was detected by immunofluorescence. The same procedure was carried out with irradiated cells cultured for 7 days in medium from which EGF and bFGF had been removed, and the expression of NSE, GFAP and nestin was also detected by immunofluorescence. It was found that the irradiated neural stem cells expressed less nestin and differentiated into more neurons compared to the control group. The results show that ionizing radiation can induce the differentiation of neural stem cells and make them differentiate into more neurons. (authors)

  13. Stability of Neutral Fractional Neural Networks with Delay

    Institute of Scientific and Technical Information of China (English)

    LI Yan; JIANG Wei; HU Bei-bei

    2016-01-01

    This paper studies the stability of neutral fractional neural networks with delay. By introducing a suitable norm and using the definition of uniform stability, a sufficient condition for the uniform stability of neutral fractional neural networks with delay is obtained.

  14. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control and neural networks are only approximators of various functions so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than scatter in the data. A method is proposed that provides improvements in the accuracy achieved during training and resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs and retraining using a small number of the outlier x,y pairs improved generalization. (author)
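
    The core idea, measuring a network's accuracy by how well it approximates a known non-linear function, can be sketched as follows (a generic scikit-learn regressor on a noise-free sine curve; the paper's binary input encoding and outlier retraining steps are not reproduced):

    # Hedged illustration: quantify a feed-forward network's accuracy by its
    # approximation error on a known non-linear curve with no scatter in the data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    x = np.linspace(-1.0, 1.0, 400).reshape(-1, 1)
    y = np.sin(3 * np.pi * x).ravel()                 # known target function

    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
    net.fit(x, y)
    err = np.max(np.abs(net.predict(x) - y))          # worst-case approximation error
    print("max approximation error:", err)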

  15. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models-also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.
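
    For reference, the two classes of systems related by this embedding can be written in generic form as follows; this is a paraphrase of standard formulations and does not reproduce the paper's intermediate quasi-monomial change of variables:

    \begin{align}
      \dot{y}_i &= -\frac{y_i}{\tau_i} + \sum_{j} w_{ij}\,\sigma(y_j) + b_i
        \qquad \text{(continuous-time recurrent neural network),}\\
      \dot{x}_i &= x_i\Big(\lambda_i + \sum_{j} A_{ij}\,x_j\Big)
        \qquad \text{(Lotka-Volterra / predator-prey system),}
    \end{align}

    where $\sigma$ is the hyperbolic tangent or logistic sigmoid. The embedded Lotka-Volterra system has more variables than the original network, but its first variables reproduce the network's trajectories.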

  16. Isolation and culture of neural crest cells from embryonic murine neural tube.

    Science.gov (United States)

    Pfaltzgraff, Elise R; Mundell, Nathan A; Labosky, Patricia A

    2012-06-02

    The embryonic neural crest (NC) is a multipotent progenitor population that originates at the dorsal aspect of the neural tube, undergoes an epithelial to mesenchymal transition (EMT) and migrates throughout the embryo, giving rise to diverse cell types. NC also has the unique ability to influence the differentiation and maturation of target organs. When explanted in vitro, NC progenitors undergo self-renewal, migrate and differentiate into a variety of tissue types including neurons, glia, smooth muscle cells, cartilage and bone. NC multipotency was first described from explants of the avian neural tube. In vitro isolation of NC cells facilitates the study of NC dynamics including proliferation, migration, and multipotency. Further work in the avian and rat systems demonstrated that explanted NC cells retain their NC potential when transplanted back into the embryo. Because these inherent cellular properties are preserved in explanted NC progenitors, the neural tube explant assay provides an attractive option for studying the NC in vitro. To attain a better understanding of the mammalian NC, many methods have been employed to isolate NC populations. NC-derived progenitors can be cultured from post-migratory locations in both the embryo and adult to study the dynamics of post-migratory NC progenitors, however isolation of NC progenitors as they emigrate from the neural tube provides optimal preservation of NC cell potential and migratory properties. Some protocols employ fluorescence activated cell sorting (FACS) to isolate a NC population enriched for particular progenitors. However, when starting with early stage embryos, cell numbers adequate for analyses are difficult to obtain with FACS, complicating the isolation of early NC populations from individual embryos. Here, we describe an approach that does not rely on FACS and results in an approximately 96% pure NC population based on a Wnt1-Cre activated lineage reporter. The method presented here is adapted from

  17. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding for improved fiber quality. To demonstrate the applicability of this approach, the research used evaluation data from 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data from 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated by a weighted average of the scores (1 to 5) assigned to each HVI characteristic according to industry standards. The artificial neural networks showed a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index: when fiber length was used together with the short fiber index, fiber maturity, and micronaire index, the networks gave better results than when using only fiber length and the previous associations. It was also observed that submitting mean data from new genotypes to neural networks trained with replicate data provides better genotype classification. These results indicate that artificial neural networks have great potential for use at the different stages of a cotton breeding program aimed at improving the fiber quality of future cultivars.

  18. Using neural networks in software repositories

    Science.gov (United States)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  19. Advanced models of neural networks nonlinear dynamics and stochasticity in biological neurons

    CERN Document Server

    Rigatos, Gerasimos G

    2015-01-01

    This book provides a complete study on neural structures exhibiting nonlinear and stochastic dynamics, elaborating on neural dynamics by introducing advanced models of neural networks. It overviews the main findings in the modelling of neural dynamics in terms of electrical circuits and examines their stability properties with the use of dynamical systems theory. It is suitable for researchers and postgraduate students engaged with neural networks and dynamical systems theory.

  20. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.

  1. Neural growth into a microchannel network: towards a regenerative neural interface

    NARCIS (Netherlands)

    Wieringa, P.A.; Wiertz, Remy; le Feber, Jakob; Rutten, Wim

    2009-01-01

    We propose and validate a design for a highly selective 'endcap' regenerative neural interface towards a neuroprosthesis. In vitro studies using rat cortical neurons determine whether a branching microchannel structure can counter fasciculated growth and cause neurites to separate from one another,

  2. A one-layer recurrent neural network for constrained nonsmooth optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-10-01

    This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.
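
    A toy illustration of the neurodynamic idea, a penalty-based subgradient flow for a small nonsmooth problem integrated with explicit Euler steps, is sketched below. It is not the network proposed in the paper; the problem, penalty parameter, and step size are all hypothetical.

    # Hedged sketch of a penalty-based neurodynamic (differential-inclusion-style)
    # flow for a small nonsmooth problem: minimize |x1| + |x2| subject to x1 + x2 = 1.
    import numpy as np

    def subgrad_abs(v):
        # A selection from the subdifferential of |v| (0 is chosen at v = 0).
        return np.sign(v)

    def neurodynamic_flow(x0, sigma=5.0, dt=1e-3, steps=20000):
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            g_obj = subgrad_abs(x)                                   # subgradient of |x1| + |x2|
            g_con = subgrad_abs(x.sum() - 1.0) * np.ones_like(x)     # subgradient of |x1 + x2 - 1|
            x -= dt * (g_obj + sigma * g_con)
            # With a large enough exact-penalty parameter sigma, the state settles
            # near a feasible minimizer (objective value 1 for this toy problem).
        return x

    x_star = neurodynamic_flow([2.0, -3.0])
    print(x_star, abs(x_star).sum(), x_star.sum())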

  3. Review: the role of neural crest cells in the endocrine system.

    Science.gov (United States)

    Adams, Meghan Sara; Bronner-Fraser, Marianne

    2009-01-01

    The neural crest is a pluripotent population of cells that arises at the junction of the neural tube and the dorsal ectoderm. These highly migratory cells form diverse derivatives including neurons and glia of the sensory, sympathetic, and enteric nervous systems, melanocytes, and the bones, cartilage, and connective tissues of the face. The neural crest has long been associated with the endocrine system, although not always correctly. According to current understanding, neural crest cells give rise to the chromaffin cells of the adrenal medulla, chief cells of the extra-adrenal paraganglia, and thyroid C cells. The endocrine tumors that correspond to these cell types are pheochromocytomas, extra-adrenal paragangliomas, and medullary thyroid carcinomas. Although controversies concerning embryological origin appear to have mostly been resolved, questions persist concerning the pathobiology of each tumor type and its basis in neural crest embryology. Here we present a brief history of the work on neural crest development, both in general and in application to the endocrine system. In particular, we present findings related to the plasticity and pluripotency of neural crest cells as well as a discussion of several different neural crest tumors in the endocrine system.

  4. Cracking the Neural Code for Sensory Perception by Combining Statistics, Intervention, and Behavior.

    Science.gov (United States)

    Panzeri, Stefano; Harvey, Christopher D; Piasini, Eugenio; Latham, Peter E; Fellin, Tommaso

    2017-02-08

    The two basic processes underlying perceptual decisions-how neural responses encode stimuli, and how they inform behavioral choices-have mainly been studied separately. Thus, although many spatiotemporal features of neural population activity, or "neural codes," have been shown to carry sensory information, it is often unknown whether the brain uses these features for perception. To address this issue, we propose a new framework centered on redefining the neural code as the neural features that carry sensory information used by the animal to drive appropriate behavior; that is, the features that have an intersection between sensory and choice information. We show how this framework leads to a new statistical analysis of neural activity recorded during behavior that can identify such neural codes, and we discuss how to combine intersection-based analysis of neural recordings with intervention on neural activity to determine definitively whether specific neural activity features are involved in a task. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Feedforward Nonlinear Control Using Neural Gas Network

    Directory of Open Access Journals (Sweden)

    Iván Machón-González

    2017-01-01

    Full Text Available Nonlinear systems control is a main issue in control theory. Many developed applications suffer from a mathematical foundation not as general as the theory of linear systems. This paper proposes a control strategy for nonlinear systems with unknown dynamics by means of a set of local linear models obtained by a supervised neural gas network. The proposed approach takes advantage of the neural gas feature by which the algorithm yields a very robust clustering procedure. The direct model of the plant constitutes a piece-wise linear approximation of the nonlinear system and each neuron represents a local linear model for which a linear controller is designed. The neural gas model works as an observer and a controller at the same time. A state feedback control is implemented by estimation of the state variables based on the local transfer function that was provided by the local linear model. The gradient vectors obtained by the supervised neural gas algorithm provide a robust procedure for feedforward nonlinear control, that is, assuming the absence of disturbances.
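
    The clustering step at the heart of this approach can be sketched with a minimal (unsupervised) neural gas update, in which prototypes are moved according to their distance rank to each sample. The supervised extension and the local linear controllers described in the abstract are not shown; all parameters below are illustrative.

    # Minimal neural gas sketch (clustering only): rank-based prototype updates.
    import numpy as np

    def neural_gas(X, n_units=8, n_epochs=30, eps0=0.5, eps1=0.01, lam0=4.0, lam1=0.1):
        rng = np.random.default_rng(0)
        W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
        t, t_max = 0, n_epochs * len(X)
        for _ in range(n_epochs):
            for x in X[rng.permutation(len(X))]:
                frac = t / t_max
                eps = eps0 * (eps1 / eps0) ** frac        # annealed learning rate
                lam = lam0 * (lam1 / lam0) ** frac        # annealed neighborhood range
                dists = np.linalg.norm(W - x, axis=1)
                ranks = np.argsort(np.argsort(dists))     # 0 = closest prototype
                W += (eps * np.exp(-ranks / lam))[:, None] * (x - W)
                t += 1
        return W

    # Toy usage: two noisy operating regions of a hypothetical plant signal.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal([0, 0], 0.3, size=(200, 2)),
                   rng.normal([3, 1], 0.3, size=(200, 2))])
    prototypes = neural_gas(X)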

  6. Foreign currency rate forecasting using neural networks

    Science.gov (United States)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP as well as DEM. We approach the problem from a time-series analysis framework - where future exchange rates are forecasted solely using past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
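
    A small sketch of the recommended differences-based setup is given below, using a synthetic random-walk series and a generic scikit-learn regressor rather than the paper's network or actual USD-GBP data.

    # Hedged sketch: forecast with lagged exchange-rate *differences* rather than raw rates.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    rate = 1.5 + np.cumsum(rng.normal(scale=0.003, size=1500))   # synthetic random-walk rate
    diff = np.diff(rate)

    lags = 5
    X = np.column_stack([diff[i:len(diff) - lags + i] for i in range(lags)])
    y = diff[lags:]                                              # next difference to predict
    split = int(0.8 * len(y))

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    model.fit(X[:split], y[:split])
    pred_rate = rate[lags:-1][split:] + model.predict(X[split:]) # next-day rate forecast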

  7. Image Encryption and Chaotic Cellular Neural Network

    Science.gov (United States)

    Peng, Jun; Zhang, Du

    Machine learning has been playing an increasingly important role in information security and assurance. One of the areas of new applications is to design cryptographic systems by using chaotic neural network due to the fact that chaotic systems have several appealing features for information security applications. In this chapter, we describe a novel image encryption algorithm that is based on a chaotic cellular neural network. We start by giving an introduction to the concept of image encryption and its main technologies, and an overview of the chaotic cellular neural network. We then discuss the proposed image encryption algorithm in details, which is followed by a number of security analyses (key space analysis, sensitivity analysis, information entropy analysis and statistical analysis). The comparison with the most recently reported chaos-based image encryption algorithms indicates that the algorithm proposed in this chapter has a better security performance. Finally, we conclude the chapter with possible future work and application prospects of the chaotic cellular neural network in other information assurance and security areas.
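
    As a rough illustration of chaos-based image encryption in general, the sketch below XORs an image with a keystream generated by a logistic map. The logistic map is only a stand-in; the chapter's algorithm is based on a chaotic cellular neural network, which is not reproduced here.

    # Hedged sketch of chaos-based image encryption with a logistic-map keystream.
    import numpy as np

    def logistic_keystream(n, x0=0.654321, r=3.99):
        x, out = x0, np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)                  # chaotic iteration
            out[i] = int(x * 256) % 256            # quantize to a byte
        return out

    def xor_cipher(img, key_x0):
        flat = img.astype(np.uint8).ravel()
        ks = logistic_keystream(flat.size, x0=key_x0)
        return np.bitwise_xor(flat, ks).reshape(img.shape)

    # Toy usage: encryption and decryption are the same XOR operation.
    img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
    cipher = xor_cipher(img, key_x0=0.123456)
    plain = xor_cipher(cipher, key_x0=0.123456)
    assert np.array_equal(plain, img)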

  8. Neural networks to predict exosphere temperature corrections

    Science.gov (United States)

    Choury, Anna; Bruinsma, Sean; Schaeffer, Philippe

    2013-10-01

    Precise orbit prediction requires a forecast of the atmospheric drag force with a high degree of accuracy. Artificial neural networks are universal approximators derived from artificial intelligence and are widely used for prediction. This paper presents a method of artificial neural networking for prediction of the thermosphere density by forecasting exospheric temperature, which will be used by the semiempirical thermosphere Drag Temperature Model (DTM) currently developed. The artificial neural network has been shown to be an effective and robust forecasting model for temperature prediction. The proposed model can be used for any mission from which temperature can be deduced accurately, i.e., it does not require specific training. Although the primary goal of the study was to create a model for 1-day-ahead forecasts, the proposed architecture has been generalized to 2- and 3-day predictions as well. The impact of artificial neural network predictions has been quantified for the low-orbiting satellite Gravity Field and Steady-State Ocean Circulation Explorer in 2011, and an order of magnitude smaller orbit errors were found when compared with orbits propagated using the thermosphere model DTM2009.

  9. Integrating neural network technology and noise analysis

    International Nuclear Information System (INIS)

    Uhrig, R.E.; Oak Ridge National Lab., TN

    1995-01-01

    The integrated use of neural network and noise analysis technologies offers advantages not available by the use of either technology alone. The application of neural network technology to noise analysis offers an opportunity to expand the scope of problems where noise analysis is useful and unique ways in which the integration of these technologies can be used productively. The two-sensor technique, in which the responses of two sensors to an unknown driving source are related, is used to demonstrate such integration. The relationship between power spectral densities (PSDs) of accelerometer signals is derived theoretically using noise analysis to demonstrate its uniqueness. This relationship is modeled from experimental data using a neural network when the system is working properly, and the actual PSD of one sensor is compared with the PSD of that sensor predicted by the neural network using the PSD of the other sensor as an input. A significant deviation between the actual and predicted PSDs indicates that the system is changing (i.e., failing). Experiments carried out on check valves and bearings illustrate the usefulness of the methodology developed. (Author)
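
    The two-sensor idea can be sketched as follows: learn the mapping from one sensor's PSD to the other's under normal operation, then monitor the prediction residual. A ridge-regression map and synthetic accelerometer data stand in for the neural network and experimental data used in the paper.

    # Hedged sketch of the two-sensor PSD relationship for condition monitoring.
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import Ridge

    def psd(sig, fs=1000.0):
        _, p = welch(sig, fs=fs, nperseg=256)
        return np.log10(p + 1e-12)                 # log-PSD is better behaved for regression

    rng = np.random.default_rng(4)
    n_seg, seg_len = 200, 2048

    # Hypothetical healthy data: both accelerometers respond to the same driving source.
    psd_a, psd_b = [], []
    for _ in range(n_seg):
        source = rng.normal(size=seg_len)
        psd_a.append(psd(source + 0.2 * rng.normal(size=seg_len)))
        psd_b.append(psd(0.7 * source + 0.2 * rng.normal(size=seg_len)))
    model = Ridge(alpha=1.0).fit(np.array(psd_a), np.array(psd_b))

    # Monitoring: compare sensor B's actual PSD with the PSD predicted from sensor A.
    source = rng.normal(size=seg_len)
    new_a = psd(source + 0.2 * rng.normal(size=seg_len))
    new_b = psd(0.7 * source + 0.2 * rng.normal(size=seg_len))
    residual = np.mean((model.predict(new_a[None, :]) - new_b) ** 2)
    print("PSD prediction residual:", residual)    # a sustained jump suggests degradation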

  10. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  11. Improving Neural Recording Technology at the Nanoscale

    Science.gov (United States)

    Ferguson, John Eric

    Neural recording electrodes are widely used to study normal brain function (e.g., learning, memory, and sensation) and abnormal brain function (e.g., epilepsy, addiction, and depression) and to interface with the nervous system for neuroprosthetics. With a deep understanding of the electrode interface at the nanoscale and the use of novel nanofabrication processes, neural recording electrodes can be designed that surpass previous limits and enable new applications. In this thesis, I will discuss three projects. In the first project, we created an ultralow-impedance electrode coating by controlling the nanoscale texture of electrode surfaces. In the second project, we developed a novel nanowire electrode for long-term intracellular recordings. In the third project, we created a means of wirelessly communicating with ultra-miniature, implantable neural recording devices. The techniques developed for these projects offer significant improvements in the quality of neural recordings. They can also open the door to new types of experiments and medical devices, which can lead to a better understanding of the brain and can enable novel and improved tools for clinical applications.

  12. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  13. Synaptic E-I Balance Underlies Efficient Neural Coding.

    Science.gov (United States)

    Zhou, Shanglin; Yu, Yuguo

    2018-01-01

    Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.

  14. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designed by using a priori knowledge of the mapping between protein building blocks and the secondary structure and by using weight sharing. Since none of the individual networks have more than 600 adjustable weights, over-fitting is avoided. When ensembles of specialized experts are combined, the performance...

  15. Discrete-time BAM neural networks with variable delays

    Science.gov (United States)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  16. Discrete-time BAM neural networks with variable delays

    International Nuclear Information System (INIS)

    Liu Xinge; Tang Meilan; Martin, Ralph; Liu Xinbi

    2007-01-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development

  17. Neural Generalized Predictive Control of a non-linear Process

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller based on a neural network model which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.

  18. Artificial neural network based approach to transmission lines protection

    International Nuclear Information System (INIS)

    Joorabian, M.

    1999-05-01

    The aim of this paper is to present an accurate fault detection technique for high speed distance protection using artificial neural networks. The feed-forward multi-layer neural network with the use of supervised learning and the common training rule of error back-propagation is chosen for this study. Information available locally at the relay point is passed to a neural network in order for an assessment of the fault location to be made. However, in practice there is a large amount of information available, and a feature extraction process is required to reduce the dimensionality of the pattern vectors, whilst retaining important information that distinguishes the fault point. The choice of features is critical to the performance of the neural network's learning and operation. A significant feature of this paper is that an artificial neural network has been designed and tested to enhance the precision of the adaptive capabilities for distance protection.

  19. Xenopus reduced folate carrier regulates neural crest development epigenetically.

    Directory of Open Access Journals (Sweden)

    Jiejing Li

    Full Text Available Folic acid deficiency during pregnancy causes birth neurocristopathic malformations resulting from aberrant development of neural crest cells. The Reduced folate carrier (RFC) is a membrane-bound receptor for facilitating transfer of reduced folate into the cells. RFC knockout mice are embryonic lethal and develop multiple malformations, including neurocristopathies. Here we show that XRFC is specifically expressed in neural crest tissues in Xenopus embryos and knockdown of XRFC by specific morpholino results in severe neurocristopathies. Inhibition of RFC blocked the expression of a series of neural crest marker genes while overexpression of RFC or injection of 5-methyltetrahydrofolate expanded the neural crest territories. In animal cap assays, knockdown of RFC dramatically reduced the mono- and trimethyl-Histone3-K4 levels and co-injection of the lysine methyltransferase hMLL1 largely rescued the XRFC morpholino phenotype. Our data revealed that the RFC mediated folate metabolic pathway likely potentiates neural crest gene expression through epigenetic modifications.

  20. Open quantum generalisation of Hopfield neural networks

    Science.gov (United States)

    Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.

    2018-03-01

    We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.

  1. Electrospun Nanofibrous Materials for Neural Tissue Engineering

    Directory of Open Access Journals (Sweden)

    Yee-Shuan Lee

    2011-02-01

    Full Text Available The use of biomaterials processed by the electrospinning technique has gained considerable interest for neural tissue engineering applications. The tissue engineering strategy is to facilitate the regrowth of nerves by combining an appropriate cell type with the electrospun scaffold. Electrospinning can generate fibrous meshes having fiber diameter dimensions at the nanoscale and these fibers can be nonwoven or oriented to facilitate neurite extension via contact guidance. This article reviews studies evaluating the effect of the scaffold’s architectural features such as fiber diameter and orientation on neural cell function and neurite extension. Electrospun meshes made of natural polymers, proteins and compositions having electrical activity in order to enhance neural cell function are also discussed.

  2. Stock market index prediction using neural networks

    Science.gov (United States)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index has been used as a benchmark in our experiments, where Radial Basis Function based neural networks have been designed to model these indices over the period from January 1988 to December 1992. Notable success has been achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network represents an excellent candidate for predicting stock market indices.
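
    A minimal radial-basis-function predictor in the same spirit is sketched below, with k-means centres and a linear least-squares readout on a synthetic monthly series; it is not the authors' network design and uses no actual Dow Jones data.

    # Hedged sketch of an RBF network for one-step-ahead index prediction.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    index = 2000 + np.cumsum(rng.normal(20, 60, size=60))       # synthetic monthly index

    lags = 4
    X = np.column_stack([index[i:len(index) - lags + i] for i in range(lags)])
    y = index[lags:]                                            # next month's value

    centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_
    width = np.mean([np.linalg.norm(c1 - c2) for c1 in centers for c2 in centers]) + 1e-9

    def rbf_features(X):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        return np.exp(-(d / width) ** 2)

    Phi = np.column_stack([rbf_features(X), np.ones(len(X))])   # RBF features plus bias
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                 # linear readout weights
    pred = Phi @ w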

  3. Habituation in non-neural organisms: evidence from slime moulds

    OpenAIRE

    Boisseau, Romain P.; Vogel, David; Dussutour, Audrey

    2016-01-01

    Learning, defined as a change in behaviour evoked by experience, has hitherto been investigated almost exclusively in multicellular neural organisms. Evidence for learning in non-neural multicellular organisms is scant, and only a few unequivocal reports of learning have been described in single-celled organisms. Here we demonstrate habituation, an unmistakable form of learning, in the non-neural organism Physarum polycephalum. In our experiment, using chemotaxis as the behavioural output and...

  4. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  5. Neural redundancy applied to the parity space for signal validation

    International Nuclear Information System (INIS)

    Mol, Antonio Carlos de Abreu; Pereira, Claudio Marcio Nascimento Abreu; Martinez, Aquilino Senra

    2005-01-01

    The objective of signal validation is to provide more reliable information from the plant sensor data. The method presented in this work introduces the concept of neural redundancy and applies it to the parity space method [1] to overcome an inherent deficiency of this method - the determination of the best estimate of the redundant measures when they are inconsistent. The concept of neural redundancy consists in calculating a redundancy through neural networks based on the time series of the state variable itself. Therefore, neural networks, dynamically trained with the time series, estimate the current value of the measure itself, which is used as a referee for the redundant measures in the parity space. For this purpose the neural network should be able to supply the neural redundancy in real time and with a maximum error corresponding to the group deviation. The historical series should be long enough to allow estimation of the next value during transients and, at the same time, it should be optimized to facilitate retraining of the neural network at each acquisition. In order to be able to reproduce the tendency of the time series even under accident conditions, the dynamic training of the neural network gives priority to the most recent points of the time series. Tests carried out with simulated data from a nuclear plant demonstrated that this method, applied to the parity space method, improves the signal validation process. (author)

  6. Neural redundancy applied to the parity space for signal validation

    Energy Technology Data Exchange (ETDEWEB)

    Mol, Antonio Carlos de Abreu; Pereira, Claudio Marcio Nascimento Abreu [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)]. E-mail: cmnap@ien.gov.br; Martinez, Aquilino Senra [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia]. E-mail: aquilino@lmp.br

    2005-07-01

    The objective of signal validation is to provide more reliable information from the plant sensor data. The method presented in this work introduces the concept of neural redundancy and applies it to the parity space method [1] to overcome an inherent deficiency of this method - the determination of the best estimate of the redundant measures when they are inconsistent. The concept of neural redundancy consists in calculating a redundancy through neural networks based on the time series of the state variable itself. Therefore, neural networks, dynamically trained with the time series, estimate the current value of the measure itself, which is used as a referee for the redundant measures in the parity space. For this purpose the neural network should be able to supply the neural redundancy in real time and with a maximum error corresponding to the group deviation. The historical series should be long enough to allow estimation of the next value during transients and, at the same time, it should be optimized to facilitate retraining of the neural network at each acquisition. In order to be able to reproduce the tendency of the time series even under accident conditions, the dynamic training of the neural network gives priority to the most recent points of the time series. Tests carried out with simulated data from a nuclear plant demonstrated that this method, applied to the parity space method, improves the signal validation process. (author)

  7. Conserved gene regulatory module specifies lateral neural borders across bilaterians.

    Science.gov (United States)

    Li, Yongbin; Zhao, Di; Horie, Takeo; Chen, Geng; Bao, Hongcun; Chen, Siyu; Liu, Weihong; Horie, Ryoko; Liang, Tao; Dong, Biyu; Feng, Qianqian; Tao, Qinghua; Liu, Xiao

    2017-08-01

    The lateral neural plate border (NPB), the neural part of the vertebrate neural border, is composed of central nervous system (CNS) progenitors and peripheral nervous system (PNS) progenitors. In invertebrates, PNS progenitors are also juxtaposed to the lateral boundary of the CNS. Whether there are conserved molecular mechanisms determining vertebrate and invertebrate lateral neural borders remains unclear. Using single-cell-resolution gene-expression profiling and genetic analysis, we present evidence that orthologs of the NPB specification module specify the invertebrate lateral neural border, which is composed of CNS and PNS progenitors. First, like in vertebrates, the conserved neuroectoderm lateral border specifier Msx/vab-15 specifies lateral neuroblasts in Caenorhabditis elegans. Second, orthologs of the vertebrate NPB specification module (Msx/vab-15, Pax3/7/pax-3, and Zic/ref-2) are significantly enriched in worm lateral neuroblasts. In addition, like in other bilaterians, the expression domain of Msx/vab-15 is more lateral than those of Pax3/7/pax-3 and Zic/ref-2 in C. elegans. Third, we show that Msx/vab-15 regulates the development of mechanosensory neurons derived from lateral neural progenitors in multiple invertebrate species, including C. elegans, Drosophila melanogaster, and Ciona intestinalis. We also identify a novel lateral neural border specifier, ZNF703/tlp-1, which functions synergistically with Msx/vab-15 in both C. elegans and Xenopus laevis. These data suggest a common origin of the molecular mechanism specifying lateral neural borders across bilaterians.

  8. Interpretations of Frequency Domain Analyses of Neural Entrainment: Periodicity, Fundamental Frequency, and Harmonics.

    Science.gov (United States)

    Zhou, Hong; Melloni, Lucia; Poeppel, David; Ding, Nai

    2016-01-01

    Brain activity can follow the rhythms of dynamic sensory stimuli, such as speech and music, a phenomenon called neural entrainment. It has been hypothesized that low-frequency neural entrainment in the neural delta and theta bands provides a potential mechanism to represent and integrate temporal information. Low-frequency neural entrainment is often studied using periodically changing stimuli and is analyzed in the frequency domain using the Fourier analysis. The Fourier analysis decomposes a periodic signal into harmonically related sinusoids. However, it is not intuitive how these harmonically related components are related to the response waveform. Here, we explain the interpretation of response harmonics, with a special focus on very low-frequency neural entrainment near 1 Hz. It is illustrated why neural responses repeating at f Hz do not necessarily generate any neural response at f Hz in the Fourier spectrum. A strong neural response at f Hz indicates that the time scales of the neural response waveform within each cycle match the time scales of the stimulus rhythm. Therefore, neural entrainment at very low frequency implies not only that the neural response repeats at f Hz but also that each period of the neural response is a slow wave matching the time scale of a f Hz sinusoid.
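
    The central point about harmonics can be checked numerically: a response that repeats every second (here because it also repeats every half second) need not contain any energy at 1 Hz. The sketch below uses a plain FFT on a synthetic waveform.

    # Numerical illustration: 1 s periodicity without a 1 Hz Fourier component.
    import numpy as np

    fs, dur = 100, 10                          # 100 Hz sampling, 10 s of data
    t = np.arange(fs * dur) / fs
    resp = np.sin(2 * np.pi * 2 * t)           # repeats every 1 s, but is a pure 2 Hz wave

    spec = np.abs(np.fft.rfft(resp)) / len(resp)
    freqs = np.fft.rfftfreq(len(resp), d=1 / fs)
    i1 = np.argmin(np.abs(freqs - 1.0))
    i2 = np.argmin(np.abs(freqs - 2.0))
    print("amplitude at 1 Hz:", spec[i1])      # ~0: no fundamental component
    print("amplitude at 2 Hz:", spec[i2])      # all energy sits at the 2nd harmonic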

  9. Development of teeth in chick embryos after mouse neural crest transplantations.

    Science.gov (United States)

    Mitsiadis, Thimios A; Chéraud, Yvonnick; Sharpe, Paul; Fontaine-Pérus, Josiane

    2003-05-27

    Teeth were lost in birds 70-80 million years ago. Current thinking holds that it is the avian cranial neural crest-derived mesenchyme that has lost odontogenic capacity, whereas the oral epithelium retains the signaling properties required to induce odontogenesis. To investigate the odontogenic capacity of ectomesenchyme, we have used neural tube transplantations from mice to chick embryos to replace the chick neural crest cell populations with mouse neural crest cells. The mouse/chick chimeras obtained show evidence of tooth formation showing that avian oral epithelium is able to induce a nonavian developmental program in mouse neural crest-derived mesenchymal cells.

  10. Control of beam halo-chaos using neural network self-adaptation method

    International Nuclear Information System (INIS)

    Fang Jinqing; Huang Guoxian; Luo Xiaoshu

    2004-11-01

    Taking advantage of the neural network control method for nonlinear complex systems, the control of beam halo-chaos in the periodic focusing channels (networks) of high-intensity accelerators is studied by a feed-forward back-propagation neural network self-adaptation method. The envelope radius of the high-intensity proton beam is driven to the matched beam radius by suitably selecting the control structure of the neural network and the linear feedback coefficient, and by adjusting the weight coefficients of the neural network. The beam halo-chaos is clearly suppressed and the shaking amplitude is greatly reduced after the neural network self-adaptation control is applied. (authors)

  11. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others:MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  12. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  13. Utility of Phox2b immunohistochemical stain in neural crest tumours and non-neural crest tumours in paediatric patients.

    Science.gov (United States)

    Warren, Mikako; Matsuno, Ryosuke; Tran, Henry; Shimada, Hiroyuki

    2018-03-01

    This study evaluated the utility of Phox2b in paediatric tumours. Previously, tyrosine hydroxylase (TH) was the most widely utilised sympathoadrenal marker specific for neural crest tumours with neuronal/neuroendocrine differentiation. However, its sensitivity is insufficient. Recently Phox2b has emerged as another specific marker for this entity. Phox2b immunohistochemistry (IHC) was performed on 159 paediatric tumours, including (group 1) 65 neural crest tumours with neuronal differentiation [peripheral neuroblastic tumours (pNT)]: 15 neuroblastoma undifferentiated (NB-UD), 10 NB poorly differentiated (NB-PD), 10 NB differentiating (NB-D), 10 ganglioneuroblastoma intermixed (GNBi), 10 GNB nodular (GNBn) and 10 ganglioneuroma (GN); (group 2) 23 neural crest tumours with neuroendocrine differentiation [pheochromocytoma/paraganglioma (PCC/PG)]; (group 3) 27 other neural crest tumours including one composite rhabdomyosarcoma/neuroblastoma; and (group 4) 44 non-neural crest tumours. TH IHC was performed on groups 1, 2 and 3. Phox2b was expressed diffusely in pNT (n = 65 of 65), strongly in NB-UD and NB-PD and with less intensity in NB-D, GNB and GN. Diffuse TH was seen in all NB-PD, NB-D, GNB and GN, but nine of 15 NB-UD and a nodule in GNBn did not express TH (n = 55 of 65). PCC/PG expressed diffuse Phox2b (n = 23 of 23) and diffuse TH, except for one tumour (n = 22 of 23). In composite rhabdomyosarcoma, TH was expressed only in neuroblastic cells and Phox2b was diffusely positive in neuroblastic cells and focally in rhabdomyosarcoma. All other tumours were negative for Phox2b (n = none of 44). Phox2b was a specific and sensitive marker for pNT and PCC/PG, especially useful for identifying NB-UD often lacking TH. Our study also presented a composite rhabdomyosarcoma/neuroblastoma of neural crest origin. © 2017 John Wiley & Sons Ltd.

  14. Potential applications of neural networks to nuclear power plants

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1991-01-01

    Application of neural networks to the operation of nuclear power plants is being investigated under a US Department of Energy sponsored program at the University of Tennessee. Projects include the feasibility of using neural networks for the following tasks: diagnosing specific abnormal conditions, detection of the change of mode of operation, signal validation, monitoring of check valves, plant-wide monitoring using autoassociative neural networks, modeling of the plant thermodynamics, emulation of core reload calculations, monitoring of plant parameters, and analysis of plant vibrations. Each of these projects and its status are described briefly in this article. The objective of each of these projects is to enhance the safety and performance of nuclear plants through the use of neural networks

  15. Neural network models: from biology to many - body phenomenology

    International Nuclear Information System (INIS)

    Clark, J.W.

    1993-01-01

    This article surveys the current surge of research on the practical side of neural networks and their utility in memory storage/recall, pattern recognition and classification. The initial attraction of neural networks as dynamical and statistical systems is also examined. From the viewpoint of a many-body theorist, the neurons may be thought of as particles, and the weighted connections between the units as the interactions between these particles. Finally, the author discusses how the impressive capabilities of artificial neural networks in pattern recognition and classification may be exploited to solve data management problems in experimental physics, and how neural networks might contribute to radically new theoretical descriptions of physical problems. (A.B.)

  16. Neural network for solving convex quadratic bilevel programming problems.

    Science.gov (United States)

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Improvement of the Hopfield Neural Network by MC-Adaptation Rule

    Science.gov (United States)

    Zhou, Zhen; Zhao, Hong

    2006-06-01

    We show that the performance of Hopfield neural networks, especially the quality of the recall and the capacity of effective storage, can be greatly improved by making use of a recently presented neural network design method without altering the overall structure of the network. In the improved neural network, a memory pattern is recalled exactly from initial states having a given degree of similarity with the memory pattern, and thus one can avoid applying the overlap criterion as carried out in the standard Hopfield neural networks.
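
    For reference, the following is a minimal Python sketch of the baseline Hopfield model the paper improves on: standard Hebbian (outer-product) storage with asynchronous recall. The MC-adaptation design rule itself is not reproduced, and the pattern sizes are arbitrary.

      import numpy as np

      def train_hopfield(patterns):
          """Outer-product (Hebbian) weight matrix for +/-1 patterns."""
          n = patterns.shape[1]
          W = np.zeros((n, n))
          for p in patterns:
              W += np.outer(p, p)
          np.fill_diagonal(W, 0.0)                 # no self-connections
          return W / n

      def recall(W, state, sweeps=10):
          """Asynchronous sign updates over a fixed number of sweeps."""
          state = state.copy()
          for _ in range(sweeps):
              for i in np.random.permutation(len(state)):
                  state[i] = 1 if W[i] @ state >= 0 else -1
          return state

      rng = np.random.default_rng(0)
      patterns = rng.choice([-1, 1], size=(3, 64))                 # three stored memories
      W = train_hopfield(patterns)
      noisy = patterns[0] * np.where(rng.random(64) < 0.1, -1, 1)  # flip ~10% of the bits
      print("recall overlap:", (recall(W, noisy) == patterns[0]).mean())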

  18. Roles of neural stem cells in the repair of peripheral nerve injury.

    Science.gov (United States)

    Wang, Chong; Lu, Chang-Feng; Peng, Jiang; Hu, Cheng-Dong; Wang, Yu

    2017-12-01

    Currently, researchers are using neural stem cell transplantation to promote regeneration after peripheral nerve injury, as neural stem cells play an important role in peripheral nerve injury repair. This article reviews recent research progress of the role of neural stem cells in the repair of peripheral nerve injury. Neural stem cells can not only differentiate into neurons, astrocytes and oligodendrocytes, but can also differentiate into Schwann-like cells, which promote neurite outgrowth around the injury. Transplanted neural stem cells can differentiate into motor neurons that innervate muscles and promote the recovery of neurological function. To promote the repair of peripheral nerve injury, neural stem cells secrete various neurotrophic factors, including brain-derived neurotrophic factor, fibroblast growth factor, nerve growth factor, insulin-like growth factor and hepatocyte growth factor. In addition, neural stem cells also promote regeneration of the axonal myelin sheath, angiogenesis, and immune regulation. It can be concluded that neural stem cells promote the repair of peripheral nerve injury through a variety of ways.

  19. Função cócleo-vestibular após hemisferectomias cerebrais: apresentação de dois casos

    Directory of Open Access Journals (Sweden)

    Sérgio Paula Santos

    1957-03-01

    Full Text Available The cochleo-vestibular examination of two hemispherectomized patients showed: bilateral hearing loss not exceeding 30 dB, greater in the contralateral ear; no loudness recruitment; a disproportion between speech and pure-tone audiometry; vestibular function was not shown to be impaired by the cold caloric test.

  20. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of the models for individual components and the overall system has been verified. The system response against given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  1. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of the models for individual components and the overall system has been verified. The system response against given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected
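
    The parameter-estimation step can be pictured with a small feed-forward regressor. The sketch below is only illustrative: the four input signals, the surrogate target and the network size are assumptions, not the transients or network used in the study.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Synthetic stand-in data: four plant signals and a surrogate target playing
      # the role of, e.g., reactivity. None of this reproduces the study's data.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 4))
      y = 0.5 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.1 * X[:, 2]

      model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
      model.fit(X[:400], y[:400])                  # train on the first 400 samples
      pred = model.predict(X[400:])                # predict on held-out "transients"
      print("max abs error:", np.max(np.abs(pred - y[400:])))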

  2. High-Density Stretchable Electrode Grids for Chronic Neural Recording.

    Science.gov (United States)

    Tybrandt, Klas; Khodagholy, Dion; Dielacher, Bernd; Stauffer, Flurin; Renz, Aline F; Buzsáki, György; Vörös, János

    2018-04-01

    Electrical interfacing with neural tissue is key to advancing diagnosis and therapies for neurological disorders, as well as providing detailed information about neural signals. A challenge for creating long-term stable interfaces between electronics and neural tissue is the huge mechanical mismatch between the systems. So far, materials and fabrication processes have restricted the development of soft electrode grids able to combine high performance, long-term stability, and high electrode density, aspects all essential for neural interfacing. Here, this challenge is addressed by developing a soft, high-density, stretchable electrode grid based on an inert, high-performance composite material comprising gold-coated titanium dioxide nanowires embedded in a silicone matrix. The developed grid can resolve high spatiotemporal neural signals from the surface of the cortex in freely moving rats with stable neural recording quality and preserved electrode signal coherence during 3 months of implantation. Due to its flexible and stretchable nature, it is possible to minimize the size of the craniotomy required for placement, further reducing the level of invasiveness. The material and device technology presented herein have potential for a wide range of emerging biomedical applications. © 2018 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Robo signaling regulates the production of cranial neural crest cells.

    Science.gov (United States)

    Li, Yan; Zhang, Xiao-Tan; Wang, Xiao-Yu; Wang, Guang; Chuai, Manli; Münsterberg, Andrea; Yang, Xuesong

    2017-12-01

    Slit/Robo signaling plays an important role in the guidance of developing neurons in developing embryos. However, it remains obscure whether and how Slit/Robo signaling is involved in the production of cranial neural crest cells. In this study, we examined Robo1-deficient mice to reveal developmental defects of mouse cranial frontal and parietal bones, which are derivatives of cranial neural crest cells. Therefore, we determined the production of HNK1+ cranial neural crest cells in early chick embryo development after knock-down (KD) of Robo1 expression. Detection of markers for pre-migratory and migratory neural crest cells, PAX7 and AP-2α, showed that production of both was affected by Robo1 KD. In addition, we found that the transcription factor slug is responsible for the aberrant delamination/EMT of cranial neural crest cells induced by Robo1 KD, which also led to elevated expression of E- and N-Cadherin. N-Cadherin expression was enhanced when blocking FGF signaling with dominant-negative FGFR1 in half of the neural tube. Taken together, we show that Slit/Robo signaling influences the delamination/EMT of cranial neural crest cells, which is required for cranial bone development. Copyright © 2017. Published by Elsevier Inc.

  4. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    An original application of radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The property of massive parallelism of neural networks is employed for this. The application of the RBF neural network for state estimation is investigated by testing its applicability on an IEEE 14-bus ...
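
    As a generic illustration of the RBF idea (not the paper's estimator, and not the IEEE 14-bus data), the sketch below builds Gaussian basis features around centres drawn from synthetic measurements and fits the linear output layer by least squares.

      import numpy as np

      # Minimal radial-basis-function network: Gaussian hidden units with fixed
      # centres, output weights fitted by least squares. Data are synthetic; a
      # real state estimator would map measurements to bus voltages and angles.

      def rbf_features(X, centres, width):
          d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
          return np.exp(-d2 / (2.0 * width ** 2))

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(200, 2))               # stand-in measurement vectors
      y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]             # stand-in state variable
      centres = X[rng.choice(len(X), 20, replace=False)]  # 20 centres taken from the data
      Phi = rbf_features(X, centres, width=0.3)
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # linear output layer
      print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))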

  5. Weather forecasting based on hybrid neural model

    Science.gov (United States)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-11-01

    Making deductions and expectations about climate has been a challenge throughout mankind's history. Accurate meteorological predictions help to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. The current research treats climate as a major challenge for machine data mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting tailored to the specific character of climate forecasting frameworks. The study concentrates on data representing Saudi Arabia weather forecasting. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons to represent rainy and dry weather. Moreover, a trial and error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for forecast accuracy measurement. Individually, MLP forecasting results are better than RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.

  6. Efficient Cancer Detection Using Multiple Neural Networks.

    Science.gov (United States)

    Shell, John; Gregory, William D

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNN). A data set consisting of 180 malignant and 180 benign breast tissue data files, from an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, was utilized as neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results, with a consistent sensitivity of 100% and a specificity of 100%. This implementation successfully relied solely on the statistical variation between the benign and malignant impedance data and an intricate neural network configuration. This device and BNN implementation provide a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast, non-invasive, accurate assessment of biopsied or sectioned excised tissue in various clinical settings.
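
    The multi-network vote can be illustrated with a toy ensemble. The sketch below trains six small classifiers on synthetic 47-dimensional vectors standing in for the impedance spectra and takes a simple majority; this is an assumption for illustration and not the paper's four-of-six selection logic.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(360, 47))                     # stand-ins for impedance spectra
      y = (X[:, :5].sum(axis=1) > 0).astype(int)         # surrogate benign/malignant label

      nets = [MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                            random_state=s).fit(X[:300], y[:300]) for s in range(6)]
      votes = np.stack([m.predict(X[300:]) for m in nets])
      consensus = (votes.sum(axis=0) >= 3).astype(int)   # at least 3 of the 6 networks agree
      print("consensus accuracy:", (consensus == y[300:]).mean())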

  7. Neural Network Based Model of an Industrial Oil-Fired Boiler System ...

    African Journals Online (AJOL)

    A two-layer feed-forward neural network with Hyperbolic tangent sigmoid ... The neural network model when subjected to test, using the validation input data; ... Proportional Integral Derivative (PID) Controller is used to control the neural ...

  8. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  9. Neural codes of seeing architectural styles

    OpenAIRE

    Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.

    2017-01-01

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding sugges...

  10. The Molecular Basis of Neural Memory. Part 7: Neural Intelligence (NI versus Artificial Intelligence (AI

    Directory of Open Access Journals (Sweden)

    Gerard Marx

    2017-07-01

    Full Text Available The link of memory to intelligence is incontestable, though the development of electronic artifacts with memory has confounded cognitive and computer scientists’ conception of memory and its relevance to “intelligence”. We propose two categories of “Intelligence”: (1) Logical (objective): mathematics, numbers, pattern recognition, games, programmable in binary format. (2) Emotive (subjective): sensations, feelings, perceptions, goals, desires, sociability, sex, food, love. The 1st has been reduced to computational algorithms of which we are well versed, witness global technology and the internet. The 2nd relates to the mysterious process whereby (psychic) emotive states are achieved by neural beings sensing, comprehending, remembering and dealing with their surroundings. Many theories and philosophies have been forwarded to rationalize this process, but as neuroscientists, we remain dissatisfied. Our own musings on universal neural memory suggest a tripartite mechanism involving neurons interacting with their surroundings, notably the neural extracellular matrix (nECM) with dopants [trace metals and neurotransmitters (NTs)]. In particular, the NTs are the molecular encoders of emotive states. We have developed a chemographic representation of such a molecular code. To quote Longuet-Higgins, “Perhaps it is time for the term ‘artificial intelligence’ to be replaced by something more modest and less provisional”. We suggest “artifact intelligence” (ARTI) or “machine intelligence” (MI), neither of which imply emulation of emotive neural processes, but simply refer to the ‘demotive’ (lacking emotive quality) capability of electronic artifacts that employ a recall function to calculate algorithms.

  11. Normalization as a canonical neural computation

    Science.gov (United States)

    Carandini, Matteo; Heeger, David J.

    2012-01-01

    There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation. PMID:22108672
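
    For reference, one commonly quoted form of the divisive normalization equation (written here in LaTeX notation; symbols and exponents vary across studies, so this should be read as a representative textbook version rather than as the paper's exact formulation) divides each neuron's driving input by a pooled signal:

      R_j = \gamma \, \frac{D_j^{\,n}}{\sigma^{n} + \sum_{k} D_k^{\,n}}

    Here D_j is the driving input to neuron j, the sum runs over the normalization pool, \sigma is a semi-saturation constant, n an exponent and \gamma a gain factor.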

  12. A novel neural-wavelet approach for process diagnostics and complex system modeling

    Science.gov (United States)

    Gao, Rong

    Neural networks have been effective in several engineering applications because of their learning abilities and robustness. However certain shortcomings, such as slow convergence and local minima, are always associated with neural networks, especially neural networks applied to highly nonlinear and non-stationary problems. These problems can be effectively alleviated by integrating a new powerful tool, wavelets, into conventional neural networks. The multi-resolution analysis and feature localization capabilities of the wavelet transform offer neural networks new possibilities for learning. A neural wavelet network approach developed in this thesis enjoys fast convergence rate with little possibility to be caught at a local minimum. It combines the localization properties of wavelets with the learning abilities of neural networks. Two different testbeds are used for testing the efficiency of the new approach. The first is magnetic flowmeter-based process diagnostics: here we extend previous work, which has demonstrated that wavelet groups contain process information, to more general process diagnostics. A loop at Applied Intelligent Systems Lab (AISL) is used for collecting and analyzing data through the neural-wavelet approach. The research is important for thermal-hydraulic processes in nuclear and other engineering fields. The neural-wavelet approach developed is also tested with data from the electric power grid. More specifically, the neural-wavelet approach is used for performing short-term and mid-term prediction of power load demand. In addition, the feasibility of determining the type of load using the proposed neural wavelet approach is also examined. The notion of cross scale product has been developed as an expedient yet reliable discriminator of loads. Theoretical issues involved in the integration of wavelets and neural networks are discussed and future work outlined.
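
    The wavelet-feature idea can be pictured with a one-level Haar decomposition. The sketch below (synthetic signal, single decomposition level) only illustrates how wavelet coefficients might be assembled into a network input; it does not reproduce the thesis' multi-resolution scheme or its cross scale product.

      import numpy as np

      def haar_level(x):
          """One-level Haar decomposition of an even-length 1-D signal."""
          x = np.asarray(x, dtype=float)
          approx = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-pass (trend) coefficients
          detail = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-pass (fluctuation) coefficients
          return approx, detail

      t = np.linspace(0, 1, 64, endpoint=False)
      signal = np.sin(2 * np.pi * 3 * t) + 0.2 * (t > 0.5)   # sinusoid with a step change
      approx, detail = haar_level(signal)
      features = np.concatenate([approx, detail])             # candidate network input
      print(features.shape)                                   # (64,)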

  13. A comparative study of two neural networks for document retrieval

    International Nuclear Information System (INIS)

    Hui, S.C.; Goh, A.

    1997-01-01

    In recent years there has been specific interest in adopting advanced computer techniques in the field of document retrieval. This interest is generated by the fact that classical methods such as the Boolean search, the vector space model or even probabilistic retrieval cannot handle the increasing demands of end-users in satisfying their needs. The most recent attempt is the application of the neural network paradigm as a means of providing end-users with a more powerful retrieval mechanism. Neural networks are not only good pattern matchers but also highly versatile and adaptable. In this paper, we demonstrate how to apply two neural networks, namely Adaptive Resonance Theory and Fuzzy Kohonen Neural Network, for document retrieval. In addition, a comparison of these two neural networks based on performance is also given

  14. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses the connection weights formed in the preliminary learning process of a neural network classifier. In experiments using the MNIST database of handwritten digits, the feature selection procedure allows a reduction of the number of features (from 60 000 to 7000) while preserving comparable recognition capability and accelerating computations. An experimental comparison between the LiRA perceptron and the modular assembly neural network is accomplished, which shows that the recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.
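
    The general idea of scoring features by their learned connection weights can be sketched as follows; the data are synthetic, the classifier is a generic perceptron rather than LiRA or the modular assembly network, and the threshold of ten retained features is arbitrary.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(400, 50))
      y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)   # only 3 informative features

      clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
      scores = np.abs(clf.coefs_[0]).sum(axis=1)     # weight magnitude leaving each input
      keep = np.argsort(scores)[-10:]                # retain the 10 strongest inputs
      print("selected feature indices:", sorted(keep.tolist()))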

  15. 23rd Workshop of the Italian Neural Networks Society (SIREN)

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2014-01-01

    This volume collects a selection of contributions which were presented at the 23rd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare, Salerno, Italy during May 23-24, 2013. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book, as well as the workshop, is organized in two main components: a special session and a group of regular sessions featuring different aspects and points of view of artificial neural networks, artificial and natural intelligence, as well as psychological and cognitive theories for modeling human behaviors and human machine interactions, including Information Communication applications of compelling interest.

  16. Modeling of steam generator in nuclear power plant using neural network ensemble

    International Nuclear Information System (INIS)

    Lee, S. K.; Lee, E. C.; Jang, J. W.

    2003-01-01

    Neural networks are now being used to model the steam generator, whose modeling is known to be difficult due to its reverse dynamics. However, neural networks are prone to the problem of overfitting. This paper investigates the use of neural network combining methods to model the steam generator water level and compares them with a single neural network. The results show that the neural network ensemble is an effective tool which can offer improved generalization, lower dependence on the training set and reduced training time
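
    The combining idea can be sketched with a simple bootstrap ensemble whose member predictions are averaged. The data below are synthetic stand-ins rather than steam generator water-level measurements, and plain averaging is only one possible combining rule, not necessarily the one used in the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(300, 3))
      y = np.sin(2 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=300)

      members = []
      for s in range(5):
          idx = rng.integers(0, 250, size=250)            # bootstrap resample of the training set
          m = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=s)
          members.append(m.fit(X[idx], y[idx]))

      single = members[0].predict(X[250:])
      ensemble = np.mean([m.predict(X[250:]) for m in members], axis=0)
      for name, pred in [("single", single), ("ensemble", ensemble)]:
          print(name, "RMSE:", np.sqrt(np.mean((pred - y[250:]) ** 2)))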

  17. Wind power prediction based on genetic neural network

    Science.gov (United States)

    Zhang, Suhan

    2017-04-01

    The scale of grid connected wind farms keeps increasing. To ensure the stability of power system operation, make a reasonable scheduling scheme and improve the competitiveness of wind farms in the electricity generation market, it is important to accurately forecast short-term wind power. To reduce the influence of the nonlinear relationship between the disturbance factors and the wind power, an improved prediction model based on a genetic algorithm and a neural network is established. To overcome the BP neural network's shortcomings of long training time and easily falling into local minima, and to improve the accuracy of the neural network, a genetic algorithm is adopted to optimize the parameters and topology of the neural network. Historical data are used as input to predict short-term wind power. The effectiveness and feasibility of the method are verified with actual data from a wind farm as an example.
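
    The weight-evolution idea can be illustrated with a toy genetic algorithm that evolves the parameters of a one-hidden-layer network against a surrogate power curve; population size, mutation rate, network size and the target function are all illustrative assumptions, not the configuration used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      X = np.linspace(0, 1, 64).reshape(-1, 1)           # stand-in wind-speed feature
      y = np.clip(3 * X[:, 0] - 1, 0, 1)                 # surrogate normalized power curve

      H = 6                                              # hidden units
      DIM = 1 * H + H + H + 1                            # W1 + b1 + W2 + b2

      def forward(theta, X):
          W1 = theta[:H].reshape(1, H); b1 = theta[H:2*H]
          W2 = theta[2*H:3*H].reshape(H, 1); b2 = theta[3*H]
          return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

      def fitness(theta):
          return -np.mean((forward(theta, X) - y) ** 2)  # negative mean squared error

      pop = rng.normal(scale=0.5, size=(40, DIM))
      for gen in range(200):
          scores = np.array([fitness(t) for t in pop])
          parents = pop[np.argsort(scores)[-20:]]        # keep the better half
          children = []
          for _ in range(20):
              a, b = parents[rng.integers(20, size=2)]
              cut = rng.integers(1, DIM)
              child = np.concatenate([a[:cut], b[cut:]]) # one-point crossover
              child += rng.normal(scale=0.05, size=DIM)  # mutation
              children.append(child)
          pop = np.vstack([parents, children])
      print("best MSE:", -max(fitness(t) for t in pop))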

  18. Introduction to neural networks in high energy physics

    International Nuclear Information System (INIS)

    Therhaag, J.

    2013-01-01

    Artificial neural networks are a well established tool in high energy physics, playing an important role in both online and offline data analysis. Nevertheless they are often perceived as black boxes which perform obscure operations beyond the control of the user, resulting in skepticism towards any results that may be obtained using them. The situation is not helped by common explanations which try to draw analogies between artificial neural networks and the human brain, for the brain is an even more complex black box itself. In this introductory text, I will take a problem-oriented approach to neural network techniques, showing how the fundamental concepts arise naturally from the demand to solve classification tasks which are frequently encountered in high energy physics. Particular attention is devoted to the question of how probability theory can be used to control the complexity of neural networks. (authors)

  19. EDITORIAL: Why we need a new journal in neural engineering

    Science.gov (United States)

    Durand, Dominique M.

    2004-03-01

    The field of neural engineering crystallizes for many engineers and scientists an area of research at the interface between neuroscience and engineering. For the last 15 years or so, the discipline of neural engineering (neuroengineering) has slowly appeared at conferences as a theme or track. The first conference devoted entirely to this area was the 1st International IEEE EMBS Conference on Neural Engineering which took place in Capri, Italy in 2003. Understanding how the brain works is considered the ultimate frontier and challenge in science. The complexity of the brain is so great that understanding even the most basic functions will require that we fully exploit all the tools currently at our disposal in science and engineering and simultaneously develop new methods of analysis. While neuroscientists and engineers from varied fields such as brain anatomy, neural development and electrophysiology have made great strides in the analysis of this complex organ, there remains a great deal yet to be uncovered. The potential for applications and remedies deriving from scientific discoveries and breakthroughs is extremely high. As a result of the growing availability of micromachining technology, research into neurotechnology has grown relatively rapidly in recent years and appears to be approaching a critical mass. For example, by understanding how neuronal circuits process and store information, we could design computers with capabilities beyond current limits. By understanding how neurons develop and grow, we could develop new technologies for spinal cord repair or central nervous system repair following neurological disorders. Moreover, discoveries related to higher-level cognitive function and consciousness could have a profound influence on how humans make sense of their surroundings and interact with each other. The ability to successfully interface the brain with external electronics would have enormous implications for our society and facilitate a

  20. Polarity-specific high-level information propagation in neural networks.

    Science.gov (United States)

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  1. ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.

    Science.gov (United States)

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining neural dynamics of cellular neural network with ChainMail mechanism. The proposed method formulates the problem of elastic deformation into cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the soft tissues' nonlinear deformation and typical mechanical behaviors. The proposed method not only improves ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics to simulate soft tissue deformation.

  2. Calcium signaling mediates five types of cell morphological changes to form neural rosettes.

    Science.gov (United States)

    Hříbková, Hana; Grabiec, Marta; Klemová, Dobromila; Slaninová, Iva; Sun, Yuh-Man

    2018-02-12

    Neural rosette formation is a critical morphogenetic process during neural development, whereby neural stem cells are enclosed in rosette niches to equipoise proliferation and differentiation. How neural rosettes form and provide a regulatory micro-environment remains to be elucidated. We employed the human embryonic stem cell-based neural rosette system to investigate the structural development and function of neural rosettes. Our study shows that neural rosette formation consists of five types of morphological change: intercalation, constriction, polarization, elongation and lumen formation. Ca2+ signaling plays a pivotal role in the five steps by regulating the actions of the cytoskeletal complexes, actin, myosin II and tubulin during intercalation, constriction and elongation. These, in turn, control the polarizing elements, ZO-1, PARD3 and β-catenin during polarization and lumen production for neural rosette formation. We further demonstrate that the dismantlement of neural rosettes, mediated by the destruction of cytoskeletal elements, promotes neurogenesis and astrogenesis prematurely, indicating that an intact rosette structure is essential for orderly neural development. © 2018. Published by The Company of Biologists Ltd.

  3. Neural networks and their potential application to nuclear power plants

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1991-01-01

    A network of artificial neurons, usually called an artificial neural network is a data processing system consisting of a number of highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks exhibit characteristics and capabilities not provided by any other technology. Neural networks may be designed so as to classify an input pattern as one of several predefined types or to create, as needed, categories or classes of system states which can be interpreted by a human operator. Neural networks have the ability to recognize patterns, even when the information comprising these patterns is noisy, sparse, or incomplete. Thus, systems of artificial neural networks show great promise for use in environments in which robust, fault-tolerant pattern recognition is necessary in a real-time mode, and in which the incoming data may be distorted or noisy. The application of neural networks, a rapidly evolving technology used extensively in defense applications, alone or in conjunction with other advanced technologies, to some of the problems of operating nuclear power plants has the potential to enhance the safety, reliability and operability of nuclear power plants. The potential applications of neural networking include, but are not limited to diagnosing specific abnormal conditions, identification of nonlinear dynamics and transients, detection of the change of mode of operation, control of temperature and pressure during start-up, signal validation, plant-wide monitoring using autoassociative neural networks, monitoring of check valves, modeling of the plant thermodynamics, emulation of core reload calculations, analysis of temporal sequences in NRC's ''licensee event reports,'' and monitoring of plant parameters

  4. Embedding responses in spontaneous neural activity shaped through sequential learning.

    Directory of Open Access Journals (Sweden)

    Tomoki Kurikawa

    Full Text Available Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using a simple Hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a similar manner to those designed in

  5. Neutron spectra unfolding in Bonner spheres spectrometry using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Setayeshi, S.; Koohi-Fayegh, R.; Ghiassi-Nejad, M.

    2003-01-01

    The neural network method has been used for the unfolding of neutron spectra in neutron spectrometry by Bonner spheres. A back-propagation algorithm was used for training of the neural networks. A 4 mm x 4 mm bare LiI(Eu) detector and a polyethylene sphere set of 2, 3, 4, 5, 6, 7, 8, 10, 12 and 18 inch diameter have been used for unfolding of neutron spectra. Neural networks were trained by 199 sets of neutron spectra, which were subdivided into 6, 8, 10, 12, 15 and 20 energy bins, and for each of them an appropriate neural network was designed and trained. The validation was performed with 21 sets of neutron spectra. A neural network with 10 energy bins, which had a mean error of 6% for dose equivalent estimation of the spectra in the validation set, showed the best results. The obtained results show that neural networks can be applied as an effective method for unfolding neutron spectra, especially when the main target is neutron dosimetry. (author)

  6. Linear programming based on neural networks for radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Xingen Wu; Limin Luo

    2000-01-01

    In this paper, we propose a neural network model for linear programming that is designed to optimize radiotherapy treatment planning (RTP). This kind of neural network can be easily implemented by using a kind of 'neural' electronic system in order to obtain an optimization solution in real time. We first give an introduction to the RTP problem and construct a non-constraint objective function for the neural network model. We adopt a gradient algorithm to minimize the objective function and design the structure of the neural network for RTP. Compared to traditional linear programming methods, this neural network model can reduce the time needed for convergence, the size of problems (i.e., the number of variables to be searched) and the number of extra slack and surplus variables needed. We obtained a set of optimized beam weights that result in a better dose distribution as compared to that obtained using the simplex algorithm under the same initial condition. The example presented in this paper shows that this model is feasible in three-dimensional RTP. (author)
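
    The gradient-based idea behind such optimization networks can be pictured on a toy linear program. The sketch below is a generic penalty-plus-gradient-descent illustration, not the paper's network or the radiotherapy objective; the small LP, the penalty weight and the step size are made up.

      import numpy as np

      # Toy LP:  minimize c^T x  subject to  A x <= b, x >= 0,
      # solved by gradient descent on a quadratically penalized objective,
      # mimicking the continuous dynamics of such optimization networks.
      c = np.array([-1.0, -1.0])                  # i.e. maximize x1 + x2
      A = np.array([[1.0, 2.0], [3.0, 2.0]])
      b = np.array([4.0, 6.0])
      mu, lr = 50.0, 0.0005                       # penalty weight and step size

      x = np.zeros(2)
      for _ in range(20000):
          viol = np.maximum(A @ x - b, 0.0)       # inequality-constraint violations
          neg = np.maximum(-x, 0.0)               # nonnegativity violations
          grad = c + 2 * mu * (A.T @ viol) - 2 * mu * neg
          x -= lr * grad
      print("approximate optimum:", x)            # exact solution of this toy LP is (1.0, 1.5)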

  7. SOLAR PHOTOVOLTAIC OUTPUT POWER FORECASTING USING BACK PROPAGATION NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    B. Jency Paulin

    2016-01-01

    Full Text Available Solar energy is an important, renewable and unlimited source of energy. Solar photovoltaic power forecasting is an estimation of the expected power production that helps grid operators to better manage the electric balance between power demand and supply. A neural network is a computational model that can predict new outcomes from past trends. The artificial neural network is used for photovoltaic plant energy forecasting. The output power of the solar photovoltaic cell is predicted on an hourly basis. In the historical dataset collection process, two datasets were collected and used for analysis. The dataset was provided with three independent attributes and one dependent attribute. The implementation of the artificial neural network structure is done by a multilayer perceptron (MLP), and the training procedure for the neural network is done by error back propagation (BP). In order to train and test the neural network, the datasets are divided in the ratio 70:30. The accuracy of prediction is assessed using various error measurement criteria, and the performance of the neural network is noted.

  8. Function of FEZF1 during early neural differentiation of human embryonic stem cells.

    Science.gov (United States)

    Liu, Xin; Su, Pei; Lu, Lisha; Feng, Zicen; Wang, Hongtao; Zhou, Jiaxi

    2018-01-01

    The understanding of the mechanisms underlying human neural development has been hampered by the lack of a cellular system and complicated ethical issues. Human embryonic stem cells (hESCs) provide an invaluable model for dissecting human development because of their unlimited self-renewal and the capacity to differentiate into nearly all cell types in the human body. In this study, using a chemically defined neural induction protocol and molecular profiling, we identified Fez family zinc finger 1 (FEZF1) as a potential regulator of early human neural development. FEZF1 is rapidly up-regulated during neural differentiation in hESCs and expressed before PAX6, a well-established marker of early human neural induction. We generated FEZF1-knockout H1 hESC lines using CRISPR-Cas9 technology and found that depletion of FEZF1 abrogates neural differentiation of hESCs. Moreover, loss of FEZF1 impairs the pluripotency exit of hESCs during neural specification, which partially explains the neural induction defect caused by FEZF1 deletion. However, enforced expression of FEZF1 itself fails to drive neural differentiation in hESCs, suggesting that FEZF1 is necessary but not sufficient for neural differentiation from hESCs. Taken together, our findings identify one of the earliest regulators expressed upon neural induction and provide insight into early neural development in humans.

  9. Distorted Character Recognition Via An Associative Neural Network

    Science.gov (United States)

    Messner, Richard A.; Szu, Harold H.

    1987-03-01

    The purpose of this paper is two-fold. First, it is intended to provide some preliminary results of a character recognition scheme which has foundations in ongoing neural network architecture modeling, and secondly, to apply some of the neural network results in a real application area where thirty years of effort has had little effect on providing the machine with an ability to recognize distorted objects within the same object class. It is the author's belief that the time is ripe to start applying in earnest the results of over twenty years of effort in neural modeling to some of the more difficult problems which seem so hard to solve by conventional means. The character recognition scheme proposed utilizes a preprocessing stage which performs a 2-dimensional Walsh transform of an input Cartesian image field, then sequency-filters this spectrum into three feature bands. Various features are then extracted and organized into three sets of feature vectors. These vector patterns are then stored and recalled associatively. Two possible associative neural memory models are proposed for further investigation. The first is an outer-product linear matrix associative memory with a threshold function controlling the strength of the output pattern (similar to Kohonen's cross-correlation approach [1]). The second approach is based upon a modified version of Grossberg's neural architecture [2], which provides better self-organizing properties due to its adaptive nature. Preliminary results of the sequency filtering and feature extraction preprocessing stage and a discussion about the use of the proposed neural architectures are included.
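
    The Walsh-transform preprocessing can be pictured with a tiny example. The sketch below projects a made-up 8x8 binary character onto a natural-ordered Hadamard basis (a true sequency ordering would reorder the rows) and keeps a low-order block of coefficients as a feature vector; the sequency band split and the associative memories themselves are not reproduced.

      import numpy as np
      from scipy.linalg import hadamard

      img = np.zeros((8, 8))
      img[1:7, 3] = 1
      img[6, 2:5] = 1                              # a crude hand-drawn "J"

      H8 = hadamard(8)                             # 8x8 Hadamard matrix (natural order)
      coeffs = H8 @ img @ H8 / 64.0                # 2-D Walsh-Hadamard transform
      features = coeffs[:4, :4].ravel()            # low-order block as a feature vector
      print(features)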

  10. Neural network classifier of attacks in IP telephony

    Science.gov (United States)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and the article brings a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is the multilayer perceptron neural network. The article describes the inner structure of the neural network used and also gives information about the implementation of this network. The learning set for this neural network is based on real attack data collected from the IP telephony honeypot called Dionaea. We prepare the learning set from real attack data after collecting, cleaning and aggregating this information. After proper learning, the neural network is capable of classifying 6 types of the most commonly used VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks which are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precaution steps against attacks.

  11. Differential neural network configuration during human path integration

    Science.gov (United States)

    Arnold, Aiden E. G. F; Burles, Ford; Bray, Signe; Levy, Richard M.; Iaria, Giuseppe

    2014-01-01

    Path integration is a fundamental skill for navigation in both humans and animals. Despite recent advances in unraveling the neural basis of path integration in animal models, relatively little is known about how path integration operates at a neural level in humans. Previous attempts to characterize the neural mechanisms used by humans to visually path integrate have suggested a central role of the hippocampus in allowing accurate performance, broadly resembling results from animal data. However, in recent years both the central role of the hippocampus and the perspective that animals and humans share similar neural mechanisms for path integration has come into question. The present study uses a data driven analysis to investigate the neural systems engaged during visual path integration in humans, allowing for an unbiased estimate of neural activity across the entire brain. Our results suggest that humans employ common task control, attention and spatial working memory systems across a frontoparietal network during path integration. However, individuals differed in how these systems are configured into functional networks. High performing individuals were found to more broadly express spatial working memory systems in prefrontal cortex, while low performing individuals engaged an allocentric memory system based primarily in the medial occipito-temporal region. These findings suggest that visual path integration in humans over short distances can operate through a spatial working memory system engaging primarily the prefrontal cortex and that the differential configuration of memory systems recruited by task control networks may help explain individual biases in spatial learning strategies. PMID:24808849

  12. Fast neutron spectra determination by threshold activation detectors using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Koohi-Fayegh, R.; Setayeshi, S.; Ghiassi-Nejad, M.

    2004-01-01

    The neural network method was used for fast neutron spectrum unfolding in spectrometry by threshold activation detectors. The input layer of the neural networks consisted of 11 neurons for the specific activities of neutron-induced nuclear reaction products, while the output layers were fast neutron spectra which had been subdivided into 6, 8, 10, 12, 15 and 20 energy bins. Neural network training was performed with 437 fast neutron spectra and the corresponding threshold activation detector readings. The trained neural networks have been applied for unfolding 50 spectra which were not in the training sets, and the results were compared with the real spectra and with spectra unfolded by SANDII. The best results belong to the 10 energy bin spectra. The neural network was also trained with detector readings with 5% uncertainty, and the response of the trained neural network to detector readings with 5%, 10%, 15%, 20%, 25% and 50% uncertainty was compared with the real spectra. The neural network algorithm, in comparison with other unfolding methods, is very fast, requires neither the detector response matrix nor any prior information about the spectra, and its outputs have low sensitivity to uncertainty in the activity measurements. The results show that the neural network algorithm is useful when a fast response is required with reasonable accuracy

  13. Modeling and Speed Control of Induction Motor Drives Using Neural Networks

    Directory of Open Access Journals (Sweden)

    V. Jamuna

    2010-08-01

    Full Text Available Speed control of induction motor drives using neural networks is presented. The mathematical model of a single phase induction motor is developed. A new Simulink model for a neural network-controlled bidirectional chopper fed single phase induction motor is proposed. Under normal operation, the true drive parameters are identified in real time and they are converted into the controller parameters through multilayer forward computation by neural networks. A comparative study has been made between the conventional and neural network controllers. It is observed that the neural network controlled drive system has better dynamic performance, reduced overshoot and faster transient response than the conventionally controlled system.

  14. Biologically Inspired Modular Neural Control for a Leg-Wheel Hybrid Robot

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Wörgötter, Florentin; Laksanacharoen, Pudit

    2014-01-01

    In this article we present modular neural control for a leg-wheel hybrid robot consisting of three legs with omnidirectional wheels. This neural control has four main modules having their functional origin in biological neural systems. A minimal recurrent control (MRC) module is for sensory signal processing and state memorization. Its outputs drive two front wheels while the rear wheel is controlled through a velocity regulating network (VRN) module. In parallel, a neural oscillator network module serves as a central pattern generator (CPG) controlling leg movements for sidestepping. Stepping directions ... or they can serve as useful modules for other module-based neural control applications.

  15. Vibration monitoring with artificial neural networks

    International Nuclear Information System (INIS)

    Alguindigue, I.

    1991-01-01

    Vibration monitoring of components in nuclear power plants has been used for a number of years. This technique involves the analysis of vibration data coming from vital components of the plant to detect features which reflect the operational state of machinery. The analysis leads to the identification of potential failures and their causes, and makes it possible to perform efficient preventive maintenance. Early detection is important because it can decrease the probability of catastrophic failures, reduce forced outage, maximize utilization of available assets, increase the life of the plant, and reduce maintenance costs. This paper documents our work on the design of a vibration monitoring methodology based on neural network technology. This technology provides an attractive complement to traditional vibration analysis because of the potential of neural networks to operate in real-time mode and to handle data which may be distorted or noisy. Our efforts have been concentrated on the analysis and classification of vibration signatures collected from operating machinery. Two neural network algorithms were used in our project: the Recirculation algorithm for data compression and the Backpropagation algorithm to perform the actual classification of the patterns. Although this project is in the early stages of development, it indicates that neural networks may provide a viable methodology for monitoring and diagnostics of vibrating components. Our results to date are very encouraging

  16. Neutron spectrometry using artificial neural networks

    International Nuclear Information System (INIS)

    Vega-Carrillo, Hector Rene; Martin Hernandez-Davila, Victor; Manzanares-Acuna, Eduardo; Mercado Sanchez, Gema A.; Pilar Iniguez de la Torre, Maria; Barquero, Raquel; Palacios, Francisco; Mendez Villafane, Roberto; Arteaga Arteaga, Tarcicio; Manuel Ortiz Rodriguez, Jose

    2006-01-01

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions as well as few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing was carried out using the Matlab (R) program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem

  17. Development of teeth in chick embryos after mouse neural crest transplantations

    OpenAIRE

    Mitsiadis, Thimios A.; Chéraud, Yvonnick; Sharpe, Paul; Fontaine-Pérus, Josiane

    2003-01-01

    Teeth were lost in birds 70–80 million years ago. Current thinking holds that it is the avian cranial neural crest-derived mesenchyme that has lost odontogenic capacity, whereas the oral epithelium retains the signaling properties required to induce odontogenesis. To investigate the odontogenic capacity of ectomesenchyme, we have used neural tube transplantations from mice to chick embryos to replace the chick neural crest cell populations with mouse neural crest cells. The mouse/chick ...

  18. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
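
    A classical special case of such matrix-valued dynamics is the gradient flow dV/dt = -A^T (A V - I), which, from the zero initial state, converges to the Moore-Penrose inverse when A has full column rank. The Euler-integration sketch below illustrates that special case only, not the more general outer-inverse dynamics with prescribed range and null space proposed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.normal(size=(4, 3))                  # full column rank with probability 1
      V = np.zeros((3, 4))                         # zero initial state, as required
      dt = 0.01
      for _ in range(20000):
          V -= dt * (A.T @ (A @ V - np.eye(4)))    # Euler step of dV/dt = -A^T (A V - I)
      print("deviation from pinv(A):", np.max(np.abs(V - np.linalg.pinv(A))))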

  19. Google matrix analysis of C.elegans neural network

    Energy Technology Data Exchange (ETDEWEB)

    Kandiah, V., E-mail: kandiah@irsamc.ups-tlse.fr; Shepelyansky, D.L., E-mail: dima@irsamc.ups-tlse.fr

    2014-05-01

    We study the structural properties of the neural network of the C.elegans (worm) from a directed graph point of view. The Google matrix analysis is used to characterize the neuron connectivity structure and node classifications are discussed and compared with physiological properties of the cells. Our results are obtained by a proper definition of neural directed network and subsequent eigenvector analysis which recovers some results of previous studies. Our analysis highlights particular sets of important neurons constituting the core of the neural system. The applications of PageRank, CheiRank and ImpactRank to characterization of interdependency of neurons are discussed.
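
    The PageRank part of the analysis can be illustrated with a toy directed graph; the five-node adjacency matrix below is made up (it is not the worm connectome) and the damping factor 0.85 is the conventional default.

      import numpy as np

      # Power iteration on the Google matrix of a small made-up directed graph.
      A = np.array([[0, 1, 1, 0, 0],     # entry (i, j) = 1 means a link i -> j
                    [0, 0, 1, 0, 0],
                    [1, 0, 0, 1, 0],
                    [0, 0, 0, 0, 1],
                    [0, 0, 1, 0, 0]], dtype=float)

      alpha, n = 0.85, A.shape[0]
      out = A.sum(axis=1, keepdims=True)
      S = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)  # row-stochastic; dangling rows uniform
      G = alpha * S.T + (1 - alpha) / n                               # column-stochastic Google matrix
      p = np.full(n, 1.0 / n)
      for _ in range(100):
          p = G @ p                                                   # power iteration
      print("PageRank vector:", p / p.sum())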

  20. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.