WorldWideScience

Sample records for hipoacusia sensorio neural

  1. Childhood sensorineural hearing loss

    OpenAIRE

    Santos Santos, Saturnino

    2004-01-01

    In our setting there is a lack of information about the importance of the risk factors involved in the onset of childhood sensorineural hearing loss and about the etiologies found. A population of 2,656 children referred to our center for hearing assessment because they presented risk factors was studied retrospectively. 481 children were diagnosed with unilateral or bilateral sensorineural hearing loss of some degree. The mean age at diagnosis of sensorineural hearing loss...

  2. HEARING LOSS: SIGNIFICANCE, INCIDENCE AND PREVALENCE

    Directory of Open Access Journals (Sweden)

    Dra. Constanza Díaz

    2016-11-01

    Full Text Available Hearing loss, or hearing disability, is a prevalent condition in the population, affecting around 360 million people worldwide and causing different levels of disability, ranging from the physical to the social and psychological. The origin of hearing loss can be diverse; knowing its causes and its associated risk factors is essential for early diagnosis and timely treatment. The incidence and prevalence of hearing loss are expected to increase substantially in the coming years owing to the demographic transition under way worldwide. It is important that the treatment and management of these patients focus not only on auditory rehabilitation, but also on counseling and education to support adherence and good outcomes.

  3. Behavior of sensorineural hearing loss in children

    OpenAIRE

    Álvarez Amador, Héctor Eduardo; Vega Ulloa, Nuris; Castillo Toledo, Luis; Santana Álvarez, Jorge; Betancourt Camargo, María de los Ángeles; Miranda Ramos, María de los Ángeles

    2011-01-01

    Background: sensorineural hearing loss in children has serious consequences for the acquisition of language, an important attribute for adequate learning and social performance. Objective: to study the behavior of sensorineural hearing loss in children in the province of Camagüey. Method: a descriptive study of the behavior of sensorineural hearing loss in children in the province of Camagüey was carried out over the period from January 2007 to December 2009. The study universe...

  4. Heredodegenerative ataxia associated with hearing loss

    Directory of Open Access Journals (Sweden)

    José Antonio Levy

    1964-06-01

    Full Text Available Three brothers, aged 16, 8, and 6 years, all male, with heredodegenerative ataxia associated, in two of them, with hearing loss, are studied. The family history includes a similar illness in a grandfather and a great-uncle. The differential diagnosis with Pierre Marie's disease, Charcot-Marie-Tooth disease, Refsum syndrome, and hypertrophic interstitial neuritis is discussed, and the resemblance of the cases studied to Friedreich's disease is emphasized. Comments are made on the association of Friedreich's disease with hearing disorders.

  5. HEARING LOSS AND THE EXPLICIT HEALTH GUARANTEES SYSTEM (GES)

    OpenAIRE

    Torrente, Dra. Mariela

    2016-01-01

    The Explicit Health Guarantees (GES) system includes three conditions related to hearing loss: hearing loss in people over 65 years of age, bilateral hearing loss of the premature infant, and treatment of moderate and severe hearing loss in children under 2 years of age. This article presents a critical analysis of the clinical guidelines, with emphasis on aspects to improve.

  6. Ethical aspects of neonatal hearing screening in Chile

    OpenAIRE

    Cardemil M, Felipe

    2012-01-01

    Neonatal hearing loss is one of the most frequent congenital abnormalities. Its importance lies in the fact that, if it is not detected in time, it affects language development, communication skills, and people's cognitive and social development. In Chile there are no reliable estimates of its population incidence among newborns, because no national neonatal hearing screening program exists. In this article...

  7. Auditory evoked potentials in children at neonatal risk for hearing loss

    Directory of Open Access Journals (Sweden)

    Saúl Garza Morales

    1997-02-01

    Full Text Available Brainstem auditory evoked potentials (BAEPs) are a simple, noninvasive method of evaluating auditory function, widely used in children for early detection of hearing loss. Between April 1992 and May 1994, 400 Mexican children presenting at least one neonatal risk factor for hearing loss were studied. The mean age of the children studied was 6.6 months, and the mean gestational age at birth was 35.1 weeks. 51% of them had been treated with amikacin. A total of 1,427 risk factors were recorded (3.5 per child), the most frequent being exposure to ototoxic drugs, hyperbilirubinemia, and birthweight under 1,500 g. Peripheral auditory alterations were found in 27%, and absence of response to auditory stimuli in 13%. Low birthweight, younger gestational age at birth, peak serum bilirubin concentration, sepsis, subependymal or intraventricular hemorrhage, mechanical ventilation, and exposure to ototoxic drugs were significantly associated with severe or profound hearing loss.

  8. Medicolegal assessment of hearing loss

    OpenAIRE

    Maikel Vargas Sanabria

    2012-01-01

    This review covers the most basic aspects of sound and of the hearing process: first the physical aspects of the former, and then the anatomical and physiological aspects of that process, so that the forensic expert has at hand the elements needed to perform the clinical tests and to order whatever complementary examinations he or she considers pertinent for an adequate assessment of hearing loss of occupational origin or secondary to trauma...

  9. Dominant optic neuropathy associated with hearing loss and late presentation

    Directory of Open Access Journals (Sweden)

    Eduardo Scaldini Buscacio

    2013-10-01

    Full Text Available Kjer's optic neuropathy, or dominant optic atrophy, is the most frequent of the familial optic neuropathies. It is an autosomal dominant optic atrophy caused by an alteration in the OPA1 gene, on chromosome 3q28, with 98% penetrance. Only 15% of cases have visual acuity of 0.1 or worse, with varying degrees of disc atrophy. This report aims to describe the genetic and clinical characteristics of the disease, as well as to present family counseling measures. To that end, we report a clinical case of dominant optic atrophy with marked loss of visual acuity, atypically late onset of manifestations, and bilateral hearing loss.

  10. Assessment of the risk of developing hearing loss among music conservatory students

    OpenAIRE

    Santirso-Sánchez, Sara

    2013-01-01

    This work analyzes the risk of developing noise-induced hearing loss among music students as a consequence of their own activity of practice, study, and rehearsal with their instrument, given their prolonged exposure to high-intensity sound. On the one hand, the literature on hearing problems in musicians has been examined and the basic physical and biological concepts of hearing have been reviewed, in order to identify...

  11. Molecular Analysis of the 2299delG and C759F Mutations in Colombian Individuals with Retinitis Pigmentosa and Sensorineural Hearing Loss

    OpenAIRE

    López, Greizy; Gelvez, Nancy Yaneth; Urrego, Luisa Fernanda; Florez, Silvia; Medina, David; Rodríguez, Vicente; Tamayo, Marta Lucía

    2014-01-01

    Objective: to determine the presence of the 2299delG and C759F mutations in 37 unrelated Colombian individuals with an association of RP and sensorineural hearing loss. Materials and methods: direct sequence analysis of exon 13 of the USH2A gene in all individuals selected for the study. Results: the 2299delG mutation was observed only in individuals with Usher syndrome type II, whereas the C759F mutation was not observed in any individual in the study...

  12. Epidemiological profile of hearing loss among rotary-wing personnel of the Guaymaral company (National Police of Colombia)

    OpenAIRE

    Vásquez Quintero, Rafael

    2013-01-01

    Sensorineural hearing loss is the loss of hearing produced by injury to the cochlear neurosensory elements or to the cochlear nerve, due to physical or other agents, chief among them noise. One tenth of hearing losses are related to occupational noise exposure; however, other factors must be taken into account, such as noise exposure outside of work. In this descriptive...

  15. Evaluation of hyperbilirubinemia as a risk factor for sensorineural hearing loss in the universal childhood hearing screening program of the Complejo Hospitalario Universitario Insular Materno Infantil de Gran Canaria between 2007 and 2011

    OpenAIRE

    Corujo Santana, Cándido

    2014-01-01

    Doctoral program: Avances en Traumatología, Medicina del Deporte y Cuidados de Heridas. Bilirubin is a pigment that is highly toxic to biological systems, especially the nervous system. In 1994 the Joint Committee on Infant Hearing established the list of conditions in which the incidence of hearing loss is higher than in the general population. In Spain, CODEPEH has drawn up a list of risk indicators (updated in 2010) which, when present...

  16. Neurosensory hypoacusis in a Noonan's syndrome and Poland's sequence

    Directory of Open Access Journals (Sweden)

    Julianis Loraine Quintero Noa

    2010-09-01

    Full Text Available It is estimated that 50% of cases of profound deafness in childhood may be of genetic origin. The case is presented of a 9-year-old boy, seen in the Otorhinolaryngology and Genetics Services of the "William Soler" Teaching Children's Hospital for severe unilateral sensorineural hearing loss and congenital Mondini dysplasia in the left ear, on the side opposite the hypoplasia of the pectoralis major muscle, consistent with a Noonan syndrome and Poland sequence, which is of special interest. The hearing loss was confirmed with pure-tone audiometry and brainstem auditory evoked potentials. Tomography of the ear showed cochlear hypoplasia with agenesis of the apical turn. The clinical manifestations and the importance of the otologic and imaging study in the diagnosis of the hearing loss are highlighted.

  17. Economic benefits of the cochlear implant for treating profound sensorineural hearing loss

    Directory of Open Access Journals (Sweden)

    Augusto Peñaranda

    2012-04-01

    Full Text Available OBJECTIVE: Evaluate the cost-benefit (CB), cost-utility (CU), and cost-effectiveness (CE) of cochlear implantation, comparing it with the use of hearing aids in children with profound bilateral sensorineural hearing loss. METHODS: The nonparametric propensity score matching (PSM) technique was used to carry out the economic impact assessment of the implant and then perform the CB, CU, and CE analyses. Primary information was used, taken randomly from 100 patients: 62 who underwent surgery for the cochlear implant (treatment group) and 38 belonging to the control group, users of hearing aids to treat profound sensorineural hearing loss. RESULTS: A differential in economic costs, in favor of the cochlear implant, of close to US$ 204,000 was found between the implant and the use of hearing aids over the life expectancy of the patients analyzed. That figure reflects the greater expenses that hearing-aid patients must cover. With this discounted value, the cost-benefit indicator shows that for every dollar invested in the cochlear implant to treat the patient, the return on the investment is US$ 2.07. CONCLUSIONS: The cochlear implant generates economic benefits for the patient. It also produces health utilities, given that a positive relationship was found for CU (gain in decibels) and CE (gain in speech discrimination).

  18. Color of the iris and hypoacusis in Waardenburg Syndrome. Pinar del Río, Cuba

    Directory of Open Access Journals (Sweden)

    Fidel Castro Pérez

    2012-06-01

    Full Text Available Introduction: although sensorineural hearing loss and iris pigmentary changes have been described, the relationship between them has not been studied previously. Objectives: to describe and analyze the possible association of hearing loss, and of its depth, with the color of the iris in a family affected by the syndrome, which would constitute a new contribution to the understanding of Waardenburg syndrome (WS). Material and method: an observational, cross-sectional, descriptive case study with some analytic aspects was carried out in people with WS in Sandino municipality. Summary measures for qualitative variables were used, and the chi-squared test was used to measure association at the 95% confidence level. Results: 15 individuals had sensorineural hearing loss of varying distribution and intensity, with a predominance of bilateral brown and blue eyes. A higher frequency of individuals with hearing loss was detected among those with blue eyes, with association between the two variables (X² = 6.47, df = 1; p = 0.01). Hearing loss intensity was greater among individuals with blue eyes (85.7% with severe or profound hearing loss), three times higher than for the other eye colors. Conclusions: there is a relationship between blue iris color and the presence of hearing loss, and greater intensity of the latter, in individuals with WS.

  19. Early detection of non-congenital neonatal hearing loss in newborns undergoing mechanical ventilation in a neonatology unit, June–September 2012

    OpenAIRE

    Díaz Torres, Mónica; Duque Cevallos, Sandra Marcela

    2013-01-01

    Introduction: mechanical ventilation is one of the causes of hearing loss in newborns admitted to a neonatal intensive care unit. Objective: to establish the risk of neonates undergoing mechanical ventilation developing non-congenital hearing loss at Hospital Enrique Garcés in Quito from June to September 2012. Subjects: 101 patients hospitalized in the neonatal intensive care unit were investigated, of whom 20.79%...

  20. Hearing loss prevalence in patients with diabetes mellitus type 1

    Directory of Open Access Journals (Sweden)

    Diego Augusto Malucelli

    2012-06-01

    Full Text Available Diabetes mellitus (DM) is a chronic degenerative disease caused by absent insulin production or inadequate insulin use. Chronic complications of DM in the auditory system can cause atrophy of the spiral ganglion, degeneration of the myelin sheath of the eighth cranial nerve, reduction of nerve fibers in the spiral lamina, or thickening of the capillary walls of the stria vascularis and of the small arteries. OBJECTIVE: to verify the hearing thresholds of individuals with type 1 DM. MATERIALS AND METHODS: a clinical study involving 60 individuals, divided into a study group (SG) of diabetic individuals and a control group (CG) of non-diabetic individuals. History-taking, physical and otorhinolaryngologic examination, and audiometric testing were performed. RESULTS: regarding hearing thresholds, in the SG there was a statistically significant difference at the frequencies 250, 500, 10,000, 11,200, 12,500, 14,000, and 16,000 Hz in both ears and in the ear averages. Comparing the SG and CG, there was a statistically significant difference, with a higher probability of hearing loss at some frequency, regardless of the ear tested, in the SG. CONCLUSIONS: there were statistically significant differences in the audiologic findings of the SG compared with the CG, justifying complete audiologic evaluation in type 1 diabetic patients, including high-frequency audiometry.

  1. Hypoacousis prevalence in Kaiowá and Guarani indigenous children

    Directory of Open Access Journals (Sweden)

    Renata Palópoli Pícoli

    2006-06-01

    Full Text Available OBJECTIVES: to determine hypoacousis prevalence in Kaiowá and Guarani indigenous children. METHODS: a cross-sectional study was performed using a sample of 126 indigenous children from zero to 59 months old from the Caarapó Indian Reserve, Mato Grosso do Sul, Brazil. Hearing screening was performed by measuring transient evoked otoacoustic emissions. Children with hearing impairment were retested, and cases confirmed on retest were referred for imitanciometry testing. RESULTS: during hearing screening, 25 (23.6%) children showed hearing impairment. Seventeen children had normal outcomes on retest, and six of them had hearing impairment confirmed and were referred for imitanciometry testing. Hypoacousis prevalence identified by the study reached 5.6%: 3 (2.8%) suggestive of the conductive type and 3 (2.8%) of the sensorineural type; the latter were referred for complementary otorhinolaryngologic assessment for diagnostic confirmation. The hearing impairment cases determined by this study were not statistically significant in relation to gender and age. CONCLUSIONS: the prevalence of hearing impairment determined in the studied population suggests the need for hearing health programs to be developed together with other child health programs.

  2. The effect of tympanoplasty on tinnitus in patients with conductive hearing loss: a six-month follow-up

    Directory of Open Access Journals (Sweden)

    Adriana da Silva Lima

    2007-06-01

    Full Text Available Tympanoplasty aims to eradicate middle-ear disease and to restore the sound-conduction mechanism. Some patients, however, are bothered by tinnitus and often ask the physician about the results of surgery with respect to it. AIM: to evaluate the progression of tinnitus in patients with conductive hearing loss after tympanoplasty. STUDY DESIGN: prospective cohort. MATERIALS AND METHODS: 23 patients with tinnitus complaints and a diagnosis of simple chronic otitis media with surgical indication were evaluated. The patients underwent a medical and audiological tinnitus investigation protocol before tympanoplasty and at 30 and 180 days afterwards. RESULTS: 82.6% of the patients showed improvement or abolition of the tinnitus. Tinnitus annoyance improved significantly from before surgery (5.26) to 30 and 180 days afterwards (1.91), as did hearing-loss annoyance (6.56 preoperatively versus 3.65 and 2.91 postoperatively). Audiometry showed improvement of the tonal threshold at all frequencies except 8 kHz, with closure or a maximum gap of 10 dB HL in 61% of cases. Total graft take occurred in 78% of cases. CONCLUSION: besides improving the hearing loss, tympanoplasty also yields good results in tinnitus control.

  3. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive.

  4. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of a particle or where energy is deposited by the particle. The data produced by these signals are fed into pattern recognition programs to try to identify which particles were produced, and to measure the energy and direction of these particles. Many techniques are used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit with Java Analysis Studio (JAS3), an application that allows data from any experiment to be analyzed. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before a neural network can be implemented, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. It is also important to read the word "artificial" in that definition as meaning computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe how neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing...
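    The "learning by representative examples" idea described in this abstract can be made concrete with a minimal single-neuron sketch (illustrative only; this is not the JAS3 tool kit itself). A perceptron is shown examples and adjusts its weights only when its output disagrees with the example's label:

    ```python
    def train_perceptron(samples, epochs=20, lr=0.1):
        """Train a single artificial neuron on representative examples."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, target in samples:
                out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                err = target - out        # learn only from mistakes
                w[0] += lr * err * x[0]
                w[1] += lr * err * x[1]
                b += lr * err
        return w, b

    # Representative examples of logical AND: the network is never
    # given a rule, only labeled examples.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(data)

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
    ```

    After training, `predict` reproduces all four examples, having inferred the separating rule purely from the data.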

  5. Evolvable synthetic neural system

    Science.gov (United States)

    Curtis, Steven A. (Inventor)

    2009-01-01

    An evolvable synthetic neural system includes an evolvable neural interface operably coupled to at least one neural basis function. Each neural basis function includes an evolvable neural interface operably coupled to a heuristic neural system to perform high-level functions and an autonomic neural system to perform low-level functions. In some embodiments, the evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy.

  6. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and its use as a data classifier. The concept is then presented in a second part, using the mathematical approach in more detail and focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools for event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.
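    A minimal sketch of the multi-layer perceptron this lecture discusses, used as a data classifier (the layer sizes and random inputs are illustrative assumptions, not taken from the article):

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def mlp_forward(x, W1, b1, W2, b2):
        # One hidden layer followed by one output neuron: the classic
        # multi-layer perceptron, producing a classifier score in (0, 1).
        h = sigmoid(W1 @ x + b1)
        return sigmoid(W2 @ h + b2)

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                         # one event, 4 input variables
    W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)  # 3 hidden neurons
    W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # 1 output neuron
    score = mlp_forward(x, W1, b1, W2, b2)
    ```

    In practice the weights would be set by training on labeled events (signal versus background) rather than drawn at random.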

  7. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
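The defining operation described above, a maximum (or minimum) of sums in place of a sum of products, fits in a few lines. The function name and toy inputs below are illustrative only, not taken from the paper:

```python
def morphological_neuron(x, w, use_max=True):
    """Morphological computation: max (or min) of sums replaces sum of products.

    A classical linear neuron would compute sum(xi * wi); here each input is
    shifted by its weight and the neuron takes the extremum, which is already
    nonlinear before any thresholding.
    """
    sums = [xi + wi for xi, wi in zip(x, w)]
    return max(sums) if use_max else min(sums)
```

With zero weights the neuron simply reports the largest (or smallest) input, which makes the nonlinearity easy to see.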

  8. Neural Tube Defects

    Science.gov (United States)

    Neural tube defects are birth defects of the brain, spine, or spinal cord. They happen in the ... that she is pregnant. The two most common neural tube defects are spina bifida and anencephaly. In ...

  9. Aprendizaje de la lectoescritura en el alumnado con hipoacusia

    OpenAIRE

    Moya Salvador, Beatriz

    2015-01-01

    One of the main difficulties faced by the population with hearing impairment is learning to read and write. Since the Congress recently, on the thirteenth of February 2015, approved the non-binding motion to promote chess and introduce it as a subject in schools, this project has developed a method, based on chess, to support reading and writing for students with hearing impairments. The aim of this work is for the students to overcome their diffi...

  10. Neural tissue-spheres

    DEFF Research Database (Denmark)

    Andersen, Rikke K; Johansen, Mathias; Blaabjerg, Morten

    2007-01-01

    By combining new and established protocols we have developed a procedure for isolation and propagation of neural precursor cells from the forebrain subventricular zone (SVZ) of newborn rats. Small tissue blocks of the SVZ were dissected and propagated en bloc as free-floating neural tissue...... content, thus allowing experimental studies of neural precursor cells and their niche...

  11. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral nervous systems depends in part on the emergence of correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement for building a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and design its structure and learning algorithm. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks. (interdisciplinary physics and related areas of science and technology)
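A diagonal recurrent layer, as the name suggests, restricts recurrence so that each hidden unit feeds back only to itself rather than to the whole layer. A minimal forward-pass sketch of that structure (illustrative names and toy sizes, not the authors' code or their chaotic variant):

```python
import math

def drnn_step(state, w_rec, w_in, x):
    """One step of a diagonal recurrent layer.

    Each hidden unit j sees only its own previous activation (w_rec[j] is a
    scalar self-feedback weight), plus the shared external input x, so the
    recurrent weight matrix is diagonal rather than full.
    """
    new_state = []
    for j in range(len(state)):
        net = w_rec[j] * state[j] + sum(w_in[j][i] * xi for i, xi in enumerate(x))
        new_state.append(math.tanh(net))
    return new_state
```

With no external input, unit 0's activation depends only on unit 0's past, which is exactly the diagonal restriction.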

  13. Evolvable Neural Software System

    Science.gov (United States)

    Curtis, Steven A.

    2009-01-01

    The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENIs provide the basis for inter-NBF and inter-node connectivity.

  14. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system...... is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...

  15. Neural Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — As part of the Electrical and Computer Engineering Department and The Institute for System Research, the Neural Systems Laboratory studies the functionality of the...

  16. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  17. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  18. Consciousness and neural plasticity

    DEFF Research Database (Denmark)

    changes or to abandon the strong identity thesis altogether. Were one to pursue a theory according to which consciousness is not an epiphenomenon to brain processes, consciousness may in fact affect its own neural basis. The neural correlate of consciousness is often seen as a stable structure, that is...

  19. Dynamics of neural cryptography.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
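The tree parity machine dynamics described above can be sketched in code. The sizes K, N, L are toy values and the Hebbian update shown is one common variant of the rule; this is an illustration of bidirectional synchronization, not the authors' implementation:

```python
import random

L, K, N = 3, 3, 4  # weight bound, hidden units, inputs per unit (toy sizes)

def tpm_output(w, x):
    """Tree parity machine: tau is the product of the hidden-unit signs."""
    sigma = []
    for k in range(K):
        h = sum(w[k][i] * x[k][i] for i in range(N))
        sigma.append(1 if h >= 0 else -1)  # break ties toward +1
    tau = 1
    for s in sigma:
        tau *= s
    return tau, sigma

def hebbian_update(w, x, tau, sigma):
    """Adjust only units that agree with the overall output; clip to [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            for i in range(N):
                w[k][i] = max(-L, min(L, w[k][i] + x[k][i] * tau))

def sync_step(wa, wb, rng):
    """One public exchange: shared random input, update only on agreement."""
    x = [[rng.choice([-1, 1]) for _ in range(N)] for _ in range(K)]
    ta, sa = tpm_output(wa, x)
    tb, sb = tpm_output(wb, x)
    if ta == tb:  # the attractive, bidirectional case
        hebbian_update(wa, x, ta, sa)
        hebbian_update(wb, x, tb, sb)
    return ta == tb
```

One property is immediate from the code: once the two machines hold identical weights they always agree and receive identical updates, so full synchronization is an absorbing state of the random walk.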

  20. Dynamics of neural cryptography

    International Nuclear Information System (INIS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-01-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible

  2. ANT Advanced Neural Tool

    Energy Technology Data Exchange (ETDEWEB)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-07-01

    This paper describes a practical introduction to the use of Artificial Neural Networks. Artificial neural nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high degree of difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of neural net, a Multilayer Perceptron developed in the C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, along with some suggestions about hardware implementations. (Author) 15 refs.

  3. ANT Advanced Neural Tool

    International Nuclear Information System (INIS)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-01-01

    This paper describes a practical introduction to the use of Artificial Neural Networks. Artificial neural nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high degree of difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of neural net, a Multilayer Perceptron developed in the C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, along with some suggestions about hardware implementations. (Author) 15 refs

  4. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  5. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  6. Active Neural Localization

    OpenAIRE

    Chaplot, Devendra Singh; Parisotto, Emilio; Salakhutdinov, Ruslan

    2018-01-01

    Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose "Active Neural Localizer", a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of tradition...

  7. Neural cryptography with feedback.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Shacham, Lanir; Kanter, Ido

    2004-04-01

    Neural cryptography is based on a competition between attractive and repulsive stochastic forces. A feedback mechanism is added to neural cryptography which increases the repulsive forces. Using numerical simulations and an analytic approach, the probability of a successful attack is calculated for different model parameters. Scaling laws are derived which show that feedback improves the security of the system. In addition, a network with feedback generates a pseudorandom bit sequence which can be used to encrypt and decrypt a secret message.

  8. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
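The consensual decision step described above (weighting and combining the class scores of independently trained stage networks) can be sketched as follows; the function name and example scores are hypothetical, and the real PCNN chooses its weights by optimization rather than by hand:

```python
def consensual_decision(stage_outputs, weights):
    """Combine per-class scores from several stage networks by weighted average.

    stage_outputs: one score vector per stage network (one score per class).
    weights:       one consensus weight per stage network.
    Returns the index of the winning class.
    """
    n_classes = len(stage_outputs[0])
    combined = [0.0] * n_classes
    total = sum(weights)
    for scores, w in zip(stage_outputs, weights):
        for c in range(n_classes):
            combined[c] += w * scores[c] / total
    return max(range(n_classes), key=lambda c: combined[c])
```

Re-weighting the stages can flip the consensus: with equal weights a majority of stages dominates, while upweighting one reliable stage lets it carry the decision.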

  9. Neural Architectures for Control

    Science.gov (United States)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on a MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.

  10. Sacred or Neural?

    DEFF Research Database (Denmark)

    Runehov, Anne Leona Cesarine

    Are religious spiritual experiences merely the product of the human nervous system? Anne L.C. Runehov investigates the potential of contemporary neuroscience to explain religious experiences. Following in the footsteps of Michael Persinger, Andrew Newberg and Eugene d'Aquili she defines...... the terminological boundaries of "religious experiences" and explores the relevant criteria for the proper evaluation of scientific research, with a particular focus on the validity of reductionist models. Runehov's thesis is that the perspectives examined do not necessarily exclude each other but can be merged....... The question "sacred or neural?" becomes a statement "sacred and neural". The synergies thus produced provide manifold opportunities for interdisciplinary dialogue and research....

  11. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and the pseudo-inverse. This is largely an exercise in understanding how our neural network code works. 1 ref.
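Under the "deconvolution as matrix inversion" reading above, the LMS baseline amounts to gradient descent on the squared residual of the convolution system. A small sketch of that idea (invented helper name and toy kernel, not the report's code):

```python
def lms_deconvolve(kernel, y, n, steps=2000, lr=0.05):
    """Recover x from y = conv(kernel, x) by gradient descent on ||A x - y||^2."""
    # Build the (Toeplitz) convolution matrix A for a length-n input:
    # column i of A is the kernel shifted down by i samples.
    m = len(y)
    A = [[0.0] * n for _ in range(m)]
    for i in range(n):
        for j, kj in enumerate(kernel):
            if i + j < m:
                A[i + j][i] = kj
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # LMS-style step: x -= lr * A^T r
        for j in range(n):
            g = sum(A[i][j] * r[i] for i in range(m))
            x[j] -= lr * g
    return x
```

Because the convolution matrix has full column rank here, the iteration converges to the same least-squares answer the pseudo-inverse would give.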

  12. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note gives an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  13. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance an training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  14. Neural correlates of consciousness

    African Journals Online (AJOL)

    neural cells.1 Under this approach, consciousness is believed to be a product of the ... possible only when the 40 Hz electrical hum is sustained among the brain circuits, ... expect the brain stem ascending reticular activating system. (ARAS) and the ... related synchrony of cortical neurons.11 Indeed, stimulation of brainstem ...

  15. Neural Networks and Micromechanics

    Science.gov (United States)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  16. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuroscience. It draws its methods in large degree from statistical physics and its potential applications lie mainly in computer science and engineering. Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives a historical presentation of the development of neural networks and of the interest in using them to perform complex tasks. Then, an exhaustive overview of data management and network computation methods is given: supervised learning and the associative memory problem, the capacity of networks, Perceptron networks, functional link networks, Madaline (Multiple Adaline) networks, back-propagation networks, reduced coulomb energy (RCE) networks, unsupervised learning, and competitive learning and vector quantization. An example of application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors at LEAR at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  17. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.
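The localized radial basis function (RBF) approximation at the core of the scheme above can be illustrated in one dimension. The centers, width, and the LMS-style weight update below are toy choices for the sketch, not the paper's controller:

```python
import math

def rbf_features(x, centers, width=0.5):
    """Localized Gaussian radial basis functions: each is active near its center."""
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def train_rbf(xs, ys, centers, steps=500, lr=0.2):
    """Fit the output weights of an RBF network by per-sample gradient (LMS) steps."""
    w = [0.0] * len(centers)
    for _ in range(steps):
        for x, y in zip(xs, ys):
            phi = rbf_features(x, centers)
            err = sum(wi * p for wi, p in zip(w, phi)) - y
            for j in range(len(w)):
                w[j] -= lr * err * phi[j]
    return w
```

The locality is the point made in the abstract: because each basis function is nonzero only near its center, training data along a trajectory excites, and hence adjusts, only the weights local to that trajectory.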

  18. Neural systems for control

    National Research Council Canada - National Science Library

    Omidvar, Omid; Elliott, David L

    1997-01-01

    ... is reprinted with permission from A. Barto, "Reinforcement Learning," Handbook of Brain Theory and Neural Networks, M.A. Arbib, ed.. The MIT Press, Cambridge, MA, pp. 804-809, 1995. Chapter 4, Figures 4-5 and 7-9 and Tables 2-5, are reprinted with permission, from S. Cho, "Map Formation in Proprioceptive Cortex," International Jour...

  19. Neural underpinnings of music

    DEFF Research Database (Denmark)

    Vuust, Peter; Gebauer, Line K; Witek, Maria A G

    2014-01-01

    ... According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Fourth, empirical studies of neural and behavioral effects of syncopation, polyrhythm and groove will be reported, and we...

  20. Bioprinting for Neural Tissue Engineering.

    Science.gov (United States)

    Knowlton, Stephanie; Anand, Shivesh; Shah, Twisha; Tasoglu, Savas

    2018-01-01

    Bioprinting is a method by which a cell-encapsulating bioink is patterned to create complex tissue architectures. Given the potential impact of this technology on neural research, we review the current state-of-the-art approaches for bioprinting neural tissues. While 2D neural cultures are ubiquitous for studying neural cells, 3D cultures can more accurately replicate the microenvironment of neural tissues. By bioprinting neuronal constructs, one can precisely control the microenvironment by specifically formulating the bioink for neural tissues, and by spatially patterning cell types and scaffold properties in three dimensions. We review a range of bioprinted neural tissue models and discuss how they can be used to observe how neurons behave, understand disease processes, develop new therapies and, ultimately, design replacement tissues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Analysis of neural data

    CERN Document Server

    Kass, Robert E; Brown, Emery N

    2014-01-01

    Continual improvements in data collection and processing have had a huge impact on brain research, producing data sets that are often large and complicated. By emphasizing a few fundamental principles, and a handful of ubiquitous techniques, Analysis of Neural Data provides a unified treatment of analytical methods that have become essential for contemporary researchers. Throughout the book ideas are illustrated with more than 100 examples drawn from the literature, ranging from electrophysiology, to neuroimaging, to behavior. By demonstrating the commonality among various statistical approaches the authors provide the crucial tools for gaining knowledge from diverse types of data. Aimed at experimentalists with only high-school level mathematics, as well as computationally-oriented neuroscientists who have limited familiarity with statistics, Analysis of Neural Data serves as both a self-contained introduction and a reference work.

  2. Deep Neural Yodelling

    OpenAIRE

    Pfäffli, Daniel (Autor/in)

    2018-01-01

Yodel music differs from most other genres by exercising the transition from chest voice to falsetto with an audible glottal stop which is recognised even by laymen. Yodel often consists of a yodeller with a choir accompaniment. In Switzerland, a distinction is made between the natural yodel and yodel songs. Today's approaches to music generation with machine learning algorithms are based on neural networks, which are best described as stacked layers of neurons which are connected with neurons...

  3. Neural networks for triggering

    International Nuclear Information System (INIS)

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab

  4. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

This book covers theoretical aspects as well as recent innovative applications of Artificial Neural networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  5. Rotation Invariance Neural Network

    OpenAIRE

    Li, Shiyuan

    2017-01-01

Rotation invariance and translation invariance have great value in image recognition tasks. In this paper, we introduce a new architecture for convolutional neural networks (CNN), named the cyclic convolutional layer, to achieve rotation invariance in 2-D symbol recognition. We can also obtain the position and orientation of the 2-D symbol from the network, achieving detection of multiple non-overlapping targets. Last but not least, this architecture can achieve one-shot learning in some cases using thos...

  6. Neural Mechanisms of Foraging

    OpenAIRE

    Kolling, Nils; Behrens, Timothy EJ; Mars, Rogier B; Rushworth, Matthew FS

    2012-01-01

Behavioural economic studies, involving limited numbers of choices, have provided key insights into neural decision-making mechanisms. By contrast, animals’ foraging choices arise in the context of sequences of encounters with prey/food. On each encounter the animal chooses whether to engage or whether the environment is sufficiently rich that searching elsewhere is merited. The cost of foraging is also critical. We demonstrate that humans can alternate between two modes of choice, comparative decision-ma...

  7. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  8. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

11th International Conference on Fast Sea Transportation (FAST 2011), Honolulu, Hawaii, USA, September 2011. The parametric model is based on a Trimaran Resistance Artificial Neural Network and is restricted to the center and side-hull configurations tested; its value is that it is able to...

  9. Optics in neural computation

    Science.gov (United States)

    Levene, Michael John

In all attempts to emulate the considerable powers of the brain, one is struck by its immense size, parallelism, and complexity. While the fields of neural networks, artificial intelligence, and neuromorphic engineering have all attempted oversimplifications of this considerable complexity, all three can benefit from the inherent scalability and parallelism of optics. This thesis looks at specific aspects of three modes in which optics, and particularly volume holography, can play a part in neural computation. First, holography serves as the basis of highly-parallel correlators, which are the foundation of optical neural networks. The huge input capability of optical neural networks makes them most useful for image processing and image recognition and tracking. These tasks benefit from the shift invariance of optical correlators. In this thesis, I analyze the capacity of correlators, and then present several techniques for controlling the amount of shift invariance. Of particular interest is the Fresnel correlator, in which the hologram is displaced from the Fourier plane. In this case, the amount of shift invariance is limited not just by the thickness of the hologram, but by the distance of the hologram from the Fourier plane. Second, volume holography can provide the huge storage capacity and high-speed, parallel read-out necessary to support large artificial intelligence systems. However, previous methods for storing data in volume holograms have relied on awkward beam-steering or on as-yet nonexistent cheap, wide-bandwidth, tunable laser sources. This thesis presents a new technique, shift multiplexing, which is capable of very high densities, but which has the advantage of a very simple implementation. In shift multiplexing, the reference wave consists of a focused spot a few millimeters in front of the hologram. Multiplexing is achieved by simply translating the hologram a few tens of microns or less. This thesis describes the theory for how shift...

  10. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  11. Neural Synchronization and Cryptography

    Science.gov (United States)

    Ruttor, Andreas

    2007-11-01

    Neural networks can synchronize by learning from each other. In the case of discrete weights full synchronization is achieved in a finite number of steps. Additional networks can be trained by using the inputs and outputs generated during this process as examples. Several learning rules for both tasks are presented and analyzed. In the case of Tree Parity Machines synchronization is much faster than learning. Scaling laws for the number of steps needed for full synchronization and successful learning are derived using analytical models. They indicate that the difference between both processes can be controlled by changing the synaptic depth. In the case of bidirectional interaction the synchronization time increases proportional to the square of this parameter, but it grows exponentially, if information is transmitted in one direction only. Because of this effect neural synchronization can be used to construct a cryptographic key-exchange protocol. Here the partners benefit from mutual interaction, so that a passive attacker is usually unable to learn the generated key in time. The success probabilities of different attack methods are determined by numerical simulations and scaling laws are derived from the data. They show that the partners can reach any desired level of security by just increasing the synaptic depth. Then the complexity of a successful attack grows exponentially, but there is only a polynomial increase of the effort needed to generate a key. Further improvements of security are possible by replacing the random inputs with queries generated by the partners.
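The key-exchange mechanism described here can be sketched in a few lines. Below is a toy simulation of two Tree Parity Machines synchronizing by mutual Hebbian learning; the parameters K, N, L, the tie-breaking convention, and the update rule variant are illustrative choices, not the exact setup analyzed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 10, 3  # hidden units, inputs per unit, synaptic depth

def tpm_output(w, x):
    """Hidden-unit signs and overall parity output of a Tree Parity Machine."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1          # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Update only hidden units that agree with the output; clip to [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + x[k] * tau, -L, L)

A = rng.integers(-L, L + 1, size=(K, N))   # party A's secret weights
B = rng.integers(-L, L + 1, size=(K, N))   # party B's secret weights

steps = 0
while steps < 100_000 and not np.array_equal(A, B):
    x = rng.choice([-1, 1], size=(K, N))   # common public input
    sA, tA = tpm_output(A, x)
    sB, tB = tpm_output(B, x)
    if tA == tB:                           # learn only when outputs agree
        hebbian_update(A, x, sA, tA)
        hebbian_update(B, x, sB, tB)
    steps += 1

print("synchronized after", steps, "mutual steps")
```

As the abstract notes, security rests on the synaptic depth L: mutual synchronization time grows only polynomially in L, while a unidirectional attacker's effort grows exponentially.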

  12. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  13. Neural networks at the Tevatron

    International Nuclear Information System (INIS)

    Badgett, W.; Burkett, K.; Campbell, M.K.; Wu, D.Y.; Bianchin, S.; DeNardi, M.; Pauletta, G.; Santi, L.; Caner, A.; Denby, B.; Haggerty, H.; Lindsey, C.S.; Wainer, N.; Dall'Agata, M.; Johns, K.; Dickson, M.; Stanco, L.; Wyss, J.L.

    1992-10-01

This paper summarizes neural network applications at the Fermilab Tevatron, including the first online hardware application in high energy physics (muon tracking): the CDF and D0 neural network triggers; offline quark/gluon discrimination at CDF; and a new tool for top-to-multijets recognition at CDF

  14. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  15. Neural fields theory and applications

    CERN Document Server

    Graben, Peter; Potthast, Roland; Wright, James

    2014-01-01

    With this book, the editors present the first comprehensive collection in neural field studies, authored by leading scientists in the field - among them are two of the founding-fathers of neural field theory. Up to now, research results in the field have been disseminated across a number of distinct journals from mathematics, computational neuroscience, biophysics, cognitive science and others. Starting with a tutorial for novices in neural field studies, the book comprises chapters on emergent patterns, their phase transitions and evolution, on stochastic approaches, cortical development, cognition, robotics and computation, large-scale numerical simulations, the coupling of neural fields to the electroencephalogram and phase transitions in anesthesia. The intended readership are students and scientists in applied mathematics, theoretical physics, theoretical biology, and computational neuroscience. Neural field theory and its applications have a long-standing tradition in the mathematical and computational ...

  16. Artificial neural networks in NDT

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed

    2001-01-01

Artificial neural networks, simply known as neural networks, have attracted considerable interest in recent years, largely because of a growing recognition of the potential of these computational paradigms as powerful alternatives to conventional pattern recognition or function approximation techniques. The neural network approach is having a profound effect on almost all fields, and has been utilised in fields where experimental inter-disciplinary work is being carried out. Being a multidisciplinary subject with a broad knowledge base, Nondestructive Testing (NDT) or Nondestructive Evaluation (NDE) is no exception. This paper explains typical applications of neural networks in NDT/NDE. Three promising types of neural networks are highlighted, namely, back-propagation, binary Hopfield, and Kohonen's self-organising maps. (Author)

  17. Interacting neural networks

    Science.gov (United States)

    Metzler, R.; Kinzel, W.; Kanter, I.

    2000-08-01

Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the used training algorithm are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as the decision-making algorithm in a model of a closed market (the El Farol Bar problem, or the Minority Game, in which a set of agents must each make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random.
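The closed-market setup can be sketched as a toy Minority Game with one perceptron per agent, all trained on the shared history of minority decisions. The learning rate and the simple Hebbian update toward the minority side are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, memory, T = 11, 5, 2000   # odd agent count prevents ties

w = rng.normal(size=(n_agents, memory))         # one perceptron per agent
history = rng.choice([-1.0, 1.0], size=memory)  # recent minority decisions
lr = 0.01
losses = []                                     # majority-side count per round

for t in range(T):
    bids = np.sign(w @ history)
    bids[bids == 0] = 1
    total = bids.sum()                 # odd number of +/-1 bids: never zero
    minority = -np.sign(total)         # the winning (minority) side
    losses.append(int(np.sum(bids != minority)))
    # every perceptron learns to predict the minority decision; since all
    # agents get the same update, their initial diversity is preserved
    w += lr * minority * history
    history = np.roll(history, -1)
    history[-1] = minority
```

With 11 agents the majority (losing) side always holds at least 6 of them; the interesting quantity is how close the average loser count stays to that floor.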

  18. Neural circuitry and immunity

    Science.gov (United States)

    Pavlov, Valentin A.; Tracey, Kevin J.

    2015-01-01

Research during the last decade has significantly advanced our understanding of the molecular mechanisms at the interface between the nervous system and the immune system. Insight into bidirectional neuroimmune communication has characterized the nervous system as an important partner of the immune system in the regulation of inflammation. Neuronal pathways, including the vagus nerve-based inflammatory reflex, are physiological regulators of immune function and inflammation. In parallel, neuronal function is altered in conditions characterized by immune dysregulation and inflammation. Here, we review these regulatory mechanisms and describe the neural circuitry modulating immunity. Understanding these mechanisms reveals possibilities to use targeted neuromodulation as a therapeutic approach for inflammatory and autoimmune disorders. These findings and the current clinical exploration of neuromodulation in the treatment of inflammatory diseases define the emerging field of Bioelectronic Medicine. PMID:26512000

  19. Neural Darwinism and consciousness.

    Science.gov (United States)

    Seth, Anil K; Baars, Bernard J

    2005-03-01

    Neural Darwinism (ND) is a large scale selectionist theory of brain development and function that has been hypothesized to relate to consciousness. According to ND, consciousness is entailed by reentrant interactions among neuronal populations in the thalamocortical system (the 'dynamic core'). These interactions, which permit high-order discriminations among possible core states, confer selective advantages on organisms possessing them by linking current perceptual events to a past history of value-dependent learning. Here, we assess the consistency of ND with 16 widely recognized properties of consciousness, both physiological (for example, consciousness is associated with widespread, relatively fast, low amplitude interactions in the thalamocortical system), and phenomenal (for example, consciousness involves the existence of a private flow of events available only to the experiencing subject). While no theory accounts fully for all of these properties at present, we find that ND and its recent extensions fare well.

  20. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
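The generalized delta rule that NNETS implements can be sketched in NumPy. The XOR task, layer sizes, and learning rate below are illustrative choices rather than NNETS defaults, and the original ran on Transputers in OCCAM rather than in Python:

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR: the classic demonstration problem for back-propagation
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(a):
    """Append a constant-1 column so each layer has a bias weight."""
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(size=(3, 4))   # 2 inputs + bias -> 4 hidden units
W2 = rng.normal(size=(5, 1))   # 4 hidden + bias -> 1 output
lr = 0.5

loss_before = float(np.mean((sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2) - y) ** 2))

for _ in range(20000):
    h = sigmoid(add_bias(X) @ W1)
    out = sigmoid(add_bias(h) @ W2)
    # generalized delta rule: delta = error * derivative of the activation
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2[:-1].T) * h * (1 - h)   # skip the bias row of W2
    W2 -= lr * add_bias(h).T @ d_out
    W1 -= lr * add_bias(X).T @ d_h

out = sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2)
loss_after = float(np.mean((out - y) ** 2))
pred = (out > 0.5).astype(int)
```

The same delta rule generalizes to the Jordan and reinforcement networks NNETS predefines; only the connectivity pattern changes.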

  1. Artificial Neural Network Analysis System

    Science.gov (United States)

    2001-02-27

Contract No. DASG60-00-M-0201; purchase request no.: Foot in the Door-01. Title: Artificial Neural Network Analysis System. Company: Atlantic... Author: Powell, Bruce C. Report date: 27-02-2001; dates covered: 28-10-2000 to 27-02-2001.

  2. Cooperating attackers in neural cryptography.

    Science.gov (United States)

    Shacham, Lanir N; Klein, Einat; Mislovaty, Rachel; Kanter, Ido; Kinzel, Wolfgang

    2004-06-01

A successful attack strategy in neural cryptography is presented. The neural cryptosystem, based on synchronization of neural networks by mutual learning, has been recently shown to be secure under different attack strategies. The success of the advanced attacker presented here, called the "majority-flipping attacker," does not decay with the parameters of the model. This attacker's outstanding success is due to its use of a group of attackers which cooperate throughout the synchronization process, unlike any other known attack strategy. An analytical description of this attack is also presented, and it fits the results of simulations.

  3. Creative-Dynamics Approach To Neural Intelligence

    Science.gov (United States)

    Zak, Michail A.

    1992-01-01

    Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.

  4. Neural components of altruistic punishment

    Directory of Open Access Journals (Sweden)

Emily Du

    2015-02-01

Altruistic punishment, which occurs when an individual incurs a cost to punish in response to unfairness or a norm violation, may play a role in perpetuating cooperation. The neural correlates underlying costly punishment have only recently begun to be explored. Here we review the current state of research on the neural basis of altruism from the perspective of costly punishment, emphasizing the importance of characterizing elementary neural processes underlying a decision to punish. In particular, we emphasize three cognitive processes that contribute to the decision to altruistically punish in most scenarios: inequity aversion, cost-benefit calculation, and a social reference frame to distinguish self from others. Overall, we argue for the importance of understanding the neural correlates of altruistic punishment with respect to the core computations necessary to achieve a decision to punish.

  5. Neural complexity, dissociation, and schizophrenia

    Czech Academy of Sciences Publication Activity Database

    Bob, P.; Šusta, M.; Chládek, Jan; Glaslová, K.; Fedor-Ferybergh, P.

    2007-01-01

Vol. 13, No. 10 (2007), HY1-5. ISSN 1234-1010. Institutional research plan: CEZ:AV0Z20650511. Keywords: neural complexity * dissociation * schizophrenia. Subject RIV: FH - Neurology. Impact factor: 1.607, year: 2007

  6. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all in a recursive form (sample updating). The simplest is the Back-Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  7. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  8. Artificial intelligence: Deep neural reasoning

    Science.gov (United States)

    Jaeger, Herbert

    2016-10-01

    The human brain can solve highly abstract reasoning problems using a neural network that is entirely physical. The underlying mechanisms are only partially understood, but an artificial network provides valuable insight. See Article p.471

  9. Optical Neural Network Classifier Architectures

    National Research Council Canada - National Science Library

    Getbehead, Mark

    1998-01-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and classification of high-dimensional data for Air...

  10. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)
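The history-dependent resistance that makes a memristor synapse-like can be illustrated with the linear dopant-drift model; the parameter values below are rough, textbook-style numbers, not taken from this review:

```python
import numpy as np

# Linear dopant-drift memristor model: the resistance depends on an
# internal state w in [0, 1] that integrates the applied current --
# a history-dependent "synaptic weight".
R_on, R_off = 100.0, 16e3   # fully doped / undoped resistance (ohm)
D = 10e-9                   # device thickness (m)
mu = 1e-14                  # dopant mobility (m^2 s^-1 V^-1)
dt = 1e-6                   # integration step (s)

def simulate(voltage, steps, w0=0.5):
    """Return the resistance after applying a constant voltage."""
    w = w0
    for _ in range(steps):
        R = R_on * w + R_off * (1 - w)
        i = voltage / R
        w += mu * R_on / D**2 * i * dt   # state drifts with charge flow
        w = min(max(w, 0.0), 1.0)        # dopant front stays in the device
    return R_on * w + R_off * (1 - w)

R_after_pos = simulate(+1.0, 5000)   # positive bias lowers resistance
R_after_neg = simulate(-1.0, 5000)   # negative bias raises it
```

Because the final resistance depends on the polarity and duration of past stimulation, repeated pulses potentiate or depress the device much like long-term potentiation and depression at a biological synapse.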

  11. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks, which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...
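The deterministic/stochastic layering can be sketched as a single generative step: a deterministic RNN state feeds a Gaussian latent variable, and both drive the emission. The layer sizes, tanh transition, and linear emission below are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

n_d, n_z, n_x = 8, 4, 2   # deterministic state, latent, observation sizes
Wd = rng.normal(scale=0.3, size=(n_d, n_d + n_x))
Wmu = rng.normal(scale=0.3, size=(n_z, n_d))
Wsig = rng.normal(scale=0.3, size=(n_z, n_d))
Wx = rng.normal(scale=0.3, size=(n_x, n_d + n_z))

def step(d, x_prev):
    """One generative step: deterministic layer, then stochastic layer."""
    d_new = np.tanh(Wd @ np.concatenate([d, x_prev]))   # deterministic RNN
    mu, log_sig = Wmu @ d_new, Wsig @ d_new
    z = mu + np.exp(log_sig) * rng.normal(size=n_z)     # Gaussian latent
    x = Wx @ np.concatenate([d_new, z])                 # emission
    return d_new, x

d, x = np.zeros(n_d), np.zeros(n_x)
seq = []
for _ in range(5):
    d, x = step(d, x)
    seq.append(x)
```

Keeping the stochastic layer separate from the deterministic recursion is what lets a structured inference network mirror the model's posterior factorization during training.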

  12. Implantable Neural Interfaces for Sharks

    Science.gov (United States)

    2007-05-01

Technology for recording and stimulating from the auditory and olfactory sensory nervous systems of the awake, swimming nurse shark, G. cirratum; figures include an overlay of the central nervous system of the nurse shark on a horizontal MR image. Related abstract: "Neural Interfaces for Characterizing Population Responses to Odorants and Electrical Stimuli in the Nurse Shark, Ginglymostoma cirratum," AChemS Abs.

  13. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: 2008-Feb

  14. Neural correlates of hate.

    Directory of Open Access Journals (Sweden)

    Semir Zeki

In this work, we address an important but unexplored topic, namely the neural correlates of hate. In a block-design fMRI study, we scanned 17 normal human subjects while they viewed the face of a person they hated and also faces of acquaintances for whom they had neutral feelings. A hate score was obtained for the object of hate for each subject and this was used as a covariate in a between-subject random effects analysis. Viewing a hated face resulted in increased activity in the medial frontal gyrus, right putamen, bilaterally in premotor cortex, in the frontal pole and bilaterally in the medial insula. We also found three areas where activation correlated linearly with the declared level of hatred, the right insula, right premotor cortex and the right fronto-medial gyrus. One area of deactivation was found in the right superior frontal gyrus. The study thus shows that there is a unique pattern of activity in the brain in the context of hate. Though distinct from the pattern of activity that correlates with romantic love, this pattern nevertheless shares two areas with the latter, namely the putamen and the insula.

  15. Neural control system

    International Nuclear Information System (INIS)

    Elshazly, A.A.E.

    2002-01-01

Automatic power stabilization control is the desired objective for any reactor operation, especially nuclear power plants. A major problem in this area is the inevitable gap between a real plant and the theory of conventional analysis and synthesis of linear time-invariant systems. In particular, the trajectory-tracking control of a nonlinear plant is a class of problems in which the classical linear transfer function methods break down, because no transfer function can represent the system over the entire operating region. There is a considerable amount of research on the model-inverse approach using the feedback linearization technique. However, this method requires a precise plant model to implement the exact linearizing feedback; for nuclear reactor systems this approach is not an easy task because of the uncertainty in the plant parameters and un-measurable state variables. Therefore, an artificial neural network (ANN) is used either in self-tuning control or in improving a conventional rule-based expert system. The main objective of this thesis is to suggest an ANN-based self-learning controller structure. This method is capable of on-line reinforcement learning and control for a nuclear reactor with a totally unknown dynamics model. Previous research is based on the back-propagation algorithm. Back-propagation (BP), fast back-propagation (FBP), and Levenberg-Marquardt (LM) algorithms are discussed and compared for reinforcement learning. It is found that the LM algorithm is quite superior

  16. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
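The qualitative effect of SFA on a ring network can be sketched with a simple firing-rate model: with adaptation switched on, the sustained response to a bump-shaped input is reduced while its location is preserved. The cosine connectivity and all parameter values are illustrative assumptions, not those of the paper:

```python
import numpy as np

N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
# Ring connectivity: local excitation plus broad inhibition (cosine profile)
W = (np.cos(theta[:, None] - theta[None, :]) - 0.5) / N

def simulate(T=3000, dt=0.1, tau=10.0, tau_a=100.0, g=0.3, with_sfa=True):
    """Rate dynamics on the ring, with optional spike-frequency adaptation."""
    r = np.zeros(N)                # firing rates
    a = np.zeros(N)                # adaptation variable (slow, rate-driven)
    stim = 2.0 * np.exp(np.cos(theta - np.pi))   # bump-shaped external input
    for _ in range(T):
        drive = W @ r + stim - (a if with_sfa else 0.0)
        r += dt / tau * (-r + np.maximum(drive, 0.0))
        a += dt / tau_a * (-a + g * r)           # inhibition grows with activity
    return r

r_sfa = simulate(with_sfa=True)
r_no = simulate(with_sfa=False)
```

The adaptation current subtracts an activity-dependent term from the drive, so the equilibrium bump is lower with SFA, while the position of the peak, i.e. the encoded stimulus estimate, is unchanged.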

  17. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  18. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as a new variable, we can express the learning dynamics as if the new variables were Ising spins interacting with each other, as in magnetism. We present a theoretical method to estimate the interaction between these variables in a neural system. We apply the method to some network systems and identify some tendencies of autonomous neural network regulation.
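    The imbalance mechanism can be caricatured in a few lines. In this hypothetical sketch (the rule and all constants are illustrative, not taken from the paper), neuron a reliably fires a few milliseconds before neuron b, so a spike-timing-asymmetric Hebbian rule potentiates a→b while depressing b→a until the reciprocal pair is unidirectional; the difference s = w_ab − w_ba then behaves like an Ising spin pinned at +1:

```python
import math

# Spike-timing-asymmetric Hebbian rule (illustrative constants): when the
# presynaptic neuron fires before the postsynaptic one, the forward weight
# is potentiated and the reverse weight depressed.
eta, tau = 0.05, 20.0          # learning rate, plasticity window (ms)
w_ab, w_ba = 0.5, 0.5          # balanced reciprocal weights in [0, 1]

for _ in range(100):
    dt = 5.0                   # neuron a leads neuron b by 5 ms every trial
    dw = eta * math.exp(-dt / tau)
    w_ab = min(w_ab + dw, 1.0)     # a -> b potentiated (pre before post)
    w_ba = max(w_ba - dw, 0.0)     # b -> a depressed  (post before pre)

s = w_ab - w_ba    # the "Ising spin": driven to +1, i.e. a unidirectional pair
```

    With truly random (autonomous) firings the lead neuron varies from trial to trial, and which sign s settles on becomes a stochastic, interaction-dependent outcome, which is what the paper analyzes.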

  19. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad yet in-depth introduction to neural networks and machine learning in a statistical framework, this book serves as a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered, with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  20. Neural networks in signal processing

    International Nuclear Information System (INIS)

    Govil, R.

    2000-01-01

    Nuclear engineering has matured during the last decade. In research and design, control, supervision, maintenance and production, mathematical models and theories are used extensively. In all such applications, signal processing is embedded in the process. Artificial neural networks (ANNs), because of their nonlinear, adaptive nature, are well suited to applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques that can model an underlying process from example data, and they can adapt their model parameters to statistical changes over time. Algorithms in the framework of neural networks in signal processing have found new application potential in the field of nuclear engineering. This paper reviews the fundamentals of neural networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigenstructure-based processing and generalization structures. (orig.)

  1. Principles of neural information processing

    CERN Document Server

    Seelen, Werner v

    2016-01-01

    In this fundamental book the authors devise a framework that describes the working of the brain as a whole. It presents a comprehensive introduction to the principles of Neural Information Processing as well as recent and authoritative research. The book's guiding principles are the main purpose of neural activity, namely to organize behavior so as to ensure survival, and the understanding of the evolutionary genesis of the brain. The principles and strategies developed include the self-organization of neural systems, flexibility, the active interpretation of the world by means of construction and prediction, and the embedding of neural systems in the world, all of which form the framework of the presented description. Since, in brains, partial self-organization, lifelong adaptation and the use of various methods of processing incoming information are all interconnected, the authors have chosen not only neurobiology and evolution theory as a basis for the elaboration of such a framework, but also syst...

  2. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  3. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

    Full Text Available In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes go into saturation. The early creation of such saturated nodes may impair generalisation. Hence an entropy approach is proposed to dampen their early creation. Entropy learning also helps to increase the importance of relevant nodes while dampening the less important ones. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
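    To make the idea concrete, here is a hypothetical numerical sketch (not the authors' implementation): for a sigmoid hidden node, the binary entropy of its activation is lowest when the node saturates, so ascending the entropy term pushes a saturated pre-activation back toward the unsaturated region:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_entropy(h, eps=1e-12):
    h = np.clip(h, eps, 1 - eps)
    return -(h * np.log(h) + (1 - h) * np.log(1 - h))

# A saturated sigmoid hidden node: h near 1, entropy near 0.
z = 4.0
h_start = sigmoid(z)

# Gradient ascent on the entropy term alone (the task loss is omitted for
# clarity). For h = sigmoid(z), dH/dz = h * (1 - h) * log((1 - h) / h).
lr = 1.0                         # illustrative step size
for _ in range(100):
    h = sigmoid(z)
    z += lr * h * (1 - h) * np.log((1 - h) / h)

h_end = sigmoid(z)   # pulled back toward the unsaturated region around 0.5
```

    In an actual training loop this entropy gradient would be weighted and added to the task-loss gradient, so saturation is dampened rather than eliminated outright.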

  4. The neural cell adhesion molecule

    DEFF Research Database (Denmark)

    Berezin, V; Bock, E; Poulsen, F M

    2000-01-01

    During the past year, the understanding of the structure and function of neural cell adhesion has advanced considerably. The three-dimensional structures of several of the individual modules of the neural cell adhesion molecule (NCAM) have been determined, as well as the structure of the complex...... between two identical fragments of the NCAM. Also during the past year, a link between homophilic cell adhesion and several signal transduction pathways has been proposed, connecting the event of cell surface adhesion to cellular responses such as neurite outgrowth. Finally, the stimulation of neurite...

  5. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  6. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... an OCR using Neural Network classifier preceded by a set of preprocessing .... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantage and disadvantages of each technique. In [9],. Khemiri ...

  7. Neural overlap in processing music and speech.

    Science.gov (United States)

    Peretz, Isabelle; Vuvan, Dominique; Lagrois, Marie-Élaine; Armony, Jorge L

    2015-03-19

    Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  8. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the neural network attractive. A neural network is an information processing system modeled on the structure of the dynamic process. It can solve the complex/nonlinear problems quickly once trained by operating on problems using an interconnected number...

  9. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely recurrent neural network with rprop update algorithm and is applied for wave forecasting. Measured ocean waves off...

  10. Neural overlap in processing music and speech

    Science.gov (United States)

    Peretz, Isabelle; Vuvan, Dominique; Lagrois, Marie-Élaine; Armony, Jorge L.

    2015-01-01

    Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. PMID:25646513

  11. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. The paper is addressed to investigators in visual pattern recognition, Artificial Neural Networks and related disciplines. The document also describes the MemBrain application environment as a powerful and easy-to-use neural network editor and simulator supporting ANNs.

  12. Neural network to diagnose lining condition

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at iron and steel works. The authors describe the neural network structure and software that were designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented, and the authors note their low learning and classification errors. Specialized software has been developed to realize the proposed neural network.

  13. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalman predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  14. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  15. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  16. Recycling signals in the neural crest

    OpenAIRE

    Taneyhill, Lisa A.; Bronner-Fraser, Marianne E.

    2006-01-01

    Vertebrate neural crest cells are multipotent and differentiate into structures that include cartilage and the bones of the face, as well as much of the peripheral nervous system. Understanding how different model vertebrates utilize signaling pathways reiteratively during various stages of neural crest formation and differentiation lends insight into human disorders associated with the neural crest.

  17. Recycling signals in the neural crest.

    Science.gov (United States)

    Taneyhill, Lisa A; Bronner-Fraser, Marianne

    2005-01-01

    Vertebrate neural crest cells are multipotent and differentiate into structures that include cartilage and the bones of the face, as well as much of the peripheral nervous system. Understanding how different model vertebrates utilize signaling pathways reiteratively during various stages of neural crest formation and differentiation lends insight into human disorders associated with the neural crest.

  18. Neural chips, neural computers and application in high and superhigh energy physics experiments

    International Nuclear Information System (INIS)

    Nikityuk, N.M.; )

    2001-01-01

    The architectural peculiarities and characteristics of a series of neural chips and neural computers used in scientific instruments are considered. Trends in their development and use in high-energy and superhigh-energy physics experiments are described. Comparative data are given that characterize the efficient use of neural chips for useful event selection, classification of elementary particles, reconstruction of charged-particle tracks and the search for hypothetical Higgs particles. The characteristics of native neural chips and accelerated neural boards are considered [ru

  19. Medical Imaging with Neural Networks

    International Nuclear Information System (INIS)

    Pattichis, C.; Cnstantinides, A.

    1994-01-01

    The objective of this paper is to provide an overview of the recent developments in the use of artificial neural networks in medical imaging. The areas of medical imaging that are covered include : ultrasound, magnetic resonance, nuclear medicine and radiological (including computerized tomography). (authors)

  20. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    neural networks, such as learning, adapting and copying by means of parallel ... to provide robust recognition of hand-printed English text. Engine idle and misfiring .... and s represents the bounded activation function of a neuron. It is typically ...

  1. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    A web-based software model (http://fuzzy.iau.dtu.dk/aphasia.nsf) was developed as an example for classification of aphasia using neural networks. Two multilayer perceptrons were used to classify the type of aphasia (Broca, Wernicke, anomic, global) according to the results in some subtests...

  2. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, the artificial neural network (ANN) has made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using a genetic algorithm (GA) to optimize the topology of the ANN, but they suffer from some limitations: (1) the computation time required to train the ANN several times to reach the required average weights; (2) the slowness of the GA optimization process; and (3) fitness noise in the optimization of the ANN. This research suggests new approaches to overcome these limitations and find optimal neural network architectures for learning particular problems. The proposed methodology is used to develop a diagnostic neural network system, which has been applied to a 600 MW turbo-generator as a case of real complex systems. The proposed system has proved its significant performance compared to two common methods used in diagnostic applications.
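    The GA-based topology search being improved upon can be sketched as a toy loop (hypothetical code; the fitness function is a stand-in, whereas a real system would train and validate each candidate ANN, which is exactly the costly step the abstract criticizes):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical GA over a single architectural gene: the hidden-layer width.
# Stand-in fitness pretends the best validation score occurs at 12 units.
def fitness(h):
    return -(h - 12) ** 2

pop = rng.integers(1, 65, size=20)                    # widths in 1..64
for _ in range(30):                                   # generations
    scores = np.array([fitness(h) for h in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]      # truncation selection
    children = np.clip(parents + rng.integers(-2, 3, size=10), 1, 64)
    pop = np.concatenate([parents, children])         # elitism: parents survive
best = int(pop[np.argmax([fitness(h) for h in pop])])
```

    Replacing `fitness` with "train the ANN and measure validation error" recovers the classical (slow, noisy) GA topology search.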

  3. Medical Imaging with Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Pattichis, C [Department of Computer Science, University of Cyprus, Kallipoleos 75, P.O.Box 537, Nicosia (Cyprus); Cnstantinides, A [Department of Electrical Engineering, Imperial College of Science, Technology and Medicine, London SW7 2BT (United Kingdom)

    1994-12-31

    The objective of this paper is to provide an overview of the recent developments in the use of artificial neural networks in medical imaging. The areas of medical imaging that are covered include : ultrasound, magnetic resonance, nuclear medicine and radiological (including computerized tomography). (authors). 61 refs, 4 tabs.

  4. Numerical experiments with neural networks

    International Nuclear Information System (INIS)

    Miranda, Enrique.

    1990-01-01

    Neural networks are highly idealized models which, in spite of their simplicity, reproduce some key features of the real brain. In this paper, they are introduced at a level adequate for an undergraduate computational physics course. Some relevant magnitudes are defined and evaluated numerically for the Hopfield model and a short-term memory model. (Author)

  5. Serotonin, neural markers and memory

    Directory of Open Access Journals (Sweden)

    Alfredo eMeneses

    2015-07-01

    Full Text Available Diverse neuropsychiatric disorders present dysfunctional memory, and no effective treatment exists for them, likely as a result of the absence of neural markers associated with memory. Neurotransmitter systems and signaling pathways have been implicated in memory and dysfunctional memory; however, their role is poorly understood. Hence, neural markers and cerebral functions and dysfunctions are reviewed. To our knowledge no previous systematic works have been published addressing these issues. The interactions among behavioral tasks, control groups and molecular changes and/or pharmacological effects are mentioned. Neurotransmitter receptors and signaling pathways during normal and abnormally functioning memory are reviewed, with an emphasis on the behavioral aspects of memory and a focus on serotonin, since it is a well-characterized neurotransmitter with multiple pharmacological tools and well-characterized downstream signaling in mammalian species. The 5-HT1A, 5-HT4, 5-HT5, 5-HT6 and 5-HT7 receptors as well as SERT (the serotonin transporter) seem to be useful neural markers and/or therapeutic targets. Certainly, if the mentioned evidence is replicated, then the translatability from preclinical and clinical studies to neural changes might be confirmed. Hypotheses and theories might provide appropriate limits and perspectives of evidence

  6. Neural correlates of viewing paintings

    DEFF Research Database (Denmark)

    Vartanian, Oshin; Skov, Martin

    2014-01-01

    Many studies involving functional magnetic resonance imaging (fMRI) have exposed participants to paintings under varying task demands. To isolate neural systems that are activated reliably across fMRI studies in response to viewing paintings regardless of variation in task demands, a quantitative...

  7. Neural Basis of Visual Distraction

    Science.gov (United States)

    Kim, So-Yeon; Hopfinger, Joseph B.

    2010-01-01

    The ability to maintain focus and avoid distraction by goal-irrelevant stimuli is critical for performing many tasks and may be a key deficit in attention-related problems. Recent studies have demonstrated that irrelevant stimuli that are consciously perceived may be filtered out on a neural level and not cause the distraction triggered by…

  8. Vestibular hearing and neural synchronization.

    Science.gov (United States)

    Emami, Seyede Faranak; Daneshi, Ahmad

    2012-01-01

    Objectives. Vestibular hearing, an auditory sensitivity of the saccule in the human ear, is revealed by cervical vestibular evoked myogenic potentials (cVEMPs). The range of vestibular hearing lies in the low frequencies. Also, the amplitude of an auditory brainstem response component depends on the amount of synchronized neural activity, and the auditory nerve fibers' responses have the best synchronization with low frequencies. Thus, the aim of this study was to investigate the correlation between vestibular hearing, using cVEMPs, and neural synchronization, via slow-wave Auditory Brainstem Responses (sABR). Study Design. This case-control survey consisted of twenty-two dizzy patients compared to twenty healthy controls. Methods. The intervention comprised Pure Tone Audiometry (PTA), Impedance acoustic metry (IA), Videonystagmography (VNG), fast-wave ABR (fABR), sABR, and cVEMPs. Results. The affected ears of the dizzy patients had abnormal cVEMP findings (insecure vestibular hearing) and abnormal sABR findings (decreased neural synchronization). Comparison of the cVEMPs at affected ears versus unaffected ears and the normal persons revealed significant differences (P < 0.05). Conclusion. Safe vestibular hearing was effective in the improvement of neural synchronization.

  9. Spin glasses and neural networks

    International Nuclear Information System (INIS)

    Parga, N.; Universidad Nacional de Cuyo, San Carlos de Bariloche

    1989-01-01

    The mean-field theory of spin glass models has been used as a prototype of systems with frustration and disorder. Among the most interesting related systems are models of associative memory. In these lectures we review the main concepts developed to solve the Sherrington-Kirkpatrick model and their application to neural networks. (orig.)

  10. Non-invasive neural stimulation

    Science.gov (United States)

    Tyler, William J.; Sanguinetti, Joseph L.; Fini, Maria; Hool, Nicholas

    2017-05-01

    Neurotechnologies for non-invasively interfacing with neural circuits have been evolving from those capable of sensing neural activity to those capable of restoring and enhancing human brain function. Generally referred to as non-invasive neural stimulation (NINS) methods, these neuromodulation approaches rely on electrical, magnetic, photonic, and acoustic or ultrasonic energy to influence nervous system activity, brain function, and behavior. Evidence that has been accumulating for decades shows that advanced neural engineering of NINS technologies will indeed transform the way humans treat diseases, interact with information, communicate, and learn. The physics underlying the ability of various NINS methods to modulate nervous system activity can be quite different from one another depending on the energy modality used, as we briefly discuss. For members of commercial and defense industry sectors that have not traditionally engaged in neuroscience research and development, the science, engineering and technology required to advance NINS methods beyond the state-of-the-art present tremendous opportunities. Within the past few years alone there have been large increases in global investments made by federal agencies, foundations, private investors and multinational corporations to develop advanced applications of NINS technologies. Driven by these efforts, NINS methods and devices have recently been introduced to mass markets via the consumer electronics industry. Further, NINS continues to be explored in a growing number of defense applications focused on enhancing human dimensions. The present paper provides a brief introduction to the field of non-invasive neural stimulation by highlighting some of the more common methods in use or under current development today.

  11. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
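    The perceptron learning rule mentioned in lecture 2 can be demonstrated in a few lines (a hedged sketch, not code from the tutorial) on the linearly separable AND problem:

```python
import numpy as np

# Rosenblatt perceptron rule on the linearly separable AND problem
# (an illustrative sketch; the data and constants are not from the tutorial).
X = np.array([[1, 0, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)   # first column is the bias input
y = np.array([0, 0, 0, 1])               # AND of the last two columns

w = np.zeros(3)
lr = 1.0
for _ in range(20):                      # epochs; AND converges in a handful
    for xi, ti in zip(X, y):
        pred = 1 if w @ xi > 0 else 0
        w += lr * (ti - pred) * xi       # update only on mistakes

preds = (X @ w > 0).astype(int)          # all four patterns now classified
```

    The perceptron convergence theorem guarantees this loop stops making mistakes on any linearly separable problem; on non-separable data (e.g. XOR) it cycles forever, which is what motivates the multilayer networks covered later in the tutorial.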

  12. Parameterization Of Solar Radiation Using Neural Network

    International Nuclear Information System (INIS)

    Jiya, J. D.; Alfa, B.

    2002-01-01

    This paper presents a neural network technique for parameterization of global solar radiation. The available data from twenty-one stations is used for training the neural network and the data from other ten stations is used to validate the neural model. The neural network utilizes latitude, longitude, altitude, sunshine duration and period number to parameterize solar radiation values. The testing data was not used in the training to demonstrate the performance of the neural network in unknown stations to parameterize solar radiation. The results indicate a good agreement between the parameterized solar radiation values and actual measured values

  13. Spike Neural Models Part II: Abstract Neural Models

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2018-02-01

    Full Text Available Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNNs), though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use fewer computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in its parameters, and the leaky integrate-and-fire (LIF) model, which is not biologically realistic but quickly and easily integrates input to produce spikes. Izhikevich's model is based on the Hodgkin-Huxley model but is simplified so that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in an SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating an SNN.
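    A minimal Euler-stepped LIF simulation (the constants here are illustrative, not taken from the tutorial) shows the single-equation integrate-and-reset cycle that makes the model so cheap for SNNs:

```python
# Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R * I,
# with a spike and reset whenever V crosses threshold.
# All constants are illustrative, not taken from the tutorial.
tau, R = 20.0, 1.0                            # time constant (ms), resistance
V_rest, V_th, V_reset = -65.0, -50.0, -65.0   # mV
dt, T = 0.1, 200.0                            # Euler step, duration (ms)
I = 20.0                                      # constant input current

V = V_rest
spike_times = []
for k in range(int(T / dt)):
    V += dt / tau * (-(V - V_rest) + R * I)   # leaky integration
    if V >= V_th:
        spike_times.append(k * dt)
        V = V_reset                           # fire-and-reset

n_spikes = len(spike_times)   # regular spiking; rate set by tau, I, threshold
```

    Because the steady-state voltage V_rest + R·I sits above threshold, the neuron fires periodically; lowering I below 15 here would silence it, which is the simplest illustration of LIF's input-rate coding.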

  14. Optical resonators and neural networks

    Science.gov (United States)

    Anderson, Dana Z.

    1986-08-01

    It may be possible to implement neural network models using continuous-field optical architectures. These devices offer the inherent parallelism of propagating waves and an information density in principle dictated by the wavelength of light and the quality of the bulk optical elements. Few components are needed to construct a relatively large equivalent network. Various associative memories based on optical resonators have been demonstrated in the literature; a ring resonator design is discussed in detail here. Information is stored in a holographic medium and recalled through a competitive process in the gain medium supplying energy to the ring resonator. The resonator memory is the first realized example of a neural network function implemented with this kind of architecture.

  15. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented in circuits and are an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and the item in the set-valued map of the differential inclusion. In theory, the proposed network converges to the optimal solution set of the given problem. Furthermore, numerical experiments show the effectiveness of the proposed network.
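    The network in this record evolves continuous-time dynamics toward the optimum; a common discrete-time analogue of the same gradient-plus-projection idea for sparse reconstruction is iterative soft-thresholding (ISTA) applied to the l1-regularized least-squares problem. This is a generic ISTA sketch, not the authors' network:

```python
import numpy as np

def ista(A, b, lam=0.1, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by a gradient step on the
    smooth term followed by the soft-thresholding (proximal) operator."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - b)    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x
```

    The soft-thresholding step plays the role of the nonsmooth part of the dynamics: small coefficients are driven exactly to zero, which is what produces sparse solutions.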

  16. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  17. Genetic attack on neural cryptography.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido

    2006-03-01

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
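    Neural cryptography of the kind analyzed above is commonly formulated with tree parity machines whose weights are bounded by the synaptic depth L. The following is a minimal sketch of the public output and the Hebbian update with clipping to [-L, L]; the network sizes and the step cap are illustrative choices, not parameters from the paper.

```python
import random

def tpm_output(weights, inputs):
    """Tree parity machine output: the product of the hidden units' signs."""
    tau = 1
    for w_row, x_row in zip(weights, inputs):
        h = sum(w * x for w, x in zip(w_row, x_row))
        tau *= 1 if h >= 0 else -1
    return tau

def hebbian_update(weights, inputs, tau, L):
    """Hebbian rule: move the weights of agreeing hidden units toward the
    input, clipped to the synaptic depth [-L, L]."""
    for k, (w_row, x_row) in enumerate(zip(weights, inputs)):
        sigma = 1 if sum(w * x for w, x in zip(w_row, x_row)) >= 0 else -1
        if sigma == tau:
            weights[k] = [max(-L, min(L, w + tau * x))
                          for w, x in zip(w_row, x_row)]

# Two parties update only when their public outputs agree; bidirectional
# learning of this kind typically synchronizes the two weight sets, while a
# unidirectional attacker learns much more slowly -- the asymmetry the
# abstract describes.
random.seed(0)
K, N, L = 3, 4, 3
wa = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
wb = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
for _ in range(20000):
    if wa == wb:
        break
    x = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(K)]
    ta, tb = tpm_output(wa, x), tpm_output(wb, x)
    if ta == tb:
        hebbian_update(wa, x, ta, L)
        hebbian_update(wb, x, tb, L)
```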

  18. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial, or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  19. Genetic attack on neural cryptography

    International Nuclear Information System (INIS)

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido

    2006-01-01

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size

  20. Genetic attack on neural cryptography

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido

    2006-03-01

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.

  1. Scheduling with artificial neural networks

    OpenAIRE

    Gürgün, Burçkaan

    1993-01-01

    Ankara : Department of Industrial Engineering and The Institute of Engineering and Sciences of Bilkent Univ., 1993. Thesis (Master's) -- Bilkent University, 1993. Includes bibliographical references leaves 59-65. Artificial Neural Networks (ANNs) attempt to emulate the massively parallel and distributed processing of the human brain. They are being examined for a variety of problems that have been very difficult to solve. The objective of this thesis is to review the curren...

  2. Handbook on neural information processing

    CERN Document Server

    Maggini, Marco; Jain, Lakhmi

    2013-01-01

    This handbook presents some of the most recent topics in neural information processing, covering both theoretical concepts and practical applications. The contributions include: deep architectures; recurrent, recursive, and graph neural networks; cellular neural networks; Bayesian networks; approximation capabilities of neural networks; semi-supervised learning; statistical relational learning; kernel methods for structured data; multiple classifier systems; self-organisation and modal learning; applications to ...

  3. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    The paper introduces the Deep Simple Gated Unit (DSGU) and Simple Gated Unit (SGU), which are structures for learning long-term dependencies. Compared to traditional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), both structures require fewer parameters and less computation time in sequence classification tasks. Unlike GRU and LSTM

  4. Adaptive Graph Convolutional Neural Networks

    OpenAIRE

    Li, Ruoyu; Wang, Sheng; Zhu, Feiyun; Huang, Junzhou

    2018-01-01

    Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs to handle graph data such as molecular data, point clouds, and social networks. Current filters in graph CNNs are built for a fixed and shared graph structure. However, for most real data, the graph structures vary in both size and connectivity. The paper proposes a generalized and flexible graph CNN taking data of arbitrary graph structure as input. In that way a task-driven adaptive graph is learned for eac...

  5. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...

  6. Central neural pathways for thermoregulation

    Science.gov (United States)

    Morrison, Shaun F.; Nakamura, Kazuhiro

    2010-01-01

    Central neural circuits orchestrate a homeostatic repertoire to maintain body temperature during environmental temperature challenges and to alter body temperature during the inflammatory response. This review summarizes the functional organization of the neural pathways through which cutaneous thermal receptors alter thermoregulatory effectors: the cutaneous circulation for heat loss, the brown adipose tissue, skeletal muscle and heart for thermogenesis and species-dependent mechanisms (sweating, panting and saliva spreading) for evaporative heat loss. These effectors are regulated by parallel but distinct, effector-specific neural pathways that share a common peripheral thermal sensory input. The thermal afferent circuits include cutaneous thermal receptors, spinal dorsal horn neurons and lateral parabrachial nucleus neurons projecting to the preoptic area to influence warm-sensitive, inhibitory output neurons which control thermogenesis-promoting neurons in the dorsomedial hypothalamus that project to premotor neurons in the rostral ventromedial medulla, including the raphe pallidus, that descend to provide the excitation necessary to drive thermogenic thermal effectors. A distinct population of warm-sensitive preoptic neurons controls heat loss through an inhibitory input to raphe pallidus neurons controlling cutaneous vasoconstriction. PMID:21196160

  7. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    Full Text Available In this paper, the adaptive competitive learning (ACL neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. This algorithm uses a new proposed criterion referred to as the ACL criterion. This criterion evaluates different clustering structures produced by the ACL neural network for an input data set. Then, it selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm in the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm especially with data sets that are sparsely distributed.
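    The ACL algorithm above additionally selects the number of clusters via its own criterion; the winner-take-all competitive update it builds on can be sketched as follows. The learning rate, epoch count, and prototype initialization here are illustrative choices, not the paper's.

```python
import numpy as np

def competitive_learning(data, n_protos, lr=0.1, epochs=50, seed=0):
    """Winner-take-all competitive learning: for each input vector, only
    the nearest prototype (synaptic weight vector) moves toward it."""
    rng = np.random.default_rng(seed)
    protos = data[rng.choice(len(data), n_protos, replace=False)].astype(float)
    for _ in range(epochs):
        for x in data:
            winner = np.argmin(np.linalg.norm(protos - x, axis=1))
            protos[winner] += lr * (x - protos[winner])  # move winner only
    return protos
```

    On well-separated data, each prototype settles near the mean of one group, which is the "compact and balanced clusters" behavior the abstract attributes to the synaptic weight vectors.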

  8. Neural mechanisms of social dominance

    Science.gov (United States)

    Watanabe, Noriya; Yamamoto, Miyuki

    2015-01-01

    In a group setting, individuals' perceptions of their own level of dominance or of the dominance level of others, and the ability to adequately control their behavior based on these perceptions are crucial for living within a social environment. Recent advances in neural imaging and molecular technology have enabled researchers to investigate the neural substrates that support the perception of social dominance and the formation of a social hierarchy in humans. At the systems' level, recent studies showed that dominance perception is represented in broad brain regions which include the amygdala, hippocampus, striatum, and various cortical networks such as the prefrontal, and parietal cortices. Additionally, neurotransmitter systems such as the dopaminergic and serotonergic systems, modulate and are modulated by the formation of the social hierarchy in a group. While these monoamine systems have a wide distribution and multiple functions, it was recently found that the Neuropeptide B/W contributes to the perception of dominance and is present in neurons that have a limited projection primarily to the amygdala. The present review discusses the specific roles of these neural regions and neurotransmitter systems in the perception of dominance and in hierarchy formation. PMID:26136644

  9. Neural mechanisms of social dominance

    Directory of Open Access Journals (Sweden)

    Noriya eWatanabe

    2015-06-01

    Full Text Available In a group setting, individuals’ perceptions of their own level of dominance or of the dominance level of others, and the ability to adequately control their behavior based on these perceptions are crucial for living within a social environment. Recent advances in neural imaging and molecular technology have enabled researchers to investigate the neural substrates that support the perception of social dominance and the formation of a social hierarchy in humans. At the systems’ level, recent studies showed that dominance perception is represented in broad brain regions which include the amygdala, hippocampus, striatum, and various cortical networks such as the prefrontal, and parietal cortices. Additionally, neurotransmitter systems such as the dopaminergic and serotonergic systems, modulate and are modulated by the formation of the social hierarchy in a group. While these monoamine systems have a wide distribution and multiple functions, it was recently found that the Neuropeptide B/W contributes to the perception of dominance and is present in neurons that have a limited projection primarily to the amygdala. The present review discusses the specific roles of these neural regions and neurotransmitter systems in the perception of dominance and in hierarchy formation.

  10. Neural Representations of Physics Concepts.

    Science.gov (United States)

    Mason, Robert A; Just, Marcel Adam

    2016-06-01

    We used functional MRI (fMRI) to assess neural representations of physics concepts (momentum, energy, etc.) in juniors, seniors, and graduate students majoring in physics or engineering. Our goal was to identify the underlying neural dimensions of these representations. Using factor analysis to reduce the number of dimensions of activation, we obtained four physics-related factors that were mapped to sets of voxels. The four factors were interpretable as causal motion visualization, periodicity, algebraic form, and energy flow. The individual concepts were identifiable from their fMRI signatures with a mean rank accuracy of .75 using a machine-learning (multivoxel) classifier. Furthermore, there was commonality in participants' neural representation of physics; a classifier trained on data from all but one participant identified the concepts in the left-out participant (mean accuracy = .71 across all nine participant samples). The findings indicate that abstract scientific concepts acquired in an educational setting evoke activation patterns that are identifiable and common, indicating that science education builds abstract knowledge using inherent, repurposed brain systems. © The Author(s) 2016.

  11. Photon spectrometry utilizing neural networks

    International Nuclear Information System (INIS)

    Silveira, R.; Benevides, C.; Lima, F.; Vilela, E.

    2015-01-01

    Having in mind the time spent on the uneventful work of characterizing the radiation beams used in an ionizing radiation metrology laboratory, the Metrology Service of the Centro Regional de Ciencias Nucleares do Nordeste - CRCN-NE verified the applicability of artificial intelligence (artificial neural networks) to spectrometry in photon fields. For this, a multilayer neural network was developed as an application for the classification of patterns in energy, associated with a thermoluminescent dosimetric system (TLD-700 and TLD-600). A set of dosimeters was initially exposed to various well-known mean energies between 40 keV and 1.2 MeV, coinciding with the beams specified by the ISO 4037 standard, for a dose of 10 mSv in the quantity Hp(10) on a chest phantom (ISO slab phantom), with the purpose of generating a set of training data for the neural network. Subsequently, a new set of dosimeters irradiated at unknown energies was presented to the network in order to test the method. The methodology used in this work was suitable for the classification of energy beams, achieving 100% correct classification. (authors)

  12. Neural plasticity of development and learning.

    Science.gov (United States)

    Galván, Adriana

    2010-06-01

    Development and learning are powerful agents of change across the lifespan that induce robust structural and functional plasticity in neural systems. An unresolved question in developmental cognitive neuroscience is whether development and learning share the same neural mechanisms associated with experience-related neural plasticity. In this article, I outline the conceptual and practical challenges of this question, review insights gleaned from adult studies, and describe recent strides toward examining this topic across development using neuroimaging methods. I suggest that development and learning are not two completely separate constructs and instead, that they exist on a continuum. While progressive and regressive changes are central to both, the behavioral consequences associated with these changes are closely tied to the existing neural architecture of maturity of the system. Eventually, a deeper, more mechanistic understanding of neural plasticity will shed light on behavioral changes across development and, more broadly, about the underlying neural basis of cognition. (c) 2010 Wiley-Liss, Inc.

  13. Neurosecurity: security and privacy for neural devices.

    Science.gov (United States)

    Denning, Tamara; Matsuoka, Yoky; Kohno, Tadayoshi

    2009-07-01

    An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. This discipline holds the potential to improve many patients' lives dramatically by offering improved, and in some cases entirely new, forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensures that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define "neurosecurity", a version of computer science security principles and methods applied to neural engineering, and discuss why neurosecurity should be a critical consideration in the design of future neural devices.

  14. The Neural Border: Induction, Specification and Maturation of the territory that generates Neural Crest cells.

    Science.gov (United States)

    Pla, Patrick; Monsoro-Burq, Anne H

    2018-05-28

    The neural crest is induced at the edge between the neural plate and the nonneural ectoderm, in an area called the neural (plate) border, during gastrulation and neurulation. In recent years, many studies have explored how this domain is patterned and how the neural crest is induced within this territory, which also contributes to the prospective dorsal neural tube, the dorsalmost nonneural ectoderm, and placode derivatives in the anterior area. This review highlights the tissue interactions, cell-cell signaling, and molecular mechanisms involved in this dynamic spatiotemporal patterning, resulting in the induction of the premigratory neural crest. Collectively, these studies allow the building of a complex neural border and early neural crest gene regulatory network, mostly composed of transcriptional regulations but also, more recently, including novel signaling interactions. Copyright © 2018. Published by Elsevier Inc.

  15. Direct adaptive control using feedforward neural networks

    OpenAIRE

    Cajueiro, Daniel Oliveira; Hemerly, Elder Moreira

    2003-01-01

    ABSTRACT: This paper proposes a new scheme for direct neural adaptive control that works efficiently employing only one neural network, used for simultaneously identifying and controlling the plant. The idea behind this structure of adaptive control is to compensate the control input obtained by a conventional feedback controller. The neural network training process is carried out by using two different techniques: backpropagation and extended Kalman filter algorithm. Additionally, the conver...

  16. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.
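    As a concrete example of the supervised learning mentioned above, a single threshold neuron trained with the classic perceptron rule can learn the AND function. The neuron model and learning rate here are textbook illustrations, not material from this particular introduction.

```python
def neuron(x, w, b):
    """Threshold artificial neuron: fires (1) if the weighted sum is >= 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def train_perceptron(samples, lr=1, epochs=20):
    """Supervised perceptron rule: adjust weights by the prediction error."""
    w, b = [0, 0], 0
    for _ in range(epochs):
        for x, target in samples:
            err = target - neuron(x, w, b)      # supervised error signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

and_gate = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_gate)
```

    The error term is what makes this supervised: the teacher's target drives each weight change, in contrast to the unsupervised schemes the introduction also covers.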

  17. Mode Choice Modeling Using Artificial Neural Networks

    OpenAIRE

    Edara, Praveen Kumar

    2003-01-01

    Artificial intelligence techniques have produced excellent results in many diverse fields of engineering. Techniques such as neural networks and fuzzy systems have found their way into transportation engineering. In recent years, neural networks are being used instead of regression techniques for travel demand forecasting purposes. The basic reason lies in the fact that neural networks are able to capture complex relationships and learn from examples and also able to adapt when new data becom...

  18. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks based on gradient descent has a significant drawback of slow convergence. A Gauss-Newton method based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the applications of the RLS type algorithm to identification of nonlinear processes using a local recurrent neural network are also included in this paper

  19. Neural crest contributions to the lamprey head

    Science.gov (United States)

    McCauley, David W.; Bronner-Fraser, Marianne

    2003-01-01

    The neural crest is a vertebrate-specific cell population that contributes to the facial skeleton and other derivatives. We have performed focal DiI injection into the cranial neural tube of the developing lamprey in order to follow the migratory pathways of discrete groups of cells from origin to destination and to compare neural crest migratory pathways in a basal vertebrate to those of gnathostomes. The results show that the general pathways of cranial neural crest migration are conserved throughout the vertebrates, with cells migrating in streams analogous to the mandibular and hyoid streams. Caudal branchial neural crest cells migrate ventrally as a sheet of cells from the hindbrain and super-pharyngeal region of the neural tube and form a cylinder surrounding a core of mesoderm in each pharyngeal arch, similar to that seen in zebrafish and axolotl. In addition to these similarities, we also uncovered important differences. Migration into the presumptive caudal branchial arches of the lamprey involves both rostral and caudal movements of neural crest cells that have not been described in gnathostomes, suggesting that barriers that constrain rostrocaudal movement of cranial neural crest cells may have arisen after the agnathan/gnathostome split. Accordingly, neural crest cells from a single axial level contributed to multiple arches and there was extensive mixing between populations. There was no apparent filling of neural crest derivatives in a ventral-to-dorsal order, as has been observed in higher vertebrates, nor did we find evidence of a neural crest contribution to cranial sensory ganglia. These results suggest that migratory constraints and additional neural crest derivatives arose later in gnathostome evolution.

  20. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  1. The quest for a Quantum Neural Network

    OpenAIRE

    Schuld, M.; Sinayskiy, I.; Petruccione, F.

    2014-01-01

    With the overwhelming success in the field of quantum information in the last decades, the "quest" for a Quantum Neural Network (QNN) model began in order to combine quantum computing with the striking properties of neural computing. This article presents a systematic approach to QNN research, which so far consists of a conglomeration of ideas and proposals. It outlines the challenge of combining the nonlinear, dissipative dynamics of neural computing and the linear, unitary dynamics of quant...

  2. NeuroMEMS: Neural Probe Microtechnologies

    Directory of Open Access Journals (Sweden)

    Sam Musallam

    2008-10-01

    Full Text Available Neural probe technologies have already had a significant positive effect on our understanding of the brain by revealing the functioning of networks of biological neurons. Probes are implanted in different areas of the brain to record and/or stimulate specific sites in the brain. Neural probes are currently used in many clinical settings for diagnosis of brain diseases such as seizures, epilepsy, migraine, Alzheimer's, and dementia. These devices also assist paralyzed patients by allowing them to operate computers or robots using their neural activity. In recent years, probe technologies have been assisted by rapid advancements in microfabrication and microelectronic technologies, enabling highly functional and robust neural probes which are opening new and exciting avenues in neural sciences and brain machine interfaces. With a wide variety of probes that have been designed, fabricated, and tested to date, this review aims to provide an overview of the advances and recent progress in the microfabrication techniques of neural probes. In addition, we aim to highlight the challenges faced in developing and implementing ultralong multi-site recording probes that are needed to monitor neural activity from deeper regions in the brain. Finally, we review techniques that can improve the biocompatibility of the neural probes to minimize the immune response and encourage neural growth around the electrodes for long term implantation studies.

  3. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  4. Practical neural network recipes in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assum

  5. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  6. The LILARTI neural network system

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  7. Finite connectivity attractor neural networks

    International Nuclear Information System (INIS)

    Wemmenhove, B; Coolen, A C C

    2003-01-01

    We study a family of diluted attractor neural networks with a finite average number of (symmetric) connections per neuron. As in finite connectivity spin glasses, their equilibrium properties are described by order parameter functions, for which we derive an integral equation in replica symmetric approximation. A bifurcation analysis of this equation reveals the locations of the paramagnetic to recall and paramagnetic to spin-glass transition lines in the phase diagram. The line separating the retrieval phase from the spin-glass phase is calculated at zero temperature. All phase transitions are found to be continuous

  8. Neural Network Based Load Frequency Control for Restructuring ...

    African Journals Online (AJOL)

    Neural Network Based Load Frequency Control for Restructuring Power Industry. ... an artificial neural network (ANN) application of load frequency control (LFC) of a Multi-Area power system by using a neural network controller is presented.

  9. In-vitro differentiation induction of neural stem cells

    NARCIS (Netherlands)

    Balasubramaniyan, Veerakumar

    2006-01-01

    Neural stem cells give rise to the three main cell types of our nervous system. Veerakumar Balasubramaniyan investigated how neural stem cells can be induced to produce specific neural cell types. Using genetic techniques, he succeeded in obtaining oligodendrocytes:

  10. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  11. Imaging Posture Veils Neural Signals

    Directory of Open Access Journals (Sweden)

    Robert T Thibault

    2016-10-01

    Full Text Available Whereas modern brain imaging often demands holding body positions incongruent with everyday life, posture governs both neural activity and cognitive performance. Humans commonly perform while upright; yet, many neuroimaging methodologies require participants to remain motionless and adhere to non-ecological comportments within a confined space. This inconsistency between ecological postures and imaging constraints undermines the transferability and generalizability of many a neuroimaging assay. Here we highlight the influence of posture on brain function and behavior. Specifically, we challenge the tacit assumption that brain processes and cognitive performance are comparable across a spectrum of positions. We provide an integrative synthesis regarding the increasingly prominent influence of imaging postures on autonomic function, mental capacity, sensory thresholds, and neural activity. Arguing that neuroimagers and cognitive scientists could benefit from considering the influence posture wields on both general functioning and brain activity, we examine existing imaging technologies and the potential of portable and versatile imaging devices (e.g., functional near infrared spectroscopy). Finally, we discuss ways that accounting for posture may help unveil the complex brain processes of everyday cognition.

  12. Learning in Artificial Neural Systems

    Science.gov (United States)

    Matheus, Christopher J.; Hohensee, William E.

    1987-01-01

    This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described, and compared with classical Machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified, and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.
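
    The paper's point that ANS learning works by the modification of individual weights is easiest to see in the classic perceptron rule; a minimal sketch (a standard textbook example, not taken from the paper) learns the linearly separable AND function:

```python
def perceptron_train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge each weight on every misclassification."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def perceptron_predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The AND function is linearly separable, so the rule converges.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
```

    The knowledge ends up distributed across the weights and bias rather than stored as an explicit rule, which is exactly the contrast with classical machine learning the paper draws.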

  13. Neural dynamics in reconfigurable silicon.

    Science.gov (United States)

    Basu, A; Ramakrishnan, S; Petre, C; Koziol, S; Brink, S; Hasler, P E

    2010-10-01

    A neuromorphic analog chip is presented that is capable of implementing massively parallel neural computations while retaining the programmability of digital systems. We show measurements from neurons with Hopf bifurcations and integrate and fire neurons, excitatory and inhibitory synapses, passive dendrite cables, coupled spiking neurons, and central pattern generators implemented on the chip. This chip provides a platform for not only simulating detailed neuron dynamics but also uses the same to interface with actual cells in applications such as a dynamic clamp. There are 28 computational analog blocks (CAB), each consisting of ion channels with tunable parameters, synapses, winner-take-all elements, current sources, transconductance amplifiers, and capacitors. There are four other CABs which have programmable bias generators. The programmability is achieved using floating gate transistors with on-chip programming control. The switch matrix for interconnecting the components in CABs also consists of floating-gate transistors. Emphasis is placed on replicating the detailed dynamics of computational neural models. Massive computational area efficiency is obtained by using the reconfigurable interconnect as synaptic weights, resulting in more than 50 000 possible 9-b accurate synapses in 9 mm(2).
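
    The integrate-and-fire neurons implemented on the chip have a simple software counterpart, the leaky integrate-and-fire model; the sketch below uses illustrative parameter values (threshold, time constant), not the chip's measured ones:

```python
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_thresh=1.0,
                 v_reset=0.0, steps=1000):
    """Leaky integrate-and-fire: dv/dt = (I - v) / tau; spike and reset at threshold."""
    v = 0.0
    spikes = 0
    for _ in range(steps):
        v += dt * (input_current - v) / tau   # Euler step of the leaky integration
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

# A subthreshold current never fires; a stronger drive fires repeatedly.
low = simulate_lif(0.9)
high = simulate_lif(2.0)
```

    The firing rate grows with the drive once the steady-state voltage exceeds threshold, the basic current-to-rate behavior that analog ion-channel circuits reproduce physically.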

  14. Chimera States in Neural Oscillators

    Science.gov (United States)

    Bahar, Sonya; Glaze, Tera

    2014-03-01

    Chimera states have recently been explored both theoretically and experimentally, in various coupled nonlinear oscillators, ranging from phase-oscillator models to coupled chemical reactions. In a chimera state, both coherent and incoherent (or synchronized and desynchronized) states occur simultaneously in populations of identical oscillators. We investigate chimera behavior in a population of neural oscillators using the Huber-Braun model, a Hodgkin-Huxley-like model originally developed to characterize the temperature-dependent bursting behavior of mammalian cold receptors. One population of neurons is allowed to synchronize, with each neuron receiving input from all the others in its group (global within-group coupling). Subsequently, a second population of identical neurons is placed under an identical global within-group coupling, and the two populations are also coupled to each other (between-group coupling). For certain values of the coupling constants, the neurons in the two populations exhibit radically different synchronization behavior. We will discuss the range of chimera activity in the model, and discuss its implications for actual neural activity, such as unihemispheric sleep.
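
    The global within-group coupling described here can be illustrated with a far simpler phase-oscillator toy (Kuramoto-style mean-field coupling, not the Huber-Braun model): identical coupled oscillators are pulled toward their mean field and synchronize, while an uncoupled population keeps its initial spread.

```python
import cmath
import math

def order_parameter(phases):
    """Kuramoto order parameter r: 1 = fully synchronized, near 0 = incoherent."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

def simulate_population(n=20, coupling=2.0, dt=0.01, steps=2000):
    """Identical oscillators (natural frequency 1.0) under global mean-field coupling."""
    phases = [4.0 * i / n for i in range(n)]   # initial spread over ~4 radians
    for _ in range(steps):
        mean_field = sum(cmath.exp(1j * p) for p in phases) / n
        r, psi = abs(mean_field), cmath.phase(mean_field)
        phases = [p + dt * (1.0 + coupling * r * math.sin(psi - p)) for p in phases]
    return order_parameter(phases)

r_coupled = simulate_population(coupling=2.0)    # pulled into synchrony
r_uncoupled = simulate_population(coupling=0.0)  # initial spread persists
```

    A chimera state corresponds to one such population settling near r = 1 while an identically coupled sibling population stays incoherent; the toy shows only the two limiting behaviors.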

  15. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    user

    secure and dependable protection for power transformers. Owing to its superior learning and generalization capabilities Artificial. Neural Network (ANN) can considerably enhance the scope of WI method. ANN approach is faster, robust and easier to implement than the conventional waveform approach. The use of neural ...

  16. Neural network signal understanding for instrumentation

    DEFF Research Database (Denmark)

    Pau, L. F.; Johansen, F. S.

    1990-01-01

    understanding research is surveyed, and the selected implementation and its performance in terms of correct classification rates and robustness to noise are described. Formal results on neural net training time and sensitivity to weights are given. A theory for neural control using functional link nets is given...

  17. A Chip for an Implantable Neural Stimulator

    DEFF Research Database (Denmark)

    Gudnason, Gunnar; Bruun, Erik; Haugland, Morten

    2000-01-01

    This paper describes a chip for a multichannel neural stimulator for functional electrical stimulation (FES). The purpose of FES is to restore muscular control in disabled patients. The chip performs all the signal processing required in an implanted neural stimulator. The power and digital data...

  18. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  19. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others:MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords : neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  20. A high-speed analog neural processor

    NARCIS (Netherlands)

    Masa, P.; Masa, Peter; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    Targeted at high-energy physics research applications, our special-purpose analog neural processor can classify up to 70 dimensional vectors within 50 nanoseconds. The decision-making process of the implemented feedforward neural network enables this type of computation to tolerate weight

  1. Neurophysiology and neural engineering: a review.

    Science.gov (United States)

    Prochazka, Arthur

    2017-08-01

    Neurophysiology is the branch of physiology concerned with understanding the function of neural systems. Neural engineering (also known as neuroengineering) is a discipline within biomedical engineering that uses engineering techniques to understand, repair, replace, enhance, or otherwise exploit the properties and functions of neural systems. In most cases neural engineering involves the development of an interface between electronic devices and living neural tissue. This review describes the origins of neural engineering, the explosive development of methods and devices commencing in the late 1950s, and the present-day devices that have resulted. The barriers to interfacing electronic devices with living neural tissues are many and varied, and consequently there have been numerous stops and starts along the way. Representative examples are discussed. None of this could have happened without a basic understanding of the relevant neurophysiology. I also consider examples of how neural engineering is repaying the debt to basic neurophysiology with new knowledge and insight. Copyright © 2017 the American Physiological Society.

  2. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.......This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process....
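
    A Multi Layer Perceptron of the kind trained here can be sketched in a few lines of NumPy (an illustrative toy regression, not the paper's prediction and control setup): one tanh hidden layer, trained by backpropagation to fit a simple non-linear target.

```python
import numpy as np

def mlp_forward(X, W1, b1, W2, b2):
    """One tanh hidden layer, linear output."""
    H = np.tanh(X @ W1 + b1)
    return H @ W2 + b2, H

def mlp_train_step(X, y, W1, b1, W2, b2, lr=0.1):
    """One gradient-descent step on mean squared error (backpropagation)."""
    y_hat, H = mlp_forward(X, W1, b1, W2, b2)
    err = y_hat - y                      # dLoss/dy_hat (up to a constant)
    n = len(X)
    dW2 = H.T @ err / n
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)     # backpropagate through tanh
    dW1 = X.T @ dH / n
    db1 = dH.mean(axis=0)
    return W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2

# Fit the simple non-linear target y = x^2 on a handful of points.
rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 20).reshape(-1, 1)
y = X ** 2
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
loss_before = float(np.mean((mlp_forward(X, W1, b1, W2, b2)[0] - y) ** 2))
for _ in range(500):
    W1, b1, W2, b2 = mlp_train_step(X, y, W1, b1, W2, b2)
loss_after = float(np.mean((mlp_forward(X, W1, b1, W2, b2)[0] - y) ** 2))
```

    The same forward/backward structure underlies prediction, simulation, and control uses of an MLP; only the training signal changes.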

  3. Interpretable neural networks with BP-SOM

    NARCIS (Netherlands)

    Weijters, A.J.M.M.; Bosch, van den A.P.J.; Pobil, del A.P.; Mira, J.; Ali, M.

    1998-01-01

    Artificial Neural Networks (ANNS) are used successfully in industry and commerce. This is not surprising since neural networks are especially competitive for complex tasks for which insufficient domain-specific knowledge is available. However, interpretation of models induced by ANNS is often

  4. Deciphering the Cognitive and Neural Mechanisms Underlying ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Deciphering the Cognitive and Neural Mechanisms Underlying Auditory Learning. This project seeks to understand the brain mechanisms necessary for people to learn to perceive sounds. Neural circuits and learning. The research team will test people with and without musical training to evaluate their capacity to learn ...

  5. The neural network approach to parton fitting

    International Nuclear Information System (INIS)

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-01-01

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits

  6. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    OpenAIRE

    Jerzy Balicki; Piotr Dryja; Waldemar Korłub; Piotr Przybyłek; Maciej Tyszka; Marcin Zadroga; Marcin Zakidalski

    2016-01-01

    Artificial neural networks can be used to predict share investments on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of combining neural network algorithms with evolutionary methods and support vector machines. In addition, a reference is made to other methods of artificial intelligence which are used in financial prediction.

  7. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2016-06-01

    Full Text Available Artificial neural networks can be used to predict share investments on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of combining neural network algorithms with evolutionary methods and support vector machines. In addition, a reference is made to other methods of artificial intelligence which are used in financial prediction.

  8. Neural Network to Solve Concave Games

    OpenAIRE

    Liu, Zixin; Wang, Nengfa

    2014-01-01

    The issue of using a neural network method to solve concave games is addressed. Combining variational inequalities, the Ky Fan inequality, and projection equations, concave games are transformed into a neural network model. On the basis of Lyapunov stability theory, some stability results are also given. Finally, simulation results for two classic games are given to illustrate the theoretical results.

  9. Neural Network Algorithm for Particle Loading

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given

  10. Neural constructivism or self-organization?

    NARCIS (Netherlands)

    van der Maas, H.L.J.; Molenaar, P.C.M.

    2000-01-01

    Comments on the article by S. R. Quartz et al (see record 1998-00749-001) which discussed the constructivist perspective of interaction between cognition and neural processes during development and consequences for theories of learning. Three arguments are given to show that neural constructivism

  11. Memory in Neural Networks and Glasses

    NARCIS (Netherlands)

    Heerema, M.

    2000-01-01

    The thesis models a neural network in a way which, at essential points, is biologically realistic. In a biological context, the changes of the synapses of the neural network are most often described by what is called `Hebb's learning rule'. On careful analysis it is, in fact, nothing but a

  12. Windowed active sampling for reliable neural learning

    NARCIS (Netherlands)

    Barakova, E.I; Spaanenburg, L

    The composition of the example set has a major impact on the quality of neural learning. The popular approach is focused on extensive pre-processing to bridge the representation gap between process measurement and neural presentation. In contrast, windowed active sampling attempts to solve these

  13. Neural Control of the Immune System

    Science.gov (United States)

    Sundman, Eva; Olofsson, Peder S.

    2014-01-01

    Neural reflexes support homeostasis by modulating the function of organ systems. Recent advances in neuroscience and immunology have revealed that neural reflexes also regulate the immune system. Activation of the vagus nerve modulates leukocyte cytokine production and alleviates experimental shock and autoimmune disease, and recent data have…

  14. A new perspective on behavioral inconsistency and neural noise in aging: Compensatory speeding of neural communication

    Directory of Open Access Journals (Sweden)

    S. Lee Hong

    2012-09-01

    Full Text Available This paper seeks to present a new perspective on the aging brain. Here, we make connections between two key phenomena of brain aging: (1) increased neural noise, or random background activity; and (2) slowing of brain activity. Our perspective proposes the possibility that the slowing of neural processing due to decreasing nerve conduction velocities leads to a compensatory speeding of neuron firing rates. These increased firing rates lead to a broader distribution of power in the frequency spectrum of neural oscillations, which, we propose, can just as easily be interpreted as neural noise. Compensatory speeding of neural activity, as we present it, is constrained by: (a) the availability of metabolic energy sources; and (b) competition for the frequency bandwidth needed for neural communication. We propose that these constraints lead to the eventual inability to compensate for age-related declines in neural function that are manifested clinically as deficits in cognition, affect, and motor behavior.

  15. 22nd Italian Workshop on Neural Nets

    CERN Document Server

    Bassis, Simone; Esposito, Anna; Morabito, Francesco

    2013-01-01

    This volume collects a selection of contributions which have been presented at the 22nd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare (Salerno), Italy, during May 17-19, 2012. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book, like the workshop, is organized in three main components: two special sessions and a group of regular sessions featuring different aspects and points of view of artificial neural networks and natural intelligence, also including applications of present compelling interest.

  16. Dynamic decomposition of spatiotemporal neural signals.

    Directory of Open Access Journals (Sweden)

    Luca Ambrogioni

    2017-05-01

    Full Text Available Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals.

  17. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem™ (SPW™), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  18. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning for models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all the necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics who have an interest in app...

  19. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  20. Decentralized neural control application to robotics

    CERN Document Server

    Garcia-Hernandez, Ramon; Sanchez, Edgar N; Alanis, Alma y; Ruz-Hernandez, Jose A

    2017-01-01

    This book provides a decentralized approach for the identification and control of robotics systems. It also presents recent research in decentralized neural control and includes applications to robotics. Decentralized control is free from difficulties due to complexity in design, debugging, data gathering and storage requirements, making it preferable for interconnected systems. Furthermore, as opposed to the centralized approach, it can be implemented with parallel processors. This approach deals with four decentralized control schemes, which are able to identify the robot dynamics. The training of each neural network is performed on-line using an extended Kalman filter (EKF). The first indirect decentralized control scheme applies the discrete-time block control approach, to formulate a nonlinear sliding manifold. The second direct decentralized neural control scheme is based on the backstepping technique, approximated by a high order neural network. The third control scheme applies a decentralized neural i...

  1. Neural networks in management information systems

    Directory of Open Access Journals (Sweden)

    Jana Weinlichová

    2009-01-01

    Full Text Available Management Information Systems and Business Intelligence are used to gain an overview of all the data that are collected, analyzed, and evaluated, and to predict future incidents. Where standard methods of data processing cannot be applied, Artificial Intelligence can be applied with benefit. This article refers to the proven abilities of Neural Networks, which are supported by many software products aimed at providing effective solutions to managerial issues; those products are positioned as primary support for solving managerial issues. We compared products that use Neural Networks with Management Information Systems in order to find a real possibility of applying Neural Networks as a direct part of a Management Information System (MIS). The article presents possibilities for applying Neural Networks to different types of tasks in MIS.

  2. Noradrenergic modulation of neural erotic stimulus perception.

    Science.gov (United States)

    Graf, Heiko; Wiegers, Maike; Metzger, Coraline Danielle; Walter, Martin; Grön, Georg; Abler, Birgit

    2017-09-01

    We recently investigated neuromodulatory effects of the noradrenergic agent reboxetine and the dopamine receptor affine amisulpride in healthy subjects on dynamic erotic stimulus processing. Whereas amisulpride left sexual functions and neural activations unimpaired, we observed detrimental activations under reboxetine within the caudate nucleus corresponding to motivational components of sexual behavior. However, broadly impaired subjective sexual functioning under reboxetine suggested effects on further neural components. We now investigated the same sample under these two agents with static erotic picture stimulation as alternative stimulus presentation mode to potentially observe further neural treatment effects of reboxetine. 19 healthy males were investigated under reboxetine, amisulpride and placebo for 7 days each within a double-blind cross-over design. During fMRI static erotic picture were presented with preceding anticipation periods. Subjective sexual functions were assessed by a self-reported questionnaire. Neural activations were attenuated within the caudate nucleus, putamen, ventral striatum, the pregenual and anterior midcingulate cortex and in the orbitofrontal cortex under reboxetine. Subjective diminished sexual arousal under reboxetine was correlated with attenuated neural reactivity within the posterior insula. Again, amisulpride left neural activations along with subjective sexual functioning unimpaired. Neither reboxetine nor amisulpride altered differential neural activations during anticipation of erotic stimuli. Our results verified detrimental effects of noradrenergic agents on neural motivational but also emotional and autonomic components of sexual behavior. Considering the overlap of neural network alterations with those evoked by serotonergic agents, our results suggest similar neuromodulatory effects of serotonergic and noradrenergic agents on common neural pathways relevant for sexual behavior. Copyright © 2017 Elsevier B.V. 

  3. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    are examined. The models are separated into three groups representing input/output descriptions as well as state space descriptions: - Models, where all in- and outputs are measurable (static networks). - Models, where some inputs are non-measurable (recurrent networks). - Models, where some in- and some...... outputs are non-measurable (recurrent networks with incomplete state information). The three groups are ordered in increasing complexity, and for each group it is shown how to solve the problems concerning training and application of the specific model type. Of particular interest are the model types...... Kalmann filter) representing state space description. The potentials of neural networks for control of non-linear processes are also examined, focusing on three different groups of control concepts, all considered as generalizations of known linear control concepts to handle also non-linear processes...

  4. Primary neural leprosy: systematic review

    Directory of Open Access Journals (Sweden)

    Jose Antonio Garbino

    2013-06-01

    Full Text Available The authors proposed a systematic review on the current concepts of primary neural leprosy by consulting the following online databases: MEDLINE, Lilacs/SciELO, and Embase. Selected studies were classified based on the degree of recommendation and levels of scientific evidence according to the “Oxford Centre for Evidence-based Medicine”. The following aspects were reviewed: cutaneous clinical and laboratorial investigations, i.e. skin clinical exam, smears, and biopsy, and Mitsuda's reaction; neurological investigation (anamnesis, electromyography and nerve biopsy); serological investigation and molecular testing, i.e. serological testing for the detection of the phenolic glycolipid 1 (PGL-I) and the polymerase chain reaction (PCR); and treatment (classification criteria for the definition of specific treatment, steroid treatment, and cure criteria).

  5. Neural Tube Defects and Pregnancy

    Directory of Open Access Journals (Sweden)

    Emine Çoşar

    2009-09-01

    Full Text Available OBJECTIVE: Neural tube defects are congenital malformations that mostly cause life-long morbidity. They can be prevented by periconceptional folic acid use and detected by prenatal diagnostic methods. MATERIALS-METHODS: Pregnant women from Afyonkarahisar and neighbouring cities who applied to our hospital and whose fetuses were diagnosed with NTD were investigated. RESULTS: In our obstetrics clinic 1403 deliveries took place, and 43 of them involved a fetus with NTD. Among these fetuses 41.3% had meningomyelocele, 17.4% had meningocele, 21.7% had encephalocele, 8.7% had anencephaly and 4.3% had iniencephaly. CONCLUSION: The incidence of NTD is high in our region; geographic region, nutrition and other socioeconomic factors may be related to the high incidence. Education of the mother and periconceptional folic acid use may reduce the incidence of NTD.

  6. Collision avoidance using neural networks

    Science.gov (United States)

    Sugathan, Shilpa; Sowmya Shree, B. V.; Warrier, Mithila R.; Vidhyapathi, C. M.

    2017-11-01

    Nowadays, accidents on roads are caused by the negligence of drivers and pedestrians or by unexpected obstacles that come into the vehicle's path. In this paper, a model (robot) is developed to assist drivers in smooth, accident-free travel. It reacts to real-time obstacles on the four critical sides of the vehicle and takes the necessary action. The sensor used for detecting obstacles was an IR proximity sensor. A single-layer perceptron neural network is used to train and test all possible combinations of sensor readings using Matlab (offline). A microcontroller (ARM Cortex-M3 LPC1768) is used to control the vehicle through the output data received from Matlab via serial communication. Hence, the vehicle becomes capable of reacting to any combination of real-time obstacles.
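
    The core of the approach in this record, a single-layer perceptron trained on every combination of binary proximity-sensor readings, can be sketched in a few lines. The sketch below is in Python rather than the Matlab used by the authors, and the braking rule (react exactly when the front sensor fires) is a hypothetical stand-in for the paper's actual obstacle-avoidance logic.

```python
import itertools

def train_perceptron(samples, epochs=50, lr=0.1):
    """Classic perceptron learning rule for one binary output unit."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y                      # 0 when the sample is classified correctly
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical rule standing in for the paper's logic:
# output 1 ("brake") exactly when the front sensor (index 0) fires.
data = [(x, x[0]) for x in itertools.product([0, 1], repeat=4)]
w, b = train_perceptron(data)
```

Because the target is linearly separable, the perceptron convergence theorem guarantees the loop above reaches zero training error; the learned weights can then be burned into the microcontroller as a fixed lookup.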

  7. Neural networks: a biased overview

    International Nuclear Information System (INIS)

    Domany, E.

    1988-01-01

    An overview of recent activity in the field of neural networks is presented. The long-range aim of this research is to understand how the brain works. First some of the problems are stated and terminology defined; then an attempt is made to explain why physicists are drawn to the field, and their main potential contribution. In particular, in recent years some interesting models have been introduced by physicists. A small subset of these models is described, with particular emphasis on those that are analytically soluble. Finally a brief review of the history and recent developments of single- and multilayer perceptrons is given, bringing the situation up to date regarding the central immediate problem of the field: the search for a learning algorithm that has an associated convergence theorem.

  8. Application of neural network to CT

    International Nuclear Information System (INIS)

    Ma, Xiao-Feng; Takeda, Tatsuoki

    1999-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multilayer neural network. Multilayer neural networks are extensively investigated and practically applied to the solution of various problems such as inverse problems or time series prediction problems. By learning an input-output mapping from a set of examples, neural networks can be regarded as synthesizing an approximation of a multidimensional function (that is, solving the problem of hypersurface reconstruction, including smoothing and interpolation). From this viewpoint, neural networks are well suited to the solution of CT image reconstruction. Though the conventionally used objective function of a neural network is composed of a sum of squared errors of the output data, we can define an objective function composed of a sum of residuals of an integral equation. By employing an appropriate line integral for this integral equation, we can construct a neural network that can be used for CT. We applied this method to some model problems and obtained satisfactory results. As this reconstruction method does not require discretizing the integral equation, application to problems with complicated geometrical shapes is also feasible. Moreover, in neural networks interpolation is performed quite smoothly; as a result, inverse mapping can be achieved even in the presence of experimental and numerical errors. However, use of the conventional back-propagation technique for optimization leads to a high computational cost. To overcome this drawback, second-order optimization methods or parallel computing will be applied in the future. (J.P.N.)

  9. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments.

  10. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451

  11. Permutation parity machines for neural synchronization

    International Nuclear Information System (INIS)

    Reyes, O M; Kopitzke, I; Zimmermann, K-H

    2009-01-01

    Synchronization of neural networks has been studied in recent years as an alternative approach to cryptographic applications such as the realization of symmetric key exchange protocols. This paper presents a first view of the so-called permutation parity machine, an artificial neural network proposed as a binary variant of the tree parity machine. The dynamics of the synchronization process by mutual learning between permutation parity machines is analytically studied and the results are compared with those of tree parity machines. It turns out that for neural synchronization, permutation parity machines form a viable alternative to tree parity machines.

  12. Enhancing neural-network performance via assortativity

    International Nuclear Information System (INIS)

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-01-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  13. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, in which the neural network topology and other parameters were fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.

  14. Stock market index prediction using neural networks

    Science.gov (United States)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data of the Wall Street Journal's Dow Jones Industrial Index has been used as a benchmark in our experiments, where Radial Basis Function based neural networks have been designed to model these indices over the period from January 1988 to December 1992. A notable success has been achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network represents an excellent candidate for predicting the stock market index.
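
    A Radial Basis Function network of the kind this record describes is a linear combination of localized (typically Gaussian) basis functions. A minimal sketch, assuming hand-picked Gaussian centers and an LMS-style update of the output weights only; the target series here is a synthetic sine curve, not the Dow Jones data used in the paper:

```python
import math

def rbf_features(x, centers, width=0.5):
    """Gaussian basis functions, one per (hand-picked) center."""
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def fit_rbf(xs, ys, centers, lr=0.05, epochs=2000):
    """LMS-style training of the linear output weights."""
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = rbf_features(x, centers)
            err = y - sum(wi * pi for wi, pi in zip(w, phi))
            w = [wi + lr * err * pi for wi, pi in zip(w, phi)]
    return w

# Synthetic stand-in for an index series: one period of a sine wave.
xs = [i / 10 for i in range(21)]                 # 0.0 .. 2.0
ys = [math.sin(math.pi * x) for x in xs]
centers = [i / 4 for i in range(9)]              # 0.0, 0.25, ..., 2.0
w = fit_rbf(xs, ys, centers)
mse = sum((sum(wi * pi for wi, pi in zip(w, rbf_features(x, centers))) - y) ** 2
          for x, y in zip(xs, ys)) / len(xs)
```

In practice the centers and widths are themselves chosen from the data (e.g. by clustering); fixing them, as here, reduces training to a linear least-squares problem that the LMS loop approximates.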

  15. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W → qq̄, where W bosons are produced in pp̄ reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using 'intelligent' variables in instances when the amount of training instances is limited. (orig.)

  16. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed.

  17. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
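
    The link between quantile estimation and an asymmetric ("pinball") loss can be illustrated without the kernel-based neural construction of the paper: minimizing the pinball loss over a single scalar parameter already recovers the empirical quantile. A minimal sketch on synthetic data, using full-batch gradient steps (the loss and its subgradient are standard quantile-regression tools, not taken from this record):

```python
def pinball_grad(q, data, tau):
    """Subgradient of the mean pinball loss with respect to the estimate q."""
    return sum((1 if x <= q else 0) - tau for x in data) / len(data)

def estimate_quantile(data, tau, lr=50.0, steps=40):
    """Gradient descent on the pinball loss; converges to the tau-quantile."""
    q = 0.0
    for _ in range(steps):
        q -= lr * pinball_grad(q, data, tau)
    return q

data = list(range(1, 101))        # synthetic observations 1..100
q90 = estimate_quantile(data, 0.9)
```

A neural quantile estimator replaces the scalar q with a network output q(x) and backpropagates the same loss, which is what makes conditional quantiles learnable.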

  18. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for Quark-Gluon discrimination using calorimeter data, but unfortunately I didn’t manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data was not good enough for a Convolutional Neural Network, which is designed for ’image’ recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.
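
    For readers unfamiliar with the building block named here, the following pure-Python sketch shows what a single convolutional layer computes (a valid-mode cross-correlation followed by ReLU). It is illustrative only and unrelated to the Jet data discussed in the report; the edge-detector kernel is a made-up example.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (what CNN layers call 'convolution')."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def relu(m):
    return [[max(0, v) for v in row] for row in m]

# A vertical-edge detector applied to a tiny image with one vertical edge.
img = [[0, 0, 1, 1] for _ in range(4)]
k = [[-1, 1], [-1, 1]]
feat = relu(conv2d(img, k))      # activation peaks along the edge column
```

A real CNN stacks many such layers, learns the kernel values by backpropagation, and interleaves pooling; this is only the forward pass of one filter.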

  19. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, as the objective function for the training process of the neural network we employed residuals of the integral equation or the differential equations. This is different from conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems both methods gave satisfactory results, and the methods are considered promising for some kinds of problems. (author)

  20. A neural network approach to burst detection.

    Science.gov (United States)

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the issue of detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.

  1. Multistability in bidirectional associative memory neural networks

    International Nuclear Information System (INIS)

    Huang Gan; Cao Jinde

    2008-01-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of them are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  2. Multistability in bidirectional associative memory neural networks

    Science.gov (United States)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, and 2^n of them are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  3. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Science.gov (United States)

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  4. Neural correlates and neural computations in posterior parietal cortex during perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Alexander Huk

    2012-10-01

    Full Text Available A recent line of work has found remarkable success in relating perceptual decision-making and the spiking activity in the macaque lateral intraparietal area (LIP). In this review, we focus on questions about the neural computations in LIP that are not answered by demonstrations of neural correlates of psychological processes. We highlight three areas of limitations in our current understanding of the precise neural computations that might underlie neural correlates of decisions: (1) empirical questions not yet answered by existing data; (2) implementation issues related to how neural circuits could actually implement the mechanisms suggested by both physiology and psychology; and (3) ecological constraints related to the use of well-controlled laboratory tasks and whether they provide an accurate window on sensorimotor computation. These issues motivate the adoption of a more general encoding-decoding framework that will be fruitful for more detailed contemplation of how neural computations in LIP relate to the formation of perceptual decisions.

  5. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods, and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background, and nowadays can be widely and successfully implemented as a part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  6. Chondroitin sulfate effects on neural stem cell differentiation.

    Science.gov (United States)

    Canning, David R; Brelsford, Natalie R; Lovett, Neil W

    2016-01-01

    We have investigated the role chondroitin sulfate plays in cell interactions during neural plate formation in the early chick embryo. Using tissue culture isolates from the prospective neural plate, we have measured neural gene expression profiles associated with neural stem cell differentiation. Removal of chondroitin sulfate from stage 4 neural plate tissue leads to altered associations of N-cadherin-positive neural progenitors and causes changes in the normal sequence of neural marker gene expression. Absence of chondroitin sulfate in the neural plate leads to reduced Sox2 expression and is accompanied by an increase in the expression of anterior markers of neural regionalization. Results obtained in this study suggest that the presence of chondroitin sulfate in the anterior chick embryo is instrumental in maintaining cells in the neural precursor state.

  7. Neural processing of auditory signals and modular neural control for sound tropism of walking machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Pasemann, Frank; Fischer, Joern

    2005-01-01

    and a neural preprocessing system together with a modular neural controller are used to generate a sound tropism of a four-legged walking machine. The neural preprocessing network is acting as a low-pass filter and it is followed by a network which discerns between signals coming from the left or the right....... The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired different walking patterns such that the machine walks straight, then turns towards a switched-on sound source, and then stops near to it....

  8. Local Dynamics in Trained Recurrent Neural Networks.

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-23

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  9. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural network-based technique. Our method for the construction of a collision-free path for a moving robot among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next section of the robot's path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed techniques will be presented.

  10. water demand prediction using artificial neural network

    African Journals Online (AJOL)

    user

    2017-01-01

    Jan 1, 2017 ... Interface for activation and deactivation of valves. • Interface demand ... process could be done and monitored at the computer terminal as expected of a .... [15] Arbib, M. A. The Handbook of Brain Theory and Neural Networks.

  11. Machine Learning Topological Invariants with Neural Networks

    Science.gov (United States)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we supervisedly train neural networks to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show, remarkably, that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
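
    The "discrete version of the winding number formula" that the trained network is said to recover can be written down directly. A minimal sketch, assuming an SSH-type two-band chiral Hamiltonian h(k) = t1 + t2·e^{ik} (a standard choice, not specified by this record), which has winding number 1 when |t2| > |t1| and 0 otherwise:

```python
import cmath
import math

def winding_number(t1, t2, nk=400):
    """Accumulate phase increments of h(k) = t1 + t2*exp(ik) around the Brillouin zone."""
    total = 0.0
    prev = cmath.phase(t1 + t2)              # h(0)
    for n in range(1, nk + 1):
        k = 2.0 * math.pi * n / nk
        ph = cmath.phase(t1 + t2 * cmath.exp(1j * k))
        d = ph - prev
        if d > math.pi:                      # unwrap across the branch cut
            d -= 2.0 * math.pi
        elif d < -math.pi:
            d += 2.0 * math.pi
        total += d
        prev = ph
    return round(total / (2.0 * math.pi))
```

The training data in the paper would consist of discretized Hamiltonians labeled by exactly this kind of integer; the interesting finding is that the network reproduces the phase-accumulation structure on its own.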

  12. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

    In experimental particle physics, pattern recognition problems, specifically for neural network methods, occur frequently in track finding or feature extraction. Track finding is a combinatorial optimization problem. Given a set of points in Euclidean space, one tries the reconstruction of particle trajectories, subject to smoothness constraints. The basic ingredients in a neural network are the N binary neurons and the synaptic strengths connecting them. In our case the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates for each neuron the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments implied in the track reconstruction. An algorithm based on a Hopfield neural network has been developed and tested on track coordinates measured by a silicon microstrip tracking system.
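
    The local updating rule described in this record, each neuron taking the sign of its summed synaptic input, is the same dynamics that drives a generic Hopfield associative memory. The sketch below shows that rule with Hebbian weights and asynchronous updates on a single stored pattern; the track-finding application replaces these generic neurons with candidate segments and the Hebbian weights with the angle- and length-based synaptic strengths described in the abstract.

```python
def train_hopfield(patterns):
    """Hebbian outer-product rule with zero self-coupling."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, sweeps=5):
    """Asynchronous sign updates for a fixed number of full sweeps."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1, 1, -1]
W = train_hopfield([stored])
noisy = list(stored)
noisy[0] = -noisy[0]              # corrupt one "neuron"; recall repairs it
```

Each update can only lower the network energy, so the dynamics settles into an attractor; in the track-finding setting that attractor is the segment configuration forming smooth trajectories.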

  13. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  14. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  15. Local Dynamics in Trained Recurrent Neural Networks

    Science.gov (United States)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  16. Experimental Demonstrations of Optical Neural Computers

    OpenAIRE

    Hsu, Ken; Brady, David; Psaltis, Demetri

    1988-01-01

    We describe two experiments in optical neural computing. In the first a closed optical feedback loop is used to implement auto-associative image recall. In the second a perceptron-like learning algorithm is implemented with photorefractive holography.

  17. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision in predicting is very important, because prediction helps to know the forex value at a certain future time and can thereby reduce the risk of loss. The aim of this research is to predict the forex market using a neural network model with per-minute time series data, in order to determine the prediction accuracy and thus reduce the risk of running a forex business. The research method comprises data collection followed by training, learning, and testing using a neural network. After evaluation, the results of this research show that the neural network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk in running a forex business. Keywords: prediction, forex, neural network.
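
    As an illustration of the windowed time-series setup the abstract describes, the sketch below trains a single linear neuron by gradient descent on sliding windows of a synthetic series. The window length, learning rate, and sine-wave data are invented stand-ins, not the paper's forex data or network architecture.

```python
import math

def make_windows(series, k):
    """Turn a series into (window of k past values, next value) training pairs."""
    return [(series[i:i + k], series[i + k]) for i in range(len(series) - k)]

def train_linear_predictor(series, k=4, lr=0.05, epochs=300):
    """Fit one linear neuron y = w.x + b by stochastic gradient descent on squared error."""
    w, b = [0.0] * k, 0.0
    data = make_windows(series, k)
    for _ in range(epochs):
        for x, t in data:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            e = y - t
            w = [wi - lr * e * xi for wi, xi in zip(w, x)]
            b -= lr * e
    return w, b

def predict(w, b, window):
    return sum(wi * xi for wi, xi in zip(w, window)) + b

# Synthetic per-minute series: a slow sine wave (a sinusoid obeys an exact
# linear recurrence, so a linear predictor can learn it well).
series = [math.sin(2 * math.pi * i / 60) for i in range(300)]
w, b = train_linear_predictor(series)
err = predict(w, b, series[-4:]) - math.sin(2 * math.pi * 300 / 60)
```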

  18. NEURAL NETWORKS FOR STOCK MARKET OPTION PRICING

    Directory of Open Access Journals (Sweden)

    Sergey A. Sannikov

    2017-03-01

    Full Text Available Introduction: The use of neural networks for non-linear models helps to understand where linear model drawbacks, caused by their specification, reveal themselves. This paper attempts to find this out. The objective of the research is to determine the meaning of “option prices calculation using neural networks”. Materials and Methods: We use two kinds of variables: endogenous (variables included in the neural network model) and variables affecting the model (permanent disturbance). Results: All data are divided into 3 sets: learning, affirming and testing. All selected variables are normalised from 0 to 1. Extreme income values were truncated. Discussion and Conclusions: Using the 33-14-1 neural network with direct links we obtained two sets of forecasts. Optimal criteria of strategies in stock markets’ option pricing were developed.

  19. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity than other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
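
    A minimal software model of the probabilistic spiking described here: at each time step the neuron fires as a Bernoulli trial whose probability follows a logistic function of the input current, so a rate code emerges from the noise. The gain, step count, and seed are arbitrary illustrative choices, not the paper's FPGA design.

```python
import math
import random

def spike_prob(current, gain=1.0):
    """Logistic firing probability: a stochastic stand-in for a hard threshold."""
    return 1.0 / (1.0 + math.exp(-gain * current))

def simulate(current, steps=2000, seed=42):
    """Bernoulli spike train: each step fires with probability spike_prob(current)."""
    rng = random.Random(seed)
    p = spike_prob(current)
    spikes = [1 if rng.random() < p else 0 for _ in range(steps)]
    return sum(spikes) / steps   # empirical firing rate

# The mean firing rate tracks the input current monotonically.
rates = [simulate(c) for c in (-2.0, 0.0, 2.0)]
```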

  20. Neural adaptations to electrical stimulation strength training

    NARCIS (Netherlands)

    Hortobagyi, Tibor; Maffiuletti, Nicola A.

    2011-01-01

    This review provides evidence for the hypothesis that electrostimulation strength training (EST) increases the force of a maximal voluntary contraction (MVC) through neural adaptations in healthy skeletal muscle. Although electrical stimulation and voluntary effort activate muscle differently, there

  1. Optimal neural computations require analog processors

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural networks. Further, the focus will be on hardware-imposed constraints. They will present recent results for three different alternatives of parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and the delay will be related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and the authors suggest the following two alternatives: (1) cope with the limitations imposed by silicon, by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow the use of the third dimension (e.g. using optical interconnections).

  2. Artificial neural networks for plasma spectroscopy analysis

    International Nuclear Information System (INIS)

    Morgan, W.L.; Larsen, J.T.; Goldstein, W.H.

    1992-01-01

    Artificial neural networks have been applied to a variety of signal processing and image recognition problems. Of the several common neural models the feed-forward, back-propagation network is well suited for the analysis of scientific laboratory data, which can be viewed as a pattern recognition problem. The authors present a discussion of the basic neural network concepts and illustrate its potential for analysis of experiments by applying it to the spectra of laser produced plasmas in order to obtain estimates of electron temperatures and densities. Although these are high temperature and density plasmas, the neural network technique may be of interest in the analysis of the low temperature and density plasmas characteristic of experiments and devices in gaseous electronics

  3. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  4. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  5. Neural activation in stress-related exhaustion

    DEFF Research Database (Denmark)

    Gavelin, Hanna Malmberg; Neely, Anna Stigsdotter; Andersson, Micael

    2017-01-01

    The primary purpose of this study was to investigate the association between burnout and neural activation during working memory processing in patients with stress-related exhaustion. Additionally, we investigated the neural effects of cognitive training as part of stress rehabilitation. Fifty...... association between burnout level and working memory performance was found, however, our findings indicate that frontostriatal neural responses related to working memory were modulated by burnout severity. We suggest that patients with high levels of burnout need to recruit additional cognitive resources...... to uphold task performance. Following cognitive training, increased neural activation was observed during 3-back in working memory-related regions, including the striatum, however, low sample size limits any firm conclusions....

  6. Front Propagation in Stochastic Neural Fields

    KAUST Repository

    Bressloff, Paul C.; Webber, Matthew A.

    2012-01-01

    We analyze the effects of extrinsic multiplicative noise on front propagation in a scalar neural field with excitatory connections. Using a separation of time scales, we represent the fluctuating front in terms of a diffusive-like displacement

  7. Diagnosis method utilizing neural networks

    International Nuclear Information System (INIS)

    Watanabe, K.; Tamayama, K.

    1990-01-01

    Studies have been made on the technique of neural networks, which will be used to identify a cause of a small anomalous state in the reactor coolant system of the ATR (Advanced Thermal Reactor). Three phases of analyses were carried out in this study. First, simulation for 100 seconds was made to determine how the plant parameters respond after the occurrence of a transient decrease in reactivity, flow rate and temperature of feed water and increase in the steam flow rate and steam pressure, which would produce a decrease of water level in a steam drum of the ATR. Next, the simulation data was analysed utilizing an autoregressive model. From this analysis, a total of 36 coherency functions up to 0.5 Hz in each transient were computed among nine important and detectable plant parameters: neutron flux, flow rate of coolant, steam or feed water, water level in the steam drum, pressure and opening area of control valve in a steam pipe, feed water temperature and electrical power. Last, learning of neural networks composed of 96 input, 4-9 hidden and 5 output layer units was done by use of the generalized delta rule, namely a back-propagation algorithm. These convergent computations were continued until the difference between the desired outputs (1 for the direct cause, 0 for the four other ones) and the actual outputs reached less than 10%. (1) Coherency functions were not governed by the decreasing rate of reactivity in the range of 0.41x10^-2 dollar/s to 1.62x10^-2 dollar/s, by the decreasing depth of the feed water temperature in the range of 3 deg C to 10 deg C, or by a change of 10% or less in the three other causes. Change in coherency functions only depended on the type of cause. (2) The direct cause could be discriminated from the other four ones with 0.94+-0.01 of output level. A maximum of 0.06 output height was found among the other four causes. (3) Calculation load, which is represented as products of learning times and numbers of the hidden units, did not depend on the
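
    The coherency functions central to this analysis can be estimated from segment-averaged cross-spectra. The plain-NumPy sketch below computes a Welch-style magnitude-squared coherence; the segment count and test signals are invented, and the paper derived its coherency functions from an autoregressive model rather than FFT averaging.

```python
import numpy as np

def coherence(x, y, nseg=8):
    """Magnitude-squared coherence |Pxy|^2 / (Pxx * Pyy), averaged over segments."""
    n = len(x) // nseg
    X = np.array([np.fft.rfft(x[i * n:(i + 1) * n]) for i in range(nseg)])
    Y = np.array([np.fft.rfft(y[i * n:(i + 1) * n]) for i in range(nseg)])
    Pxx = np.mean(np.abs(X) ** 2, axis=0)          # auto-spectrum of x
    Pyy = np.mean(np.abs(Y) ** 2, axis=0)          # auto-spectrum of y
    Pxy = np.mean(X * np.conj(Y), axis=0)          # averaged cross-spectrum
    return np.abs(Pxy) ** 2 / (Pxx * Pyy)

rng = np.random.default_rng(1)
s = rng.standard_normal(4096)
noise = rng.standard_normal(4096)
c_same = coherence(s, s)        # identical signals: coherence is 1 at every frequency
c_indep = coherence(s, noise)   # independent signals: coherence well below 1
```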

  8. Parameter extraction with neural networks

    Science.gov (United States)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This is particularly severe, because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process.
Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs
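
    The 'reverse' use of a trained forward mapping can be sketched as follows. Here a toy closed-form function stands in for both the process simulator and the trained NN, and the inversion is a simple nearest-output search over a sampled input grid; the function, grid, and parameter names are invented for illustration and are not the authors' model.

```python
import math

def forward_model(dose, focus):
    """Toy stand-in for a lithography simulator: maps inputs to a measured CD."""
    return 100.0 - 20.0 * math.log(dose) + 5.0 * focus ** 2

def build_table(doses, focuses):
    """Sample the forward model on a grid (the 'training set' for inversion)."""
    return [((d, f), forward_model(d, f)) for d in doses for f in focuses]

def extract_parameters(measured_cd, table):
    """Invert the mapping: pick the grid inputs whose predicted output is closest."""
    return min(table, key=lambda item: abs(item[1] - measured_cd))[0]

doses = [0.8 + 0.05 * i for i in range(9)]     # 0.80 .. 1.20
focuses = [-0.2 + 0.1 * j for j in range(5)]   # -0.2 .. 0.2
table = build_table(doses, focuses)
# Recover the hidden inputs that produced an observed output.
guess = extract_parameters(forward_model(1.0, 0.0), table)
```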

  9. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Full Text Available Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made

  10. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. 
Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural
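
    The prediction/prediction-error exchange can be illustrated with a minimal linear predictive-coding loop: backward connections predict the sensory input, the forward residual is the prediction error, and the representation is refined iteratively until the error vanishes. The weights, dimensions, and learning rate below are invented for the sketch, not drawn from the article.

```python
import numpy as np

# Generative model: backward weights W predict sensory data x from causes r.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # 3 sensory channels, 2 hidden causes
true_causes = np.array([2.0, -1.0])
x = W @ true_causes                   # sensory input actually generated

r = np.zeros(2)                       # initial representation (percept)
errors = []
for _ in range(200):
    e = x - W @ r                     # forward sweep: prediction error
    r = r + 0.1 * (W.T @ e)           # backward sweep: refine the representation
    errors.append(float(np.linalg.norm(e)))
```

    Over iterations the prediction error decays to zero and the representation converges on the true causes, the "optimization of perception over a series of iterations" the abstract describes.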

  11. Neural control of magnetic suspension systems

    Science.gov (United States)

    Gray, W. Steven

    1993-01-01

    The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controller designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden layer feedforward networks trained via back propagation and one based on using Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.

  12. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.
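
    Although the paper's method trains a feedforward network in a coupled network/objective representation, the underlying idea of folding constraints into a trainable objective can be sketched with a plain quadratic-penalty gradient descent. This is a simplified stand-in under assumed parameters, not the authors' formulation.

```python
def solve_penalized(grad_f, grad_penalty, x0=0.0, lr=0.002, steps=20000):
    """Gradient descent on objective + penalty terms."""
    x = x0
    for _ in range(steps):
        x -= lr * (grad_f(x) + grad_penalty(x))
    return x

# Minimize (x - 2)^2 subject to x <= 1, via the quadratic penalty mu * max(0, x - 1)^2.
mu = 100.0
grad_f = lambda x: 2.0 * (x - 2.0)
grad_penalty = lambda x: 2.0 * mu * max(0.0, x - 1.0)
x_star = solve_penalized(grad_f, grad_penalty)
# With a finite penalty weight the minimizer sits slightly outside the feasible
# set, at x = 204/202 ~ 1.0099; increasing mu pushes it toward the constraint.
```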

  13. Feedforward Nonlinear Control Using Neural Gas Network

    OpenAIRE

    Machón-González, Iván; López-García, Hilario

    2017-01-01

    Nonlinear systems control is a main issue in control theory. Many developed applications suffer from a mathematical foundation not as general as the theory of linear systems. This paper proposes a control strategy of nonlinear systems with unknown dynamics by means of a set of local linear models obtained by a supervised neural gas network. The proposed approach takes advantage of the neural gas feature by which the algorithm yields a very robust clustering procedure. The direct model of the ...

  14. Neural networks and orbit control in accelerators

    International Nuclear Information System (INIS)

    Bozoki, E.; Friedman, A.

    1994-01-01

    An overview of the architecture, workings and training of Neural Networks is given. We stress the aspects which are important for the use of Neural Networks for orbit control in accelerators and storage rings, especially its ability to cope with the nonlinear behavior of the orbit response to 'kicks' and the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given

  15. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed
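
    The "conventional track fits" the chip's outputs were compared against are straightforward: a closed-form least-squares line through the per-layer hit positions. The layer coordinates and hit values below are invented for illustration.

```python
def fit_track(zs, xs):
    """Closed-form least-squares line x = slope * z + intercept through the hits."""
    n = len(zs)
    zbar = sum(zs) / n
    xbar = sum(xs) / n
    slope = (sum((z - zbar) * (x - xbar) for z, x in zip(zs, xs))
             / sum((z - zbar) ** 2 for z in zs))
    return slope, xbar - slope * zbar

# Four layers at known z; drift times converted to x positions (here an exact line).
zs = [0.0, 1.0, 2.0, 3.0]
xs = [0.5 + 0.25 * z for z in zs]
slope, intercept = fit_track(zs, xs)
```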

  16. Radioactive fallout and neural tube defects

    Directory of Open Access Journals (Sweden)

    Nejat Akar

    2015-10-01

    Full Text Available The possible link between radioactivity and the occurrence of neural tube defects has been a long-lasting debate since the Chernobyl nuclear fallout in 1986. A recent report on the incidence of neural tube defects on the west coast of the USA, following the Fukushima disaster, brought further evidence of an effect of radioactive fallout on the occurrence of NTDs. Here a literature review was performed focusing on this special subject.

  17. Neural networks, D0, and the SSC

    International Nuclear Information System (INIS)

    Barter, C.; Cutts, D.; Hoftun, J.S.; Partridge, R.A.; Sornborger, A.T.; Johnson, C.T.; Zeller, R.T.

    1989-01-01

    We outline several exploratory studies involving neural network simulations applied to pattern recognition in high energy physics. We describe the D0 data acquisition system and a natural means by which algorithms derived from neural networks techniques may be incorporated into recently developed hardware associated with the D0 MicroVAX farm nodes. Such applications to the event filtering needed by SSC detectors look interesting. 10 refs., 11 figs

  18. Culture of Mouse Neural Stem Cell Precursors

    OpenAIRE

    Currle, D. Spencer; Hu, Jia Sheng; Kolski-Andreaco, Aaron; Monuki, Edwin S.

    2007-01-01

    Primary neural stem cell cultures are useful for studying the mechanisms underlying central nervous system development. Stem cell research will increase our understanding of the nervous system and may allow us to develop treatments for currently incurable brain diseases and injuries. In addition, stem cell cultures can be used for the detailed study of the mechanisms of neural differentiation and transdifferentiation and the genetic and environmental signals that direct the...

  19. Neural codes of seeing architectural styles

    OpenAIRE

    Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.

    2017-01-01

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests...

  20. Neural network monitoring of resistive welding

    International Nuclear Information System (INIS)

    Quero, J.M.; Millan, R.L.; Franquelo, L.G.; Canas, J.

    1994-01-01

    Supervision of welding processes is one of the most important and complicated tasks in production lines. Artificial Neural Networks have been applied for modeling and control of physical processes. In our paper we propose the use of a neural network classifier for on-line non-destructive testing. This system has been developed and installed in a resistive welding station. Results confirm the validity of this novel approach. (Author) 6 refs

  1. Neural Network Models for Time Series Forecasts

    OpenAIRE

    Tim Hill; Marcus O'Connor; William Remus

    1996-01-01

    Neural networks have been advocated as an alternative to traditional statistical forecasting methods. In the present experiment, time series forecasts produced by neural networks are compared with forecasts from six statistical time series methods generated in a major forecasting competition (Makridakis et al. [Makridakis, S., A. Anderson, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, R. Winkler. 1982. The accuracy of extrapolation (time series) methods: Results of a ...

  2. Neural networks in a management information system

    OpenAIRE

    Jana Weinlichová; Michael Štencl

    2009-01-01

    Management Information Systems and Business Intelligence are used to maintain an overview of all the data that are used, analyzed, and evaluated, and to predict future incidents. Where standard data-processing methods cannot be applied, Artificial Intelligence can be applied with benefit. This article refers to the proven abilities of Neural Networks. Neural Networks are supported by many software products designed to provide an effective solution of ma...

  3. Using neural networks in software repositories

    Science.gov (United States)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  4. Application of neural networks in CRM systems

    Directory of Open Access Journals (Sweden)

    Bojanowska Agnieszka

    2017-01-01

    Full Text Available The central aim of this study is to investigate how to apply artificial neural networks in Customer Relationship Management (CRM). The paper presents several business applications of neural networks in software systems designed to aid CRM, e.g. in deciding on the profitability of building a relationship with a given customer. Furthermore, a framework for a neural-network based CRM software tool is developed. Building beneficial relationships with customers is generating considerable interest among various businesses, and is often mentioned as one of the crucial objectives of enterprises, next to their key aim: to bring satisfactory profit. There is a growing tendency among businesses to invest in CRM systems, which together with an organisational culture of a company aid managing customer relationships. It is the sheer amount of gathered data as well as the need for constant updating and analysis of this breadth of information that may imply the suitability of neural networks for the application in question. Neural networks exhibit considerably higher computational capabilities than sequential calculations because the solution to a problem is obtained without the need for developing a special algorithm. In the majority of presented CRM applications neural networks are presented as a managerial decision-taking optimisation tool.

  5. Recent Advances in Neural Recording Microsystems

    Directory of Open Access Journals (Sweden)

    Benoit Gosselin

    2011-04-01

    Full Text Available The accelerating pace of research in neuroscience has created a considerable demand for neural interfacing microsystems capable of monitoring the activity of large groups of neurons. These emerging tools have revealed a tremendous potential for the advancement of knowledge in brain research and for the development of useful clinical applications. They can extract the relevant control signals directly from the brain, enabling individuals with severe disabilities to communicate their intentions to other devices, like computers or various prostheses. Such microsystems are self-contained devices composed of a neural probe attached to an integrated circuit for extracting neural signals from multiple channels, and transferring the data outside the body. The greatest challenge facing development of such emerging devices into viable clinical systems involves addressing their small form factor and low-power consumption constraints, while providing superior resolution. In this paper, we survey the recent progress in the design and the implementation of multi-channel neural recording microsystems, with particular emphasis on the design of recording and telemetry electronics. An overview of the numerous neural signal modalities is given and the existing microsystem topologies are covered. We present energy-efficient sensory circuits to retrieve weak signals from neural probes and we compare them. We cover data management and smart power scheduling approaches, and we review advances in low-power telemetry. Finally, we conclude by summarizing the remaining challenges and by highlighting the emerging trends in the field.

  6. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    Generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, generalized classifier neural network suffers from a convergence problem and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed to reach the minima. The proposed method is tested on 15 different data sets and the performance of logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operation range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative have continuous values. This makes it possible to adopt the advantage of logarithmic fast convergence by the proposed learning method. Due to the fast convergence ability of the logarithmic cost function, training time is decreased by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered as an efficient way of reducing the time requirement problem of generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
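
    The claimed benefit of a logarithmic cost is faster convergence, because its gradient grows faster than the squared-error gradient when the error is large. A one-parameter sketch of this effect follows; the exact cost used in the paper may differ, and -ln(1 - e^2) is an assumed illustrative form.

```python
def iterations_to_converge(grad, e0=0.9, lr=0.05, tol=1e-3, max_it=100000):
    """Count gradient-descent steps until the error magnitude drops below tol."""
    e, it = e0, 0
    while abs(e) > tol and it < max_it:
        e -= lr * grad(e)
        it += 1
    return it

grad_squared = lambda e: 2.0 * e                  # d/de of e^2
grad_log = lambda e: 2.0 * e / (1.0 - e * e)      # d/de of -ln(1 - e^2), |e| < 1

it_sq = iterations_to_converge(grad_squared)
it_log = iterations_to_converge(grad_log)
# The logarithmic cost takes fewer iterations: its gradient is amplified by
# 1 / (1 - e^2) for large errors, while both costs behave alike near e = 0.
```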

  7. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
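
The modularity index mentioned for training assessment can be illustrated with the standard Newman definition over an adjacency matrix of unit-to-unit connections (the paper's exact index may differ):

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity Q for an undirected, unweighted adjacency matrix.
    `communities` assigns a community label to each unit; higher Q means
    the detected clusters capture more within-community structure."""
    A = np.asarray(adj, dtype=float)
    k = A.sum(axis=1)        # degrees
    two_m = A.sum()          # 2m for an undirected graph
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two clearly separated triangles score high modularity.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
q = modularity(A, [0, 0, 0, 1, 1, 1])
```

A grouping that matches the network's cluster structure yields a clearly positive Q, while an arbitrary grouping of the same units scores lower, which is what makes Q usable for assessing a trained result.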

  8. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
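
The normalized Laplacian spectrum examined in this study can be computed directly from an adjacency matrix; a minimal numpy sketch (the ring graph below is only a toy example, not one of the studied networks):

```python
import numpy as np

def normalized_laplacian_spectrum(A):
    """Eigenvalue spectrum of the normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}. The spectrum lies in [0, 2] and
    describes the network at a systems level, without reference to
    individual nodes or connections."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)
    inv_sqrt = np.divide(1.0, np.sqrt(d), out=np.zeros_like(d), where=d > 0)
    L = np.eye(len(A)) - inv_sqrt[:, None] * A * inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))

# A 4-node ring: connected graphs have smallest eigenvalue 0, and this
# bipartite ring attains the maximum eigenvalue 2.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
spec = normalized_laplacian_spectrum(ring)
```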

  9. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. The aim was to determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist, using a local diabetic retinal screening programme and an international database. A retrospective audit was performed on diabetic retinal photos from the Otago database photographed during October 2016 (485 photos) and on 1200 photos from the Messidor international database. A receiver operating characteristic curve was used to illustrate the ability of the deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea); the outcome measures were area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
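
The reported metrics can be reproduced from raw classifier scores; a minimal sketch of AUC via the rank (Mann-Whitney) formulation plus sensitivity and specificity at a threshold (not the study's code, and the toy scores below are made up):

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case outranks a randomly chosen negative case (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]
    return float(((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg)))

def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity and specificity at a fixed decision threshold."""
    labels = np.asarray(labels, dtype=bool)
    pred = np.asarray(scores, dtype=float) >= threshold
    sensitivity = (pred & labels).sum() / labels.sum()
    specificity = (~pred & ~labels).sum() / (~labels).sum()
    return sensitivity, specificity

# Perfectly separated scores give AUC = 1.0.
auc = roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```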

  10. Preserving information in neural transmission.

    Science.gov (United States)

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.

  11. Neural correlates of pediatric obesity.

    Science.gov (United States)

    Bruce, Amanda S; Martin, Laura E; Savage, Cary R

    2011-06-01

    Childhood obesity rates have increased over the last 40 years and have a detrimental impact on public health. While the causes of the obesity epidemic are complex, obesity ultimately arises from chronic imbalances between energy intake and expenditure. An emerging area of research in obesity has focused on the role of the brain in evaluating the rewarding properties of food and making decisions about what and how much to eat. This article reviews recent scientific literature regarding the brain's role in pediatric food motivation and childhood obesity. The article will begin by reviewing some of the recent literature discussing challenges associated with neuroimaging in children and the relevant developmental brain changes that occur in childhood and adolescence. The article will then review studies regarding neural mechanisms of food motivation and the ability to delay gratification in children and how these responses differ in obese compared to healthy weight children. Increasing our understanding about how brain function and behavior may differ in children will inform future research, obesity prevention, and interventions targeting childhood obesity. Copyright © 2011. Published by Elsevier Inc.

  12. Neural correlates of rhythmic expectancy

    Directory of Open Access Journals (Sweden)

    Theodore P. Zanto

    2006-01-01

    Full Text Available Temporal expectancy is thought to play a fundamental role in the perception of rhythm. This review summarizes recent studies that investigated rhythmic expectancy by recording neuroelectric activity with high temporal resolution during the presentation of rhythmic patterns. Prior event-related brain potential (ERP) studies have uncovered auditory evoked responses that reflect detection of onsets, offsets, sustains, and abrupt changes in acoustic properties such as frequency, intensity, and spectrum, in addition to indexing higher-order processes such as auditory sensory memory and the violation of expectancy. In our studies of rhythmic expectancy, we measured emitted responses - a type of ERP that occurs when an expected event is omitted from a regular series of stimulus events - in simple rhythms with temporal structures typical of music. Our observations suggest that middle-latency gamma-band (20-60 Hz) activity (GBA) plays an essential role in auditory rhythm processing. Evoked (phase-locked) GBA occurs in the presence of physically presented auditory events and reflects the degree of accent. Induced (non-phase-locked) GBA reflects temporally precise expectancies for strongly and weakly accented events in sound patterns. Thus far, these findings support theories of rhythm perception that posit temporal expectancies generated by active neural processes.

  13. Echoes in correlated neural systems

    International Nuclear Information System (INIS)

    Helias, M; Tetzlaff, T; Diesmann, M

    2013-01-01

    Correlations are employed in modern physics to explain microscopic and macroscopic phenomena, like the fractional quantum Hall effect and the Mott insulator state in high temperature superconductors and ultracold atoms. Simultaneously probed neurons in the intact brain reveal correlations between their activity, an important measure to study information processing in the brain that also influences the macroscopic signals of neural activity, like the electroencephalogram (EEG). Networks of spiking neurons differ from most physical systems: the interaction between elements is directed, time delayed, mediated by short pulses and each neuron receives events from thousands of neurons. Even the stationary state of the network cannot be described by equilibrium statistical mechanics. Here we develop a quantitative theory of pairwise correlations in finite-sized random networks of spiking neurons. We derive explicit analytic expressions for the population-averaged cross correlation functions. Our theory explains why the intuitive mean field description fails, how the echo of single action potentials causes an apparent lag of inhibition with respect to excitation and how the size of the network can be scaled while maintaining its dynamical state. Finally, we derive a new criterion for the emergence of collective oscillations from the spectrum of the time-evolution propagator. (paper)

  14. Harnessing migraines for neural regeneration

    Directory of Open Access Journals (Sweden)

    Jonathan M Borkum

    2018-01-01

    Full Text Available The success of naturalistic or therapeutic neuroregeneration likely depends on an internal milieu that facilitates the survival, proliferation, migration, and differentiation of stem cells and their assimilation into neural networks. Migraine attacks are an integrated sequence of physiological processes that may protect the brain from oxidative stress by releasing growth factors, suppressing apoptosis, stimulating neurogenesis, encouraging mitochondrial biogenesis, reducing the production of oxidants, and upregulating antioxidant defenses. Thus, the migraine attack may constitute a physiologic environment conducive to stem cells. In this paper, key components of migraine are reviewed – neurogenic inflammation with release of calcitonin gene-related peptide (CGRP) and substance P, plasma protein extravasation, platelet activation, release of serotonin by platelets and likely by the dorsal raphe nucleus, activation of endothelial nitric oxide synthase (eNOS), production of brain-derived neurotrophic factor (BDNF) and, in migraine aura, cortical spreading depression – along with their potential neurorestorative aspects. The possibility is considered of using these components to facilitate successful stem cell transplantation. Potential methods for doing so are discussed, including chemical stimulation of the TRPA1 ion channel, conjoint activation of a subset of migraine components, invasive and noninvasive deep brain stimulation of the dorsal raphe nucleus, transcranial focused ultrasound, and stimulation of the Zusanli (ST36) acupuncture point.

  15. Differentiation between non-neural and neural contributors to ankle joint stiffness in cerebral palsy

    NARCIS (Netherlands)

    De Gooijer-van de Groep, K.L.; De Vlugt, E.; De Groot, J.H.; Van der Heijden-Maessen, H.C.M.; Wielheesen, D.H.M.; Van Wijlen-Hempel, R.M.S.; Arendzen, J.H.; Meskers, C.G.M.

    2013-01-01

    Background Spastic paresis in cerebral palsy (CP) is characterized by increased joint stiffness that may be of neural origin, i.e. improper muscle activation caused by e.g. hyperreflexia, or of non-neural origin, i.e. altered tissue viscoelastic properties (clinically: “spasticity” vs. “contracture”).

  16. Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); S.M. Bohte (Sander)

    2016-01-01

    Biological neurons communicate with a sparing exchange of pulses - spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using only so very few spikes to communicate. Building on

  17. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
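
As a rough illustration of fully vectorized peak detection in the spirit of the toolbox's parallel peak finding (a numpy sketch, not NPE's actual GPU API; the threshold convention is an assumption):

```python
import numpy as np

def detect_peaks(x, threshold):
    """Return indices of strict local maxima above `threshold`.
    Every candidate sample is tested against its two neighbors at once,
    which maps naturally onto data-parallel hardware."""
    x = np.asarray(x, dtype=float)
    mid = x[1:-1]
    is_peak = (mid > x[:-2]) & (mid >= x[2:]) & (mid > threshold)
    return np.flatnonzero(is_peak) + 1  # +1 restores original indexing
```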

  18. Implantable neurotechnologies: a review of integrated circuit neural amplifiers.

    Science.gov (United States)

    Ng, Kian Ann; Greenwald, Elliot; Xu, Yong Ping; Thakor, Nitish V

    2016-01-01

    Neural signal recording is critical in modern day neuroscience research and emerging neural prosthesis programs. Neural recording requires the use of precise, low-noise amplifier systems to acquire and condition the weak neural signals that are transduced through electrode interfaces. Neural amplifiers and amplifier-based systems are available commercially or can be designed in-house and fabricated using integrated circuit (IC) technologies, resulting in very large-scale integration or application-specific integrated circuit solutions. IC-based neural amplifiers are now used to acquire untethered/portable neural recordings, as they meet the requirements of a miniaturized form factor, light weight and low power consumption. Furthermore, such miniaturized and low-power IC neural amplifiers are now being used in emerging implantable neural prosthesis technologies. This review focuses on neural amplifier-based devices and is presented in two interrelated parts. First, neural signal recording is reviewed, and practical challenges are highlighted. Current amplifier designs with increased functionality and performance and without penalties in chip size and power are featured. Second, applications of IC-based neural amplifiers in basic science experiments (e.g., cortical studies using animal models), neural prostheses (e.g., brain/nerve machine interfaces) and treatment of neuronal diseases (e.g., DBS for treatment of epilepsy) are highlighted. The review concludes with future outlooks of this technology and important challenges with regard to neural signal amplification.

  19. Neural Net Safety Monitor Design

    Science.gov (United States)

    Larson, Richard R.

    2007-01-01

    The National Aeronautics and Space Administration (NASA) at the Dryden Flight Research Center (DFRC) has been conducting flight-test research using an F-15 aircraft (figure 1). This aircraft has been specially modified to interface a neural net (NN) controller, as part of a single-string Airborne Research Test System (ARTS) computer, with the existing quad-redundant flight control system (FCC) shown in figure 2. The NN commands are passed to FCC channels 2 and 4 and are cross-channel data linked (CCDL) to the other computers as shown. Numerous types of fault-detection monitors exist in the FCC when the NN mode is engaged; these monitors would cause an automatic disengagement of the NN in the event of a triggering fault. Unfortunately, these monitors still may not prevent a possible NN hard-over command from coming through to the control laws. Therefore, an additional and unique safety monitor was designed for a single-string source that allows authority at maximum actuator rates but protects the pilot and structural loads against excessive g-limits in the case of a NN hard-over command input. This additional monitor resides in the FCCs and is executed before the control laws are computed. This presentation describes a floating limiter (FL) concept [1] that was developed and successfully test-flown for this program (figure 3). The FL computes the rate of change of the NN commands that are input to the FCC from the ARTS. A window is created with upper and lower boundaries, which is constantly floating and trying to stay centered as the NN command rates change. The limiter works by only allowing the window to move at a much slower rate than that of the NN commands. Anywhere within the window, however, full rates are allowed. If a rate persists in one direction, it will eventually hit the boundary and be rate-limited to the floating-limiter rate. When this happens, a persistence counter begins, and after a limit is reached, a NN disengage command is generated.
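
The floating-limiter logic described above can be sketched in a few lines; the window width, drift rate, and persistence limit below are illustrative placeholders, not the flight-test settings:

```python
def floating_limiter(rates, window=1.0, drift=0.1, persist_limit=5):
    """Sketch of the floating-limiter concept: the window re-centers much
    more slowly than the command rates change; rates inside the window
    pass at full authority, rates outside are clipped, and a persistent
    clip generates a disengage. All parameter values are illustrative."""
    center, persist, out = 0.0, 0, []
    for r in rates:
        lo, hi = center - window, center + window
        if lo <= r <= hi:
            out.append(r)        # full rate allowed inside the window
            persist = 0
        else:
            out.append(min(max(r, lo), hi))  # rate-limited at the boundary
            persist += 1
        # the window floats toward the command rate, but only slowly
        center += min(max(r - center, -drift), drift)
        if persist >= persist_limit:
            return out, True     # NN disengage command generated
    return out, False
```

A brief, well-behaved command sequence passes through untouched, while a sustained hard-over rate pins against the slowly drifting boundary until the persistence counter trips the disengage.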

  20. Neural systems for tactual memories.

    Science.gov (United States)

    Bonda, E; Petrides, M; Evans, A

    1996-04-01

    1. The aim of this study was to investigate the neural systems involved in the memory processing of experiences through touch. 2. Regional cerebral blood flow was measured with positron emission tomography by means of the water bolus H2(15)O methodology in human subjects as they performed tasks involving different levels of tactual memory. In one of the experimental tasks, the subjects had to palpate nonsense shapes to match each one to a previously learned set, thus requiring constant reference to long-term memory. The other experimental task involved judgements of the recent recurrence of shapes during the scanning period. A set of three control tasks was used to control for the type of exploratory movements and sensory processing inherent in the two experimental tasks. 3. Comparisons of the distribution of activity between the experimental and the control tasks were carried out by means of the subtraction method. In relation to the control conditions, the two experimental tasks requiring memory resulted in significant changes within the posteroventral insula and the central opercular region. In addition, the task requiring recall from long-term memory yielded changes in the perirhinal cortex. 4. The above findings demonstrated that a ventrally directed parietoinsular pathway, leading to the posteroventral insula and the perirhinal cortex, constitutes a system by which long-lasting representations of tactual experiences are formed. It is proposed that the posteroventral insula is involved in tactual feature analysis, by analogy with the similar role of the inferotemporal cortex in vision, whereas the perirhinal cortex is further involved in the integration of these features into long-lasting representations of somatosensory experiences.

  1. The equilibrium of neural firing: A mathematical theory

    Energy Technology Data Exchange (ETDEWEB)

    Lan, Sizhong, E-mail: lsz@fuyunresearch.org [Fuyun Research, Beijing, 100055 (China)

    2014-12-15

    Inspired by statistical thermodynamics, we presume that the neuron system has an equilibrium condition with respect to neural firing. We show that, even with dynamically changeable neural connections, it is inevitable for neural firing to evolve to equilibrium. To study the dynamics between neural firing and neural connections, we propose an extended communication system where the noisy channel has a tendency towards a fixed point, implying that neural connections are always attracted into fixed points such that equilibrium can be reached. The extended communication system and its mathematics could be useful back in thermodynamics.

  2. Neural substrates of decision-making.

    Science.gov (United States)

    Broche-Pérez, Y; Herrera Jiménez, L F; Omar-Martínez, E

    2016-06-01

    Decision-making is the process of selecting a course of action from among 2 or more alternatives by considering the potential outcomes of selecting each option and estimating its consequences in the short, medium and long term. The prefrontal cortex (PFC) has traditionally been considered the key neural structure in decision-making process. However, new studies support the hypothesis that describes a complex neural network including both cortical and subcortical structures. The aim of this review is to summarise evidence on the anatomical structures underlying the decision-making process, considering new findings that support the existence of a complex neural network that gives rise to this complex neuropsychological process. Current evidence shows that the cortical structures involved in decision-making include the orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), and dorsolateral prefrontal cortex (DLPFC). This process is assisted by subcortical structures including the amygdala, thalamus, and cerebellum. Findings to date show that both cortical and subcortical brain regions contribute to the decision-making process. The neural basis of decision-making is a complex neural network of cortico-cortical and cortico-subcortical connections which includes subareas of the PFC, limbic structures, and the cerebellum. Copyright © 2014 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  3. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate the biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as the "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.

  4. Tuning Neural Phase Entrainment to Speech.

    Science.gov (United States)

    Falk, Simone; Lanzilotti, Cosima; Schön, Daniele

    2017-08-01

    Musical rhythm positively impacts on subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates neural response as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies during speech processing compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
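
Intertrial coherence, the measure tied to memorization success here, has a compact textbook definition (this is the standard formula, not the study's analysis pipeline):

```python
import numpy as np

def intertrial_coherence(phases):
    """Intertrial coherence at a single time-frequency point: the length
    of the mean unit phasor across trials. 1 = phases perfectly locked
    across trials, near 0 = random phases."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases, dtype=float)))))
```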

  5. Race modulates neural activity during imitation

    Science.gov (United States)

    Losin, Elizabeth A. Reynolds; Iacoboni, Marco; Martin, Alia; Cross, Katy A.; Dapretto, Mirella

    2014-01-01

    Imitation plays a central role in the acquisition of culture. People preferentially imitate others who are self-similar, prestigious or successful. Because race can indicate a person's self-similarity or status, race influences whom people imitate. Prior studies of the neural underpinnings of imitation have not considered the effects of race. Here we measured neural activity with fMRI while European American participants imitated meaningless gestures performed by actors of their own race, and two racial outgroups, African American, and Chinese American. Participants also passively observed the actions of these actors and their portraits. Frontal, parietal and occipital areas were differentially activated while participants imitated actors of different races. More activity was present when imitating African Americans than the other racial groups, perhaps reflecting participants' reported lack of experience with and negative attitudes towards this group, or the group's lower perceived social status. This pattern of neural activity was not found when participants passively observed the gestures of the actors or simply looked at their faces. Instead, during face-viewing neural responses were overall greater for own-race individuals, consistent with prior race perception studies not involving imitation. Our findings represent a first step in elucidating neural mechanisms involved in cultural learning, a process that influences almost every aspect of our lives but has thus far received little neuroscientific study. PMID:22062193

  6. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 till the present. The neural network Fortran code used is available for download.
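
A one-hidden-layer regression MLP like the eight-node network described above can be sketched in numpy. The paper trains with Quickprop; this sketch substitutes plain batch gradient descent for brevity, and the sine target is just a stand-in for a smooth tracer-tracer relation:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, epochs=5000, seed=0):
    """Train a one-hidden-layer tanh MLP for regression by plain batch
    gradient descent on squared error (Quickprop swapped out for brevity)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)           # hidden activations
        err = (h @ W2 + b2) - y[:, None]   # output error
        dh = (err @ W2.T) * (1.0 - h**2)   # backpropagated error
        W2 -= lr * h.T @ err / len(X)
        b2 -= lr * err.mean(axis=0)
        W1 -= lr * X.T @ dh / len(X)
        b1 -= lr * dh.mean(axis=0)
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()

# Fit a smooth one-dimensional tracer-style relation.
X = np.linspace(-1.0, 1.0, 64)[:, None]
y = np.sin(2.0 * X).ravel()
predict = train_mlp(X, y)
```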

  7. Weather forecasting based on hybrid neural model

    Science.gov (United States)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-11-01

    Making deductions and expectations about climate has been a challenge throughout mankind's history. Exact meteorological predictions help to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. Current research treats climate as a major challenge for machine information mining and deduction. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting due to the specialty of climate anticipating frameworks. The study concentrates on data representing Saudi Arabia weather forecasting. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative moistness, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for forecast accuracy measurement. Individually, MLP forecasting results are better than those of RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
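
The three accuracy yardsticks named in the abstract can be computed as follows (the scatter-index normalization by the mean observation is a common convention and may differ from the paper's):

```python
import numpy as np

def forecast_metrics(pred, obs):
    """Correlation coefficient, RMSE, and scatter index for a forecast
    against observations."""
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = float(np.corrcoef(pred, obs)[0, 1])
    rmse = float(np.sqrt(np.mean((pred - obs) ** 2)))
    si = rmse / obs.mean()   # RMSE normalized by the mean observation
    return r, rmse, si
```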

  8. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray-level circles, in an attempt to make the neural network learn inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference of the approach from existing ones relies on the fact that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. Considering different window sizes around a pixel simulates the multiscale operation. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.
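
The degradation model described (Gaussian low-pass blur plus noise at a pre-established rate) can be simulated in a few lines; a minimal stand-in, with all parameter values assumed rather than taken from the paper:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian low-pass filtering with edge padding, used here
    to simulate the blur degradation."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

def degrade(img, sigma=1.0, noise_rate=0.05, seed=None):
    """Blur plus noise at a pre-established rate: a fraction of pixels is
    replaced by random gray levels in [0, 1)."""
    rng = np.random.default_rng(seed)
    out = gaussian_blur(np.asarray(img, dtype=float), sigma)
    mask = rng.random(out.shape) < noise_rate
    out[mask] = rng.random(int(mask.sum()))
    return out
```

Degraded/clean pairs produced this way would serve as the input/output pairs for the supervised training step the abstract describes.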

  9. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The paper describes the methods used for prediction of financial data as well as the forecasting system developed around a neural network. The architecture of a neural network using four different technical indicators, which are based on the raw data, together with the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the backpropagation-of-error algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, which makes it flexible and more precise. The proposed neural network system is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  10. Neural codes of seeing architectural styles.

    Science.gov (United States)

    Choo, Heeyoung; Nasar, Jack L; Nikrahei, Bardia; Walther, Dirk B

    2017-01-10

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.

  11. Inverting radiometric measurements with a neural network

    Science.gov (United States)

    Measure, Edward M.; Yee, Young P.; Balding, Jeff M.; Watkins, Wendell R.

    1992-02-01

    A neural network scheme for retrieving remotely sensed vertical temperature profiles was applied to observed ground-based radiometer measurements. The neural network used microwave radiance measurements and surface measurements of temperature and pressure as inputs. Because the microwave radiometer is capable of measuring 4 oxygen channels at 5 different elevation angles (9, 15, 25, 40, and 90 degs), 20 microwave measurements are potentially available. Because these measurements have considerable redundancy, experiments were conducted with a neural network accepting as inputs the microwave measurements taken at 53.88 GHz, 40 deg; 57.45 GHz, 40 deg; and 57.45 GHz, 90 deg. The primary test site was located at White Sands Missile Range (WSMR), NM. Results are compared with measurements made simultaneously with balloon-borne radiosonde instruments and with radiometric temperature retrievals made using more conventional retrieval algorithms. The neural network was trained using a Widrow-Hoff delta rule procedure. Functions of date, to include seasonal dependence in the retrieval process, and functions of time, to include diurnal effects, were used as inputs to the neural network.
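
    The Widrow-Hoff delta rule named above is the classical least-mean-squares update for a linear unit. A minimal sketch follows; the learning rate, epoch count, and toy data are arbitrary illustrative choices, not the retrieval setup of the paper.

```python
def lms_train(samples, targets, lr=0.05, epochs=1000):
    """Widrow-Hoff (LMS) delta rule for a linear unit:
    w <- w + lr * (target - output) * input, with a bias term."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = t - y                      # instantaneous error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b
```

    On linearly realizable data the weights converge to the least-squares solution, which is why the rule suits calibration-style regression tasks such as temperature retrieval.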

  12. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
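
    A heavily simplified kernel-based TD(0) sketch in the spirit of KTD: the value function is a kernel expansion over visited states, and each transition adds a kernel centre weighted by the TD error. The Gaussian kernel width, learning rate, and toy two-state chain are assumptions for illustration; the actual KTD(λ) algorithm, with eligibility traces and dictionary sparsification, is richer than this.

```python
import math

def gauss_k(x, y, sigma=0.5):
    # Gaussian (strictly positive definite) kernel on scalar states
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

class KernelTD:
    """Kernel TD(0): V(x) = sum_i alpha_i * k(c_i, x); each observed
    transition appends a centre weighted by lr * (TD error)."""
    def __init__(self, lr=0.2, gamma=0.9):
        self.centers, self.alphas = [], []
        self.lr, self.gamma = lr, gamma

    def value(self, x):
        return sum(a * gauss_k(c, x) for c, a in zip(self.centers, self.alphas))

    def update(self, x, r, x_next, terminal=False):
        v_next = 0.0 if terminal else self.value(x_next)
        delta = r + self.gamma * v_next - self.value(x)   # TD error
        self.centers.append(x)
        self.alphas.append(self.lr * delta)
        return delta
```

    On a chain 0 -> 1 -> terminal with a single reward of 1 on the final step, the learned values approach V(1) = 1 and V(0) = 0.9.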

  13. Efficient Cancer Detection Using Multiple Neural Networks.

    Science.gov (United States)

    Shell, John; Gregory, William D

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNN). A data set consisting of 180 malignant and 180 benign breast tissue data files in an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, was utilized as neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results, with a consistent sensitivity of 100% and a specificity of 100%. This implementation successfully relied solely on statistical variation between the benign and malignant impedance data and an intricate neural network configuration. This device and BNN implementation provide a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast, non-invasive, accurate assessment of biopsied or sectioned excised tissue in various clinical settings.

  14. Estimation of neural energy in microelectrode signals

    Science.gov (United States)

    Gaumond, R. P.; Clement, R.; Silva, R.; Sander, D.

    2004-09-01

    We considered the problem of determining the neural contribution to the signal recorded by an intracortical electrode. We developed a linear least-squares approach to determine the energy fraction of a signal attributable to an arbitrary number of autocorrelation-defined signals buried in noise. Application of the method requires estimation of autocorrelation functions R_ap(τ) characterizing the action potential (AP) waveforms and R_n(τ) characterizing background noise. This method was applied to the analysis of chronically implanted microelectrode signals from motor cortex of rat. We found that neural (AP) energy consisted of a large-signal component which grows linearly with the number of threshold-detected neural events and a small-signal component unrelated to the count of threshold-detected AP signals. The addition of pseudorandom noise to electrode signals demonstrated the algorithm's effectiveness for a wide range of noise-to-signal energy ratios (0.08 to 39). We suggest, therefore, that the method could be of use in providing a measure of neural response in situations where clearly identified spike waveforms cannot be isolated, or in providing an additional 'background' measure of microelectrode neural activity to supplement the traditional AP spike count.
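
    The linear least-squares split described above can be sketched for the two-component case: model the measured autocorrelation as a weighted sum of the AP and noise autocorrelations and solve the 2x2 normal equations for the weights. This closed form is a generic least-squares fit under that assumed model, not the paper's exact estimator.

```python
def energy_fractions(R_sig, R_ap, R_n):
    """Least-squares split of a measured autocorrelation into AP and noise
    components: R_sig ≈ a*R_ap + b*R_n, where a and b act as energy weights.
    Solves the 2x2 normal equations directly."""
    saa = sum(x * x for x in R_ap)
    sbb = sum(x * x for x in R_n)
    sab = sum(x * y for x, y in zip(R_ap, R_n))
    sya = sum(x * y for x, y in zip(R_sig, R_ap))
    syb = sum(x * y for x, y in zip(R_sig, R_n))
    det = saa * sbb - sab * sab          # assumes R_ap, R_n not collinear
    a = (sya * sbb - syb * sab) / det
    b = (syb * saa - sya * sab) / det
    return a, b
```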

  15. Neural network models of categorical perception.

    Science.gov (United States)

    Damper, R I; Harnad, S R

    2000-05-01

    Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. In 1977, Macmillan, Kaplan, and Creelman introduced the use of signal detection theory to CP studies. Anderson and colleagues simultaneously proposed the first neural model for CP, yet this line of research has been less well explored. In this paper, we assess the ability of neural-network models of CP to predict the psychophysical performance of real observers with speech sounds and artificial/novel stimuli. We show that a variety of neural mechanisms are capable of generating the characteristics of CP. Hence, CP may not be a special mode of perception but an emergent property of any sufficiently powerful general learning system.

  16. Modelling collective cell migration of neural crest.

    Science.gov (United States)

    Szabó, András; Mayor, Roberto

    2016-10-01

    Collective cell migration has emerged in the recent decade as an important phenomenon in cell and developmental biology and can be defined as the coordinated and cooperative movement of groups of cells. Most studies concentrate on tightly connected epithelial tissues, even though collective migration does not require a constant physical contact. Movement of mesenchymal cells is more independent, making their emergent collective behaviour less intuitive and therefore lending importance to computational modelling. Here we focus on such modelling efforts that aim to understand the collective migration of neural crest cells, a mesenchymal embryonic population that migrates large distances as a group during early vertebrate development. By comparing different models of neural crest migration, we emphasize the similarity and complementary nature of these approaches and suggest a future direction for the field. The principles derived from neural crest modelling could aid understanding the collective migration of other mesenchymal cell types. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. The neural basis of financial risk taking.

    Science.gov (United States)

    Kuhnen, Camelia M; Knutson, Brian

    2005-09-01

    Investors systematically deviate from rationality when making financial decisions, yet the mechanisms responsible for these deviations have not been identified. Using event-related fMRI, we examined whether anticipatory neural activity would predict optimal and suboptimal choices in a financial decision-making task. We characterized two types of deviations from the optimal investment strategy of a rational risk-neutral agent as risk-seeking mistakes and risk-aversion mistakes. Nucleus accumbens activation preceded risky choices as well as risk-seeking mistakes, while anterior insula activation preceded riskless choices as well as risk-aversion mistakes. These findings suggest that distinct neural circuits linked to anticipatory affect promote different types of financial choices and indicate that excessive activation of these circuits may lead to investing mistakes. Thus, consideration of anticipatory neural mechanisms may add predictive power to the rational actor model of economic decision making.

  18. Deep Neural Network Detects Quantum Phase Transition

    Science.gov (United States)

    Arai, Shunta; Ohzeki, Masayuki; Tanaka, Kazuyuki

    2018-03-01

    We detect the quantum phase transition of a quantum many-body system by mapping the observed results of the quantum state onto a neural network. In the present study, we utilized the simplest case of a quantum many-body system, namely a one-dimensional chain of Ising spins with the transverse Ising model. We prepared several spin configurations, which were obtained using repeated observations of the model for a particular strength of the transverse field, as input data for the neural network. Although the proposed method can be employed using experimental observations of quantum many-body systems, we tested our technique with spin configurations generated by a quantum Monte Carlo simulation without initial relaxation. The neural network successfully identified the strength of the transverse field solely from the spin configurations, leading to consistent estimates of the critical point of our model, Γc = J.

  19. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
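
    A much-simplified relative of such matrix-valued dynamic equations is the gradient flow dV/dt = -gamma * A^T (A V - I), whose equilibrium under a zero initial state is the Moore-Penrose inverse when A has full column rank. The Euler-integrated sketch below illustrates the idea only; the paper's networks for general outer inverses with prescribed range and null space use different dynamics, and the step size and gain here are illustrative.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def pinv_flow(A, gamma=1.0, dt=0.01, steps=2000):
    """Euler integration of dV/dt = -gamma * A^T (A V - I) from V = 0.
    For full-column-rank A the fixed point A^T A V = A^T gives the
    Moore-Penrose inverse V = (A^T A)^{-1} A^T."""
    m = len(A)
    n = len(A[0])
    V = [[0.0] * m for _ in range(n)]                     # zero initial state
    I = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    At = transpose(A)
    for _ in range(steps):
        E = [[x - y for x, y in zip(r1, r2)] for r1, r2 in zip(matmul(A, V), I)]
        G = matmul(At, E)                                 # gradient direction
        V = [[v - gamma * dt * g for v, g in zip(rv, rg)] for rv, rg in zip(V, G)]
    return V
```

    Stability of the explicit Euler step requires dt * gamma below 2 over the largest eigenvalue of A^T A, mirroring the spectrum condition the paper discusses for its first network.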

  20. Deciphering Neural Codes of Memory during Sleep

    Science.gov (United States)

    Chen, Zhe; Wilson, Matthew A.

    2017-01-01

    Memories of experiences are stored in the cerebral cortex. Sleep is critical for consolidating hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms of memory consolidation, and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here, we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches for analyzing sleep-associated neural codes (SANC). We focus on two analysis paradigms for sleep-associated memory, and propose a new unsupervised learning framework (“memory first, meaning later”) for unbiased assessment of SANC. PMID:28390699

  1. Electrospun Nanofibrous Materials for Neural Tissue Engineering

    Directory of Open Access Journals (Sweden)

    Yee-Shuan Lee

    2011-02-01

    Full Text Available The use of biomaterials processed by the electrospinning technique has gained considerable interest for neural tissue engineering applications. The tissue engineering strategy is to facilitate the regrowth of nerves by combining an appropriate cell type with the electrospun scaffold. Electrospinning can generate fibrous meshes with fiber diameters at the nanoscale, and these fibers can be nonwoven or oriented to facilitate neurite extension via contact guidance. This article reviews studies evaluating the effect of the scaffold’s architectural features, such as fiber diameter and orientation, on neural cell function and neurite extension. Electrospun meshes made of natural polymers, proteins, and electrically active compositions designed to enhance neural cell function are also discussed.

  2. Dysfunction of Rapid Neural Adaptation in Dyslexia.

    Science.gov (United States)

    Perrachione, Tyler K; Del Tufo, Stephanie N; Winter, Rebecca; Murtagh, Jack; Cyr, Abigail; Chang, Patricia; Halverson, Kelly; Ghosh, Satrajit S; Christodoulou, Joanna A; Gabrieli, John D E

    2016-12-21

    Identification of specific neurophysiological dysfunctions resulting in selective reading difficulty (dyslexia) has remained elusive. In addition to impaired reading development, individuals with dyslexia frequently exhibit behavioral deficits in perceptual adaptation. Here, we assessed neurophysiological adaptation to stimulus repetition in adults and children with dyslexia for a wide variety of stimuli, spoken words, written words, visual objects, and faces. For every stimulus type, individuals with dyslexia exhibited significantly diminished neural adaptation compared to controls in stimulus-specific cortical areas. Better reading skills in adults and children with dyslexia were associated with greater repetition-induced neural adaptation. These results highlight a dysfunction of rapid neural adaptation as a core neurophysiological difference in dyslexia that may underlie impaired reading development. Reduced neurophysiological adaptation may relate to prior reports of reduced behavioral adaptation in dyslexia and may reveal a difference in brain functions that ultimately results in a specific reading impairment. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Open quantum generalisation of Hopfield neural networks

    Science.gov (United States)

    Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.

    2018-03-01

    We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.
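
    For reference, the classical Hopfield model that the paper generalises stores patterns in Hebbian weights and retrieves them by asynchronous sign updates. The minimal sketch below shows only this classical baseline; the open quantum generalisation itself is not reproduced.

```python
def hopfield_store(patterns):
    # Hebbian learning: W[i][j] = sum_p p_i * p_j / N, zero diagonal
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def hopfield_recall(W, state, steps=20):
    # asynchronous deterministic updates: s_i <- sign(sum_j W[i][j] s_j)
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s
```

    Starting from a corrupted version of a stored pattern, the dynamics descend an energy function and settle into the nearest stored pattern, which is the associative-memory behaviour the quantum model extends.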

  4. Neural Correlates of Boredom in Music Perception

    Directory of Open Access Journals (Sweden)

    Ashkan Fakhr Tabatabaie

    2014-11-01

    Full Text Available Introduction: Music can elicit powerful emotional responses, the neural correlates of which have not been properly understood. An important aspect about the quality of any musical piece is its ability to elicit a sense of excitement in the listeners. In this study, we investigated the neural correlates of boredom evoked by music in human subjects. Methods: We used EEG recording in nine subjects while they were listening to total number of 10 short-length (83 sec musical pieces with various boredom indices. Subjects evaluated boringness of musical pieces while their EEG was recording. Results: Using short time Fourier analysis, we found that beta2 rhythm was (16-20 Hz significantly lower whenever the subjects rated the music as boring in comparison to nonboring. Discussion: The results demonstrate that the music modulates neural activity of various partsof the brain and can be measured using EEG.

  5. Normalization as a canonical neural computation

    Science.gov (United States)

    Carandini, Matteo; Heeger, David J.

    2012-01-01

    There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation. PMID:22108672
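
    The standard divisive-normalization equation can be stated directly: each response is a rectified drive raised to an exponent, divided by a semi-saturation constant plus the pooled activity of the population. The exponent, constant, and gain below are generic placeholders rather than values fitted to any particular brain region.

```python
def divisive_normalization(drives, sigma=1.0, n=2.0, gain=1.0):
    """Canonical divisive normalization:
    R_i = gain * d_i^n / (sigma^n + sum_j d_j^n),
    where the denominator pools the activity of the whole population."""
    pool = sum(d ** n for d in drives)
    return [gain * d ** n / (sigma ** n + pool) for d in drives]
```

    A useful property visible even in this sketch is that scaling all drives up compresses every response toward saturation while preserving their relative ordering.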

  6. Reconstruction of neutron spectra through neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2003-01-01

    A neural network has been used to reconstruct neutron spectra from the counting rates of the detectors of a Bonner sphere spectrometric system. A group of 56 neutron spectra was selected and the counting rates they would produce in a Bonner sphere system were calculated; the neural network was trained with these rates and the corresponding spectra. To test the performance of the network, 12 spectra were used: 6 taken from the training set, 3 obtained from mathematical functions, and 3 corresponding to real spectra. Comparing the original spectra with those reconstructed by the network, we find that the network performs poorly when reconstructing monoenergetic spectra, which we attribute to the characteristics of the spectra used for training; for the other groups of spectra, however, the results of the network agree with expectations. (Author)

  7. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  8. Eddy Current Flaw Characterization Using Neural Networks

    International Nuclear Information System (INIS)

    Song, S. J.; Park, H. J.; Shin, Y. K.

    1998-01-01

    Determination of the location, shape and size of a flaw from its eddy current testing signal is one of the fundamental issues in eddy current nondestructive evaluation of steam generator tubes. Here, we propose an approach to this problem: inversion of eddy current flaw signals using neural networks trained on finite element model-based synthetic signatures. A total of 216 eddy current signals from four different types of axisymmetric flaws in tubes are generated by finite element models whose accuracy is experimentally validated. From each simulated signature, a total of 24 eddy current features are extracted, and among them 13 features are finally selected for flaw characterization. Based on these features, probabilistic neural networks discriminate flaws into four different types according to location and shape, and backpropagation neural networks subsequently determine the size parameters of the discriminated flaw.

  9. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

    This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved time of day, day of year, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality when compared with persistence, judged on a specific performance measure: hit and false-alarm rates.
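
    The persistence benchmark and the hit/false-alarm performance measure used above are easy to state concretely. In the small sketch below, the lead time and the binary event series are placeholders; the paper's actual thresholds and scoring details are not reproduced.

```python
def persistence_forecast(series, lead):
    # persistence baseline: the forecast for time t+lead is the value at t
    return series[:-lead] if lead else list(series)

def hit_false_alarm(obs_events, pred_events):
    """Hit rate and false-alarm rate for binary event forecasts
    (events are truthy/falsy values, pairwise aligned)."""
    hits = sum(1 for o, p in zip(obs_events, pred_events) if o and p)
    misses = sum(1 for o, p in zip(obs_events, pred_events) if o and not p)
    fas = sum(1 for o, p in zip(obs_events, pred_events) if not o and p)
    cns = sum(1 for o, p in zip(obs_events, pred_events) if not o and not p)
    hit_rate = hits / (hits + misses) if hits + misses else 0.0
    fa_rate = fas / (fas + cns) if fas + cns else 0.0
    return hit_rate, fa_rate
```

    A classifier beats persistence when it achieves a higher hit rate at a comparable or lower false-alarm rate over the same verification period.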

  10. Metabolic neural mapping in neonatal rats

    International Nuclear Information System (INIS)

    DiRocco, R.J.; Hall, W.G.

    1981-01-01

    Functional neural mapping by 14C-deoxyglucose autoradiography in adult rats has shown that increases in neural metabolic rate that are coupled to increased neurophysiological activity are more evident in axon terminals and dendrites than in neuron cell bodies. Regions containing architectonically well-defined concentrations of terminals and dendrites (neuropil) have high metabolic rates when the neuropil is physiologically active. In neonatal rats, however, we find that regions containing well-defined groupings of neuron cell bodies have high metabolic rates in 14C-deoxyglucose autoradiograms. The striking difference between the morphological appearance of 14C-deoxyglucose autoradiograms obtained from neonatal and adult rats is probably related to developmental changes in morphometric features of differentiating neurons, as well as associated changes in the type and locus of neural work performed.

  11. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. File list: ALL.Neu.05.AllAg.Induced_neural_progenitors [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available ALL.Neu.05.AllAg.Induced_neural_progenitors mm9 All antigens Neural Induced neural progeni....biosciencedbc.jp/kyushu-u/mm9/assembled/ALL.Neu.05.AllAg.Induced_neural_progenitors.bed ...

  13. File list: ALL.Neu.10.AllAg.Induced_neural_progenitors [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available ALL.Neu.10.AllAg.Induced_neural_progenitors mm9 All antigens Neural Induced neural progeni....biosciencedbc.jp/kyushu-u/mm9/assembled/ALL.Neu.10.AllAg.Induced_neural_progenitors.bed ...

  14. Neural complexity: A graph theoretic interpretation

    Science.gov (United States)

    Barnett, L.; Buckley, C. L.; Bullock, S.

    2011-04-01

    One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end, Tononi [Proc. Natl. Acad. Sci. USA 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property based on mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information theoretic metrics developed to measure the balance between the segregation and integration of a system’s dynamics. One key question arising for such measures involves understanding how they are influenced by network topology. Sporns [Cereb. Cortex 10, 127 (2000)] employed numerical models in order to determine the dependence of neural complexity on the topological features of a network. However, a complete picture has yet to be established. While De Lucia [Phys. Rev. E 71, 016114 (2005)] made the first attempts at an analytical account of this relationship, their work utilized a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular, we explicitly establish a dependency of neural complexity on cyclic graph motifs.
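
    The underlying complexity measure can be sketched for a Gaussian system, where subset entropies follow directly from covariance determinants. The covariance matrices in the test are toy examples, and this is the original entropy-based formulation (average subset entropy minus the proportional share of the total entropy) rather than the graph-motif approximation derived in the paper.

```python
import itertools
import math

def _det(M):
    # Laplace expansion; adequate for the tiny matrices used here
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               _det([row[:j] + row[j + 1:] for row in M[1:]]) for j in range(n))

def gauss_entropy(cov, idx):
    # differential entropy of a Gaussian marginal over the variables in idx
    sub = [[cov[i][j] for j in idx] for i in idx]
    k = len(idx)
    return 0.5 * math.log(((2 * math.pi * math.e) ** k) * _det(sub))

def neural_complexity(cov):
    """Tononi-Sporns-Edelman neural complexity for a Gaussian system:
    C_N = sum over subset sizes k of [ <H(subset of size k)> - (k/n) H(X) ]."""
    n = len(cov)
    total = gauss_entropy(cov, tuple(range(n)))
    c = 0.0
    for k in range(1, n):
        subsets = list(itertools.combinations(range(n), k))
        avg = sum(gauss_entropy(cov, s) for s in subsets) / len(subsets)
        c += avg - (k / n) * total
    return c
```

    An independent system (diagonal covariance) scores zero, while positive pairwise correlations yield a strictly positive complexity, reflecting coexisting integration and segregation.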

  15. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. In contrast to these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. This study was therefore carried out with the objective of presenting artificial neural networks as auxiliary tools in cotton breeding for improved fiber quality. To demonstrate the applicability of this approach, the research used evaluation data from 40 genotypes. To classify the genotypes by fiber quality, artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests with respect to fiber length, length uniformity, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. The quality index was estimated as a weighted average of the scores (1 to 5) assigned to each HVI characteristic according to industry standards. The artificial neural networks showed a high capacity for correctly classifying the 20 selected genotypes on the basis of the fiber quality index: using fiber length together with the short fiber index, maturity index, and micronaire index gave better results than using fiber length alone or the previous associations. It was also observed that submitting mean data of new genotypes to networks trained with replicate data yields better genotype classification. These results indicate that artificial neural networks have great potential for use at the different stages of a cotton genetic improvement program aimed at improving the fiber quality of future cultivars.

  16. The Effects of GABAergic Polarity Changes on Episodic Neural Network Activity in Developing Neural Systems

    Directory of Open Access Journals (Sweden)

    Wilfredo Blanco

    2017-09-01

    Full Text Available Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these and other changes has on network behavior is hard to determine from experimental studies, since they all happen in parallel. One advantage of a computational approach is the ability to study developmental changes in isolation. Here, we examine the effects of the GABAergic polarity change on the spontaneous activity of both a mean-field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the “intermediate neurons.” We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral change results from priming of these neurons. The study illustrates how even arguably the simplest of the developmental changes that occur in neural systems can produce non-intuitive behaviors. It also makes predictions about neural network behavioral changes.

  17. Neural networks prove effective at NOx reduction

    Energy Technology Data Exchange (ETDEWEB)

    Radl, B.J. [Pegasus Technologies, Mentor, OH (USA)

    2000-05-01

    The availability of low cost computer hardware and software is opening up possibilities for the use of artificial intelligence concepts, notably neural networks, in power plant control applications, delivering lower costs, greater efficiencies and reduced emissions. One example of a neural network system is the NeuSIGHT combustion optimisation system, developed by Pegasus Technologies, a subsidiary of KFx Inc. It can help reduce NOx emissions, improve heat rate and enable either deferral or elimination of capital expenditures on other NOx control technologies, such as low NOx burners, SNCR and SCR. This paper illustrates these benefits using three recent case studies. 4 figs.

  18. Top tagging with deep neural networks [Vidyo

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image-based techniques or multivariate approaches using high-level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high-level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  19. A neural theory of visual attention

    DEFF Research Database (Denmark)

    Bundesen, Claus; Habekost, Thomas; Kyllingsbæk, Søren

    2005-01-01

    A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally important objects than to less important ones. By use of the same basic equations used in TVA, NTVA accounts for a wide range of known attentional effects in human performance (reaction times and error rates) and a wide range of effects observed in firing rates...

  20. Avoiding object by robot using neural network

    International Nuclear Information System (INIS)

    Prasetijo, D.W.

    1997-01-01

    A self-controlling robot is necessary in robot applications where operator control is difficult. Serial methods, such as processing on a von Neumann computer, are difficult to apply to self-controlling robots. In this research, a neural network control system was developed to expand the performance of a SCARA robot. It was shown that the SCARA robot with the neural network system can avoid blocking objects regardless of the number and density of the blocking objects and of the departure and destination points. The robot developed in this study can also control its movement by itself.

  1. Associative memory model with spontaneous neural activity

    Science.gov (United States)

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2012-05-01

    We propose a novel associative memory model wherein the neural activity without an input (i.e., spontaneous activity) is modified by an input to generate a target response that is memorized for recall upon the same input. Suitable design of synaptic connections enables the model to memorize input/output (I/O) mappings equaling 70% of the total number of neurons, where the evoked activity distinguishes a target pattern from others. Spontaneous neural activity without an input shows chaotic dynamics but keeps some similarity with evoked activities, as reported in recent experimental studies.
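    As a much simpler stand-in for the associative memory described above, a classic Hopfield-style Hebbian network shows the basic recall-from-a-noisy-cue behavior; this sketch omits the paper's central feature, the shaping of spontaneous chaotic dynamics, and all sizes and seeds are illustrative.

```python
import numpy as np

# Hebbian associative memory sketch (a classic Hopfield-style stand-in; the
# paper's model, which shapes spontaneous chaotic activity, is more elaborate).
rng = np.random.default_rng(0)
N, P = 100, 5                       # neurons, stored patterns (illustrative)
patterns = rng.choice([-1, 1], size=(P, N))

W = patterns.T @ patterns / N       # Hebbian outer-product weights
np.fill_diagonal(W, 0)              # no self-coupling

def recall(x, steps=10):
    """Iterate the sign dynamics until the state settles on a memory."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

# A cue with 10% of bits flipped should converge back to the stored pattern.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1
recovered = recall(cue)
```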

  2. Neural Activity Reveals Preferences Without Choices

    Science.gov (United States)

    Smith, Alec; Bernheim, B. Douglas; Camerer, Colin

    2014-01-01

    We investigate the feasibility of inferring the choices people would make (if given the opportunity) based on their neural responses to the pertinent prospects when they are not engaged in actual decision making. The ability to make such inferences is of potential value when choice data are unavailable, or limited in ways that render standard methods of estimating choice mappings problematic. We formulate prediction models relating choices to “non-choice” neural responses and use them to predict out-of-sample choices for new items and for new groups of individuals. The predictions are sufficiently accurate to establish the feasibility of our approach. PMID:25729468

  3. Musical Audio Synthesis Using Autoencoding Neural Nets

    OpenAIRE

    Sarroff, Andy; Casey, Michael A.

    2014-01-01

    With an optimal network topology and tuning of hyperpa-\\ud rameters, artificial neural networks (ANNs) may be trained\\ud to learn a mapping from low level audio features to one\\ud or more higher-level representations. Such artificial neu-\\ud ral networks are commonly used in classification and re-\\ud gression settings to perform arbitrary tasks. In this work\\ud we suggest repurposing autoencoding neural networks as\\ud musical audio synthesizers. We offer an interactive musi-\\ud cal audio synt...

  4. Neural network tagging in a toy model

    International Nuclear Information System (INIS)

    Milek, Marko; Patel, Popat

    1999-01-01

    The purpose of this study is a comparison of Artificial Neural Network approach to HEP analysis against the traditional methods. A toy model used in this analysis consists of two types of particles defined by four generic properties. A number of 'events' was created according to the model using standard Monte Carlo techniques. Several fully connected, feed forward multi layered Artificial Neural Networks were trained to tag the model events. The performance of each network was compared to the standard analysis mechanisms and significant improvement was observed
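    The setup in this record (two particle types, four generic features, Monte Carlo "events", a feedforward tagger) can be mimicked with a minimal single-layer network; the data distribution, sample sizes, and learning rate below are invented for illustration and are not the toy model's actual parameters.

```python
import numpy as np

# Minimal stand-in for the toy tagging task: two "particle types" drawn as
# 4-feature Gaussian clusters, tagged by a single-layer network (logistic
# regression) trained by gradient descent. All numbers are illustrative.
rng = np.random.default_rng(1)
n = 500
type_a = rng.normal(loc=+1.0, scale=1.0, size=(n, 4))
type_b = rng.normal(loc=-1.0, scale=1.0, size=(n, 4))
X = np.vstack([type_a, type_b])
y = np.concatenate([np.ones(n), np.zeros(n)])

w, b = np.zeros(4), 0.0
for _ in range(200):                  # plain gradient descent on the log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean(((X @ w + b) > 0) == (y == 1))  # tagging accuracy
```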

  5. Alpha spectral analysis via artificial neural networks

    International Nuclear Information System (INIS)

    Kangas, L.J.; Hashem, S.; Keller, P.E.; Kouzes, R.T.; Troyer, G.L.

    1994-10-01

    An artificial neural network system that assigns quality factors to alpha particle energy spectra is discussed. The alpha energy spectra are used to detect plutonium contamination in the work environment. The quality factors represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with a quality factor by an expert and used in training the artificial neural network expert system. The investigation shows that the expert knowledge of alpha spectra quality factors can be transferred to an ANN system

  6. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. From the weights of the trained neural networks, kernel windows are created and used for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  7. Target recognition based on convolutional neural network

    Science.gov (United States)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    An important part of object target recognition is feature extraction, which can be divided into hand-crafted feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity gives it a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained layer by layer as a convolutional neural network (CNN), which can extract features from lower layers to higher layers. These features are more discriminative, which benefits object target recognition.

  8. Neural correlates underlying musical semantic memory.

    Science.gov (United States)

    Groussard, M; Viader, F; Landeau, B; Desgranges, B; Eustache, F; Platel, H

    2009-07-01

    Numerous functional imaging studies have examined the neural basis of semantic memory, mainly using verbal and visuospatial materials. Musical material also offers an original way to explore semantic memory processes. We used PET imaging to determine the neural substrates that underlie musical semantic memory using different tasks and stimuli. The results of three PET studies revealed a greater involvement of the anterior part of the temporal lobe. Taken together with clinical observations, our neuroimaging data suggest that the musical lexicon (and more broadly musical semantic memory) is sustained by a temporo-prefrontal cerebral network involving both right and left cerebral regions.

  9. Livermore Big Artificial Neural Network Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key high performance computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  10. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. In the meantime, large errors still existed, which indicates that our approach still needs to be improved.

  11. Neural network approach to radiologic lesion detection

    International Nuclear Information System (INIS)

    Newman, F.D.; Raff, U.; Stroud, D.

    1989-01-01

    An area of artificial intelligence that has gained recent attention is the neural network approach to pattern recognition. The authors explore the use of neural networks in radiologic lesion detection with what is known in the literature as the novelty filter. This filter uses a linear model; images of normal patterns become training vectors and are stored as columns of a matrix. An image of an abnormal pattern is introduced and the abnormality or novelty is extracted. A VAX 750 was used to encode the novelty filter, and two experiments have been examined
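    The novelty filter's linear model can be sketched directly from the description above: stack the normal patterns as columns of a matrix, project a new image onto their span, and keep the orthogonal residual as the extracted novelty. The dimensions and data below are arbitrary illustrative values.

```python
import numpy as np

# Novelty filter sketch: images of normal patterns become columns of A; the
# "novelty" of a new image is its component outside the span of those columns.
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 10))    # 10 normal training images, 64 pixels each
Q, _ = np.linalg.qr(A)           # orthonormal basis for the normal subspace

def novelty(x):
    """Residual of x after projection onto the normal-pattern subspace."""
    return x - Q @ (Q.T @ x)

normal_image = A @ rng.normal(size=10)            # lies in the span: novelty ~ 0
abnormal_image = normal_image + rng.normal(size=64)  # added abnormality
```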

  12. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    Nomenclature: NN, neural network; net_i, weighted sum of the inputs of neuron i; o_k, network output at the kth output node; P, total number of training patterns; s_i, output of neuron i; t_k, target output at the kth output node. 1. Introduction. Severe storms occur in the Bay of Bengal ..., forecasting of runoff (Crespo and Mora, 1993), concrete strength (Kasperkiewicz et al., 1995). The uses of neural networks in the coastal ... the wave conditions will change from year to year, thus a proper statistical and climatological treatment requires several ...
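    The notation surviving in this record (net_i as the weighted sum of neuron i's inputs, o_k as the output at the kth output node) corresponds to a standard feedforward pass, sketched here with arbitrary illustrative weights and inputs rather than the paper's trained hindcasting network.

```python
import math

# Feedforward pass matching the record's notation:
#   net_i = sum_j w_ij * o_j + b_i,   o_i = sigmoid(net_i)
def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def layer(outputs_prev, weights, biases):
    return [sigmoid(sum(w * o for w, o in zip(row, outputs_prev)) + b)
            for row, b in zip(weights, biases)]

inputs = [0.5, -1.2, 0.3]   # e.g. normalized storm parameters (illustrative)
hidden = layer(inputs, [[0.2, -0.4, 0.7], [0.5, 0.1, -0.3]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])   # o_k at the single output node
```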

  13. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  14. Stimulation and recording electrodes for neural prostheses

    CERN Document Server

    Pour Aryan, Naser; Rothermel, Albrecht

    2015-01-01

    This book provides readers with basic principles of the electrochemistry of the electrodes used in modern, implantable neural prostheses. The authors discuss the boundaries and conditions in which the electrodes continue to function properly for long time spans, which are required when designing neural stimulator devices for long-term in vivo applications. Two kinds of electrode materials, titanium nitride and iridium are discussed extensively, both qualitatively and quantitatively. The influence of the counter electrode on the safety margins and electrode lifetime in a two electrode system is explained. Electrode modeling is handled in a final chapter.

  15. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan; Naous, Rawan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2015-01-01

    ... Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards ...

  16. Wave transmission prediction of multilayer floating breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Hegde, A.V.

    In the present study, an artificial neural network method has been applied for wave transmission prediction of multilayer floating breakwater. Two neural network models are constructed based on the parameters which influence the wave transmission...

  17. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In the present study, an artificial neural network method has been applied to predict the stability of berm breakwaters. Four neural network models are constructed based on the parameters which influence the stability of breakwater. Training...

  18. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  19. An Artificial Neural Network Controller for Intelligent Transportation Systems Applications

    Science.gov (United States)

    1996-01-01

    An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example of utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems appli...

  20. Analysis of some meteorological parameters using artificial neural ...

    African Journals Online (AJOL)

    Analysis of some meteorological parameters using artificial neural network method for ... The mean daily data for sunshine hours, maximum temperature, cloud cover and ... The study used artificial neural networks (ANN) for the estimation.

  1. Emerging trends in neuro engineering and neural computation

    CERN Document Server

    Lee, Kendall; Garmestani, Hamid; Lim, Chee

    2017-01-01

    This book focuses on neuro-engineering and neural computing, a multi-disciplinary field of research attracting considerable attention from engineers, neuroscientists, microbiologists and material scientists. It explores a range of topics concerning the design and development of innovative neural and brain interfacing technologies, as well as novel information acquisition and processing algorithms to make sense of the acquired data. The book also highlights emerging trends and advances regarding the applications of neuro-engineering in real-world scenarios, such as neural prostheses, diagnosis of neural degenerative diseases, deep brain stimulation, biosensors, real neural network-inspired artificial neural networks (ANNs) and the predictive modeling of information flows in neuronal networks. The book is broadly divided into three main sections including: current trends in technological developments, neural computation techniques to make sense of the neural behavioral data, and application of these technologie...

  2. Stability of Neutral Fractional Neural Networks with Delay

    Institute of Scientific and Technical Information of China (English)

    LI Yan; JIANG Wei; HU Bei-bei

    2016-01-01

    This paper studies stability of neutral fractional neural networks with delay. By introducing the definition of norm and using the uniform stability, the sufficient condition for uniform stability of neutral fractional neural networks with delay is obtained.

  3. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  4. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Methods: Artificial neural network (ANN) models, including a general regression neural network (GRNN) and multi-layer ... N-hexane (HPLC grade) was purchased from Fisher Scientific. ... Simultaneous Quantification of Seven Flavonoids in ...

  5. Classification of Urinary Calculi using Feed-Forward Neural Networks

    African Journals Online (AJOL)

    NJD

    Genetic algorithms were used for optimization of neural networks and for selection of the ... Urinary calculi, infrared spectroscopy, classification, neural networks, variable ... note that the best accuracy is obtained for whewellite, weddellite.

  6. Nano-topography Enhances Communication in Neural Cells Networks

    KAUST Repository

    Onesto, V.; Cancedda, L.; Coluccio, M. L.; Nanni, M.; Pesce, M.; Malara, N.; Cesarelli, M.; Di Fabrizio, Enzo M.; Amato, F.; Gentile, F.

    2017-01-01

    Neural cells are the smallest building blocks of the central and peripheral nervous systems. Information in neural networks and cell-substrate interactions have been heretofore studied separately. Understanding whether surface nano-topography can

  7. Infrared neural stimulation (INS) inhibits electrically evoked neural responses in the deaf white cat

    Science.gov (United States)

    Richter, Claus-Peter; Rajguru, Suhrud M.; Robinson, Alan; Young, Hunter K.

    2014-03-01

    Infrared neural stimulation (INS) has been used in the past to evoke neural activity from hearing and partially deaf animals. All the responses were excitatory. In Aplysia californica, Duke and coworkers demonstrated that INS can also inhibit neural responses [1], and similar observations were made in the vestibular system [2, 3]. In deaf white cats, whose cochleae have largely reduced spiral ganglion neuron counts and a significant degeneration of the organ of Corti, no cochlear compound action potentials could be observed during INS alone. However, combined electrical and optical stimulation demonstrated inhibitory responses during irradiation with infrared light.

  8. Advanced Applications of Neural Networks and Artificial Intelligence: A Review

    OpenAIRE

    Koushal Kumar; Gour Sundar Mitra Thakur

    2012-01-01

    Artificial Neural Networks are a branch of artificial intelligence and have been accepted as a new computing technology in computer science. This paper reviews the field of artificial intelligence, focusing on recent applications that use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretability of data. Artificial Neural Networks is c...

  9. Insights into neural crest development from studies of avian embryos

    OpenAIRE

    Gandhi, Shashank; Bronner, Marianne E.

    2018-01-01

    The neural crest is a multipotent and highly migratory cell type that contributes to many of the defining features of vertebrates, including the skeleton of the head and most of the peripheral nervous system. 150 years after the discovery of the neural crest, avian embryos remain one of the most important model organisms for studying neural crest development. In this review, we describe aspects of neural crest induction, migration and axial level differences, highlighting what is known about ...

  10. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    An original application of radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The property of massive parallelism of neural networks is employed for this. The application of RBF neural network for state estimation is investigated by testing its applicability on a IEEE 14 bus ...

  11. 38 CFR 17.149 - Sensori-neural aids.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Sensori-neural aids. 17... Prosthetic, Sensory, and Rehabilitative Aids § 17.149 Sensori-neural aids. (a) Notwithstanding any other provision of this part, VA will furnish needed sensori-neural aids (i.e., eyeglasses, contact lenses...

  12. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform the prediction, a new neural network architecture for complex nonlinear approximation is proposed, which also reduces the difficulty of building and training the neural network. Simulation results for the logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.
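    The prediction-based control idea can be illustrated on the logistic map, one of the record's two test systems; here the true map stands in for the paper's trained neural-network predictor, and the control gain and switch-on step are illustrative choices, not the Letter's scheme in detail.

```python
# Prediction-based chaos control sketch on the logistic map. The exact map is
# used in place of the paper's trained neural-network predictor; the gain 0.8
# and the switch-on step 50 are hypothetical choices for illustration.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

r = 3.9
x_star = 1.0 - 1.0 / r          # unstable fixed point targeted by the control

x = 0.3
for n in range(100):
    predicted = logistic(x, r)  # stand-in for the neural-network prediction
    u = 0.8 * (x_star - predicted) if n >= 50 else 0.0
    x = logistic(x, r) + u      # small corrective input nudges x toward x_star

error = abs(x - x_star)         # should be tiny once control is active
```

With the control off, the orbit wanders chaotically; once on, the controlled map x → 0.2·f(x) + 0.8·x* is a contraction around the fixed point, so the orbit settles rapidly.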

  13. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a

  14. Tensor Basis Neural Network v. 1.0 (beta)

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-28

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.

  15. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    A hybrid of the two called Elman-Jordan (or Multi-recurrent) neural network is also being used. In this study, we evaluated the performance of these neural networks on three established bench mark time series prediction problems. Results from the experiments showed that Jordan neural network performed significantly ...

  16. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data ...

  17. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:366–381, 2016. ACML 2016. Collaborative Recurrent Neural Networks for Dynamic Recommender Systems. Young... an unprecedented scale. Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating... Keywords: Recurrent Neural Network, Recommender System, Neural Language Model, Collaborative Filtering. 1. Introduction. As ever larger parts of the population

  18. The principles of artificial neural network information processing

    International Nuclear Information System (INIS)

    Dai, Ru-Wei

    1993-01-01

    In this article, the basic structure of an artificial neuron is first introduced. In addition, the principles of artificial neural networks as well as several important artificial neural models such as the Perceptron, the back propagation model, the Hopfield net, and the ART model are briefly discussed and analyzed. Finally, the application of artificial neural networks to Chinese character recognition is also given. (author)

  19. Efficient computation in adaptive artificial spiking neural networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); R.B.P. Nusselder (Roeland); H.S. Scholte; S.M. Bohte (Sander)

    2017-01-01

    textabstractArtificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of

  1. The gamma model : a new neural network for temporal processing

    NARCIS (Netherlands)

    Vries, de B.

    1992-01-01

    In this paper we develop the gamma neural model, a new neural net architecture for processing of temporal patterns. Time varying patterns are normally segmented into a sequence of static patterns that are successively presented to a neural net. In the approach presented here segmentation is avoided.

  2. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  3. Rhesus monkey neural stem cell transplantation promotes neural regeneration in rats with hippocampal lesions

    Directory of Open Access Journals (Sweden)

    Li-juan Ye

    2016-01-01

    Full Text Available Rhesus monkey neural stem cells are capable of differentiating into neurons and glial cells. Therefore, neural stem cell transplantation can be used to promote functional recovery of the nervous system. Rhesus monkey neural stem cells (1 × 10^5 cells/μL were injected into the bilateral hippocampi of rats with hippocampal lesions. Confocal laser scanning microscopy demonstrated that green fluorescent protein-labeled transplanted cells survived and grew well. Transplanted cells were detected not only at the lesion site, but also in the nerve fiber-rich region of the cerebral cortex and corpus callosum. Some transplanted cells differentiated into neurons and glial cells clustering along the ventricular wall, and integrated into the recipient brain. Behavioral tests revealed that spatial learning and memory ability improved, indicating that rhesus monkey neural stem cells noticeably improve spatial learning and memory abilities in rats with hippocampal lesions.

  4. Bidirectional neural interface: Closed-loop feedback control for hybrid neural systems.

    Science.gov (United States)

    Chou, Zane; Lim, Jeffrey; Brown, Sophie; Keller, Melissa; Bugbee, Joseph; Broccard, Frédéric D; Khraiche, Massoud L; Silva, Gabriel A; Cauwenberghs, Gert

    2015-01-01

    Closed-loop neural prostheses enable bidirectional communication between the biological and artificial components of a hybrid system. However, a major challenge in this field is the limited understanding of how these components, the two separate neural networks, interact with each other. In this paper, we propose an in vitro model of a closed-loop system that allows for easy experimental testing and modification of both biological and artificial network parameters. The interface closes the system loop in real time by stimulating each network based on recorded activity of the other network, within preset parameters. As a proof of concept we demonstrate that the bidirectional interface is able to establish and control network properties, such as synchrony, in a hybrid system of two neural networks significantly more effectively than the same system without the interface or with unidirectional alternatives. This success holds promise for the application of closed-loop systems in neural prostheses, brain-machine interfaces, and drug testing.

  5. The effect of the neural activity on topological properties of growing neural networks.

    Science.gov (United States)

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure of cortical networks defines how information is transmitted and processed; it is a source of the complex spatiotemporal patterns seen during network development, and the creation and deletion of connections continues throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the network growth process from disconnected neurons to fully connected networks. To quantify the influence of activity on the network's topological properties, we compared the model with a random growth network that does not depend on activity. Analysis of the connection structure with methods from random graph theory shows that growth in neural networks results in the formation of a well-known "small-world" network.
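
The "small-world" property mentioned here is usually diagnosed through clustering and path-length statistics. As a toy illustration of the clustering side (an invented four-node graph, unrelated to the paper's growth model), the local clustering coefficient of a node is the fraction of its neighbour pairs that are themselves connected:

```python
# Local clustering coefficient: fraction of a node's neighbour pairs
# that are themselves connected. Toy undirected graph as adjacency sets.

graph = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2},
}

def clustering(g, node):
    nbrs = list(g[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in g[nbrs[i]])
    return 2 * links / (k * (k - 1))

print(clustering(graph, 0))  # 2 of node 0's 3 neighbour pairs are linked
```

A small-world network combines a high average of this quantity with short average path lengths, which is what distinguishes it from a purely random graph of the same density.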

  6. Neural Ranking Models with Weak Supervision

    NARCIS (Netherlands)

    Dehghani, M.; Zamani, H.; Severyn, A.; Kamps, J.; Croft, W.B.

    2017-01-01

    Despite the impressive improvements achieved by unsupervised deep neural networks in computer vision and NLP tasks, such improvements have not yet been observed in ranking for information retrieval. The reason may be the complexity of the ranking problem, as it is not obvious how to learn from

  7. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology.

  8. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people increasingly care about solving practical problems with information technology. Along with this, a new subject called Artificial Intelligence (AI) has emerged. One popular research interest of AI is recognition algorithms. In this paper, one of the most common algorithms, the Convolutional Neural Network (CNN), is introduced for image recognition. Understanding its theory and structure is of great significance for every scholar interested in this field. A Convolutional Neural Network is an artificial neural network that combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN provides reliable computing speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Combined with the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-study and perform in-depth learning: BP provides backward feedback for enhancing reliability, and GD drives the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some practical examples, with a summary at the end.
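
As a minimal illustration of the gradient-descent mechanism the abstract describes (a single linear neuron rather than a CNN; the learning rate, epoch count and data are invented):

```python
# Minimal gradient-descent sketch (illustrative, not the paper's CNN):
# fit a single linear neuron y = w*x + b to noiseless data with a
# mean-squared-error loss, stepping along the negative gradient.

def train(samples, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        n = len(samples)
        for x, y in samples:
            err = (w * x + b) - y          # forward pass residual
            gw += 2 * err * x / n          # dL/dw
            gb += 2 * err / n              # dL/db
        w -= lr * gw                       # gradient-descent step
        b -= lr * gb
    return w, b

data = [(x, 2 * x + 1) for x in range(-3, 4)]   # target: y = 2x + 1
w, b = train(data)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

In a CNN the same update is applied to convolution kernels, with backpropagation supplying the per-layer gradients.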

  9. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain neutron doses using only the count rates of a Bonner sphere spectrometer. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates of the Bonner sphere spectrometer and the doses. Count rates were used as input and the respective doses as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, in which the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
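
The χ²-style comparison of original and network-calculated doses can be sketched as follows; the dose values below are invented for illustration and are not the paper's data:

```python
# Chi-squared-style agreement statistic between reference doses and
# ANN-predicted doses (all values invented for illustration).

def chi_squared(expected, predicted):
    return sum((p - e) ** 2 / e for e, p in zip(expected, predicted))

reference = [1.0, 2.0, 3.0, 4.0]   # "original" doses
predicted = [1.1, 1.9, 3.2, 3.8]   # hypothetical network outputs
stat = chi_squared(reference, predicted)
print(stat < 0.1)  # a small statistic indicates good agreement
```

A large statistic would flag predictions that deviate systematically from the reference doses.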

  10. Some Properties of the Assembly Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Húsek, Dušan; Goltsev, A.

    2002-01-01

    Roč. 12, č. 1 (2002), s. 15-32 ISSN 1210-0552 R&D Projects: GA MŠk LN00B096 Keywords : neuron * neural assembly * neural column subnetwork * generalization * recognition * perceptron * the nearest-neighbor method Subject RIV: BA - General Mathematics

  11. Feedforward Nonlinear Control Using Neural Gas Network

    Directory of Open Access Journals (Sweden)

    Iván Machón-González

    2017-01-01

    Full Text Available The control of nonlinear systems is a central issue in control theory, and many practical applications lack a mathematical foundation as general as the theory of linear systems. This paper proposes a control strategy for nonlinear systems with unknown dynamics by means of a set of local linear models obtained by a supervised neural gas network. The proposed approach takes advantage of the very robust clustering procedure that the neural gas algorithm yields. The direct model of the plant constitutes a piecewise-linear approximation of the nonlinear system, and each neuron represents a local linear model for which a linear controller is designed. The neural gas model works as an observer and a controller at the same time. State feedback control is implemented by estimating the state variables from the local transfer function provided by the local linear model. The gradient vectors obtained by the supervised neural gas algorithm provide a robust procedure for feedforward nonlinear control, assuming the absence of disturbances.

  12. Burst firing enhances neural output correlation

    Directory of Open Access Journals (Sweden)

    Ho Ka Chan

    2016-05-01

    Full Text Available Neurons communicate and transmit information predominantly through spikes. Given that experimentally observed neural spike trains in a variety of brain areas can be highly correlated, it is important to investigate how neurons process correlated inputs. Most previous work in this area studied the problem of correlation transfer analytically by making significant simplifications on neural dynamics. Temporal correlation between inputs that arises from synaptic filtering, for instance, is often ignored when assuming that an input spike can at most generate one output spike. Through numerical simulations of a pair of leaky integrate-and-fire (LIF) neurons receiving correlated inputs, we demonstrate that neurons in the presence of synaptic filtering by slow synapses exhibit strong output correlations. We then show that burst firing plays a central role in enhancing output correlations, which can explain the above-mentioned observation because synaptic filtering induces bursting. The observed changes of correlations are mostly on a long time scale. Our results suggest that other features affecting the prevalence of neural burst firing in biological neurons, e.g., adaptive spiking mechanisms, may play an important role in modulating the overall level of correlations in neural networks.
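
A minimal leaky integrate-and-fire simulation in the spirit of the numerical experiments described here; the time step, membrane constant and drive are illustrative assumptions, not the study's parameters:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, Euler-integrated.
# All parameters are illustrative, not those used in the study.

def lif_spikes(current, dt=0.1, tau=10.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Integrate dv/dt = (-(v - v_rest) + I) / tau; return spike times."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:                 # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                   # reset after spiking
    return spikes

# A constant suprathreshold drive produces regular firing.
spikes = lif_spikes([1.5] * 1000)
print(len(spikes) > 0, all(b > a for a, b in zip(spikes, spikes[1:])))  # → True True
```

The study's setup adds correlated stochastic inputs and slow synaptic filtering to a pair of such neurons and then measures output spike-train correlations.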

  13. Foreign currency rate forecasting using neural networks

    Science.gov (United States)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP as well as DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecasted solely using past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
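
The preference for rate differences over raw prices as network inputs can be illustrated with a toy preprocessing step; the rates and window size below are invented, not the paper's USD-GBP series:

```python
# Toy preprocessing contrast: raw exchange rates vs first differences.
# Differencing removes the trend, leaving the changes that a forecasting
# network can learn more easily. All rates are made up.

rates = [1.50, 1.52, 1.51, 1.55, 1.54, 1.58]
diffs = [round(b - a, 4) for a, b in zip(rates, rates[1:])]
print(diffs)  # → [0.02, -0.01, 0.04, -0.01, 0.04]

# Sliding-window training pairs: the last 3 differences predict the next.
window = 3
samples = [(diffs[i:i + window], diffs[i + window])
           for i in range(len(diffs) - window)]
print(len(samples))  # → 2
```

Each `(inputs, target)` pair would then be fed to the network exactly as the count-rate/dose pairs are in supervised training.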

  14. A Neural Region of Abstract Working Memory

    Science.gov (United States)

    Cowan, Nelson; Li, Dawei; Moffitt, Amanda; Becker, Theresa M.; Martin, Elizabeth A.; Saults, J. Scott; Christ, Shawn E.

    2011-01-01

    Over 350 years ago, Descartes proposed that the neural basis of consciousness must be a brain region in which sensory inputs are combined. Using fMRI, we identified at least one such area for working memory, the limited information held in mind, described by William James as the trailing edge of consciousness. Specifically, a region in the left…

  15. Artificial Astrocytes Improve Neural Network Performance

    Science.gov (United States)

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  16. Radioactive fallout and neural tube defects

    African Journals Online (AJOL)

    Nejat Akar

    2015-07-10

    Jul 10, 2015 ... It is a prenatal failure of the embryonic neural tube to close over the ... and the ability of radioisotopes to attach to cells, tissues, and ... The Egyptian Journal of Medical Human Genetics .... Stem Cells 1997;15(Suppl 2):255–60.

  17. IMPLEMENTATION OF NEURAL - CRYPTOGRAPHIC SYSTEM USING FPGA

    Directory of Open Access Journals (Sweden)

    KARAM M. Z. OTHMAN

    2011-08-01

    Full Text Available Modern cryptography techniques are virtually unbreakable. As the Internet and other forms of electronic communication become more prevalent, electronic security is becoming increasingly important. Cryptography is used to protect e-mail messages, credit card information, and corporate data. The system designed here uses conventional (symmetric) cryptography, with one key for both encryption and decryption. The chosen algorithm is a stream cipher, which encrypts one bit at a time. The central problem in stream-cipher cryptography is the difficulty of generating a long, unpredictable sequence of binary signals from a short random key. Pseudo-random number generators (PRNGs) have been widely used to construct this key sequence. Here, the pseudo-random number generator was designed using an Artificial Neural Network (ANN), which provides the nonlinearity needed to improve the statistical randomness of the generator. The learning algorithm of this neural network is the backpropagation algorithm. Learning was performed in software in Matlab to obtain efficient weights, and the trained neural network was then implemented on a field programmable gate array (FPGA).
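
A toy version of the stream-cipher idea, with a fixed hand-picked "neuron" (weighted sum plus a nonlinear threshold) filtering a feedback shift register seeded by the key. This is purely illustrative and not cryptographically secure; the weights and nonlinearity are our assumptions, not the paper's trained ANN:

```python
# Toy keystream sketch: a fixed "neuron" (weighted sum + nonlinearity)
# filters a key-seeded shift register to produce a bit stream.
# Illustrative only -- NOT cryptographically secure.

def keystream(key_bits, n, weights=(3, -2, 5, -1)):
    state = list(key_bits)
    out = []
    for _ in range(n):
        s = sum(w * b for w, b in zip(weights, state))
        bit = 1 if s % 7 >= 3 else 0          # arbitrary nonlinearity
        out.append(bit)
        state = state[1:] + [bit ^ state[0]]  # feedback shift
    return out

# Stream-cipher usage: XOR the keystream with the plaintext bits.
plain   = [1, 0, 0, 1, 1, 1, 0, 1]
ks      = keystream([1, 0, 1, 1], len(plain))
cipher  = [p ^ k for p, k in zip(plain, ks)]
decoded = [c ^ k for c, k in zip(cipher, ks)]
print(decoded == plain)  # → True: the same keystream decrypts
```

The paper's design replaces this hand-picked filter with a trained multilayer network whose learned weights supply the nonlinearity, and moves the whole generator onto an FPGA.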

  18. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  19. Neural and Behavioral Correlates of Song Prosody

    Science.gov (United States)

    Gordon, Reyna Leigh

    2010-01-01

    This dissertation studies the neural basis of song, a universal human behavior. The relationship of words and melodies in the perception of song at phonological, semantic, melodic, and rhythmic levels of processing was investigated using the fine temporal resolution of Electroencephalography (EEG). The observations reported here may shed light on…

  20. Neural networks to predict exosphere temperature corrections

    Science.gov (United States)

    Choury, Anna; Bruinsma, Sean; Schaeffer, Philippe

    2013-10-01

    Precise orbit prediction requires a forecast of the atmospheric drag force with a high degree of accuracy. Artificial neural networks are universal approximators derived from artificial intelligence and are widely used for prediction. This paper presents an artificial neural network method for predicting thermosphere density by forecasting exospheric temperature, which will be used by the semiempirical thermosphere Drag Temperature Model (DTM) currently under development. Artificial neural networks have been shown to be effective and robust forecasting models for temperature prediction. The proposed model can be used for any mission from which temperature can be deduced accurately, i.e., it does not require specific training. Although the primary goal of the study was to create a model for 1 day ahead forecasts, the proposed architecture has been generalized to 2 and 3 day predictions as well. The impact of the artificial neural network predictions has been quantified for the low-orbiting satellite Gravity Field and Steady-State Ocean Circulation Explorer in 2011, and an order of magnitude smaller orbit errors were found when compared with orbits propagated using the thermosphere model DTM2009.

  1. Deformable image registration using convolutional neural networks

    NARCIS (Netherlands)

    Eppenhof, Koen A.J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P.W.

    2018-01-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between

  2. Energy Complexity of Recurrent Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří

    2014-01-01

    Roč. 26, č. 5 (2014), s. 953-973 ISSN 0899-7667 R&D Projects: GA ČR GAP202/10/1333 Institutional support: RVO:67985807 Keywords : neural network * finite automaton * energy complexity * optimal size Subject RIV: IN - Informatics, Computer Science Impact factor: 2.207, year: 2014

  3. Continual Learning through Evolvable Neural Turing Machines

    DEFF Research Database (Denmark)

    Lüders, Benno; Schläger, Mikkel; Risi, Sebastian

    2016-01-01

    Continual learning, i.e. the ability to sequentially learn tasks without catastrophic forgetting of previously learned ones, is an important open challenge in machine learning. In this paper we take a step in this direction by showing that the recently proposed Evolving Neural Turing Machine (ENTM...

  4. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  5. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  6. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural Networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency property is considered from a mild set of assumptions. A number of applications...

  7. Convolutional Neural Networks - Generalizability and Interpretations

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David

    from data despite it being limited in amount or context representation. Within Machine Learning this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret...

  8. Neural Networks for protein Structure Prediction

    DEFF Research Database (Denmark)

    Bohr, Henrik

    1998-01-01

    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  9. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  10. A fully implantable rodent neural stimulator

    Science.gov (United States)

    Perry, D. W. J.; Grayden, D. B.; Shepherd, R. K.; Fallon, J. B.

    2012-02-01

    The ability to electrically stimulate neural and other excitable tissues in behaving experimental animals is invaluable for both the development of neural prostheses and basic neurological research. We developed a fully implantable neural stimulator that is able to deliver two channels of intra-cochlear electrical stimulation in the rat. It is powered via a novel omni-directional inductive link and includes an on-board microcontroller with integrated radio link, programmable current sources and switching circuitry to generate charge-balanced biphasic stimulation. We tested the implant in vivo and were able to elicit both neural and behavioural responses. The implants continued to function for up to five months in vivo. While targeted to cochlear stimulation, with appropriate electrode arrays the stimulator is well suited to stimulating other neurons within the peripheral or central nervous systems. Moreover, it includes significant on-board data acquisition and processing capabilities, which could potentially make it a useful platform for telemetry applications, where there is a need to chronically monitor physiological variables in unrestrained animals.
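
Charge-balanced biphasic stimulation means every cathodic phase is matched by an equal-and-opposite anodic phase, so no net charge is delivered to the tissue. A toy waveform sketch (amplitude and timing values are arbitrary illustration, not the implant's specification):

```python
# Toy charge-balanced biphasic pulse: a cathodic phase followed by an
# equal-and-opposite anodic phase, so the net delivered charge is zero.
# Amplitude and phase widths are arbitrary illustration values.

def biphasic(amplitude, phase_samples, gap_samples=2):
    return ([-amplitude] * phase_samples      # cathodic phase
            + [0] * gap_samples               # interphase gap
            + [amplitude] * phase_samples)    # anodic phase

pulse = biphasic(100, 5)
print(sum(pulse))  # → 0, i.e. charge balanced
```

In hardware this balance is what the programmable current sources and switching circuitry enforce, to avoid electrochemical damage at the electrode-tissue interface.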

  11. Fast Fingerprint Classification with Deep Neural Network

    DEFF Research Database (Denmark)

    Michelsanti, Daniel; Guichi, Yanis; Ene, Andreea-Daniela

    2018-01-01

    In this work we evaluate the performance of two pre-trained convolutional neural networks fine-tuned on the NIST SD4 benchmark database. The obtained results show that this approach is comparable with other results in the literature, with the advantage of a fast feature extraction stage.

  12. Neural understanding of low-resolution images

    NARCIS (Netherlands)

    Spaanenburg, L; DeGraaf, J; Nijhuis, JAG; Stevens, [No Value; Wichers, W

    1998-01-01

    Neural networks can be applied in a number of innovative applications in a production environment, ranging from security & safety of the environmental conditions to product control & diagnosis. For visual monitoring the use of low-resolution images is promising to bridge the time elapse between

  13. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    This parameter is taken as the threshold of neuron for learning of neural network. This algorithm is tested with three benchmark datasets and ... Author Affiliations. OM PRAKASH PATEL1 ARUNA TIWARI. Department of Computer Science and Engineering, Indian Institute of Technology Indore, Indore 453552, India ...

  14. Nonlinear Time Series Analysis via Neural Networks

    Science.gov (United States)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with time series analysis based on neural networks for effective forex-market pattern recognition [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns which repeatedly appear in the market history, and to adapt our trading system's behaviour based on them.

  15. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  16. Application of neural networks in experimental physics

    International Nuclear Information System (INIS)

    Kisel', I.V.; Neskromnyj, V.N.; Ososkov, G.A.

    1993-01-01

    The theoretical foundations of numerous models of artificial neural networks (ANNs) and their applications to actual problems of associative memory, optimization and pattern recognition are given. This review also covers numerous uses of ANNs in experimental physics, both in the hardware realization of fast triggering systems for event selection and in subsequent software implementations for the recognition of trajectory data.

  17. Integrated Neural Flight and Propulsion Control System

    Science.gov (United States)

    Kaneshige, John; Gundy-Burlet, Karen; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes an integrated neural flight and propulsion control system, which uses a neural network based approach for applying alternate sources of control power in the presence of damage or failures. Under normal operating conditions, the system utilizes conventional flight control surfaces. Neural networks are used to provide consistent handling qualities across flight conditions and for different aircraft configurations. Under damage or failure conditions, the system may utilize unconventional flight control surface allocations, along with integrated propulsion control, when additional control power is necessary for achieving desired flight control performance. In this case, neural networks are used to adapt to changes in aircraft dynamics and control allocation schemes. Of significant importance here is the fact that this system can operate without emergency or backup flight control mode operations. An additional advantage is that this system can utilize, but does not require, fault detection and isolation information or explicit parameter identification. Piloted simulation studies were performed on a commercial transport aircraft simulator. Subjects included both NASA test pilots and commercial airline crews. Results demonstrate the potential for improving handling qualities and significantly increasing survivability rates under various simulated failure conditions.

  18. Integrating neural network technology and noise analysis

    International Nuclear Information System (INIS)

    Uhrig, R.E.; Oak Ridge National Lab., TN

    1995-01-01

    The integrated use of neural network and noise analysis technologies offers advantages not available by the use of either technology alone. The application of neural network technology to noise analysis offers an opportunity to expand the scope of problems where noise analysis is useful, and opens unique ways in which the integration of these technologies can be used productively. The two-sensor technique, in which the responses of two sensors to an unknown driving source are related, is used to demonstrate such integration. The relationship between power spectral densities (PSDs) of accelerometer signals is derived theoretically using noise analysis to demonstrate its uniqueness. This relationship is modeled from experimental data using a neural network when the system is working properly, and the actual PSD of one sensor is compared with the PSD of that sensor predicted by the neural network using the PSD of the other sensor as an input. A significant deviation between the actual and predicted PSDs indicates that the system is changing (i.e., failing). Experiments carried out on check valves and bearings illustrate the usefulness of the methodology developed. (Author)

  19. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically, walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS), which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low-power devices. Given that digging is typically a long activity (up to two hours), the application of the ARS to data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.
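
The output filter that follows the neural network can be sketched as a sliding majority vote over per-segment labels, suppressing isolated misclassifications. The labels and window size below are hypothetical, not the paper's actual filter:

```python
# Sliding majority-vote filter over per-segment activity labels,
# suppressing isolated misclassifications (labels are hypothetical).

from collections import Counter

def smooth(labels, window=3):
    half = window // 2
    out = []
    for i in range(len(labels)):
        seg = labels[max(0, i - half): i + half + 1]
        out.append(Counter(seg).most_common(1)[0][0])
    return out

raw = ["walk", "walk", "dig", "walk", "dig", "dig", "dig"]
print(smooth(raw))  # the isolated "dig" at index 2 is filtered out
```

Because digging lasts up to two hours, even a short voting window removes spurious single-segment labels without delaying detection of a genuine digging bout.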

  20. Neural network segmentation of magnetic resonance images

    International Nuclear Information System (INIS)

    Frederick, B.

    1990-01-01

    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network; by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. This paper reports that a neural network classifier for image segmentation was implemented on a Sun 4/60, and was tested on the task of classifying tissues in canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone, and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier
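
The underlying idea, classifying each pixel by its vector of intensities across the four imaging sequences, can be sketched with a nearest-centroid stand-in for the trained network. The class names echo the paper, but every intensity value here is invented, not real MR data:

```python
# Nearest-centroid stand-in for the trained classifier: each tissue class
# has a mean intensity vector over the four imaging sequences, and a
# pixel is assigned to the closest class. Intensities are invented.

import math

centroids = {
    "gray matter":  (90, 80, 70, 60),
    "white matter": (70, 95, 85, 40),
    "csf":          (30, 20, 100, 95),
}

def classify(pixel):
    return min(centroids, key=lambda c: math.dist(pixel, centroids[c]))

print(classify((88, 82, 71, 58)))  # → gray matter (closest centroid)
```

The trained backpropagation network learns a far more flexible decision boundary than these Euclidean distances, but the input/output contract is the same: four intensities in, one tissue label out.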

  1. Neural networks in continuous optical media

    International Nuclear Information System (INIS)

    Anderson, D.Z.

    1987-01-01

    The authors' interest is to see to what extent neural models can be implemented using continuous optical elements. Thus these optical networks represent a continuous distribution of neuronlike processors rather than a discrete collection. Most neural models have three characteristic features: interconnections; adaptivity; and nonlinearity. In their optical representation the interconnections are implemented with linear one- and two-port optical elements such as lenses and holograms. Real-time holographic media allow these interconnections to become adaptive. The nonlinearity is achieved with gain, for example, from two-beam coupling in photorefractive media or a pumped dye medium. Using these basic optical elements one can in principle construct continuous representations of a number of neural network models. The authors demonstrated two devices based on continuous optical elements: an associative memory which recalls an entire object when addressed with a partial object and a tracking novelty filter which identifies time-dependent features in an optical scene. These devices demonstrate the potential of distributed optical elements to implement more formal models of neural networks

  2. Neural Chaos and Free Will Problem

    Czech Academy of Sciences Publication Activity Database

    Andrey, Ladislav

    1997-01-01

    Roč. 4, č. 1 (1997), s. 23 ISSN 1355-8250. [The Brain and Self Workshop: Toward a Science of Consciousness. Elsinore, 21.08.1997-24.08.1997] R&D Projects: GA ČR GA201/95/0992 Keywords : free will and agency * attention * emotion * neural networks and connectionism * nonlinear dynamics

  3. Image Encryption and Chaotic Cellular Neural Network

    Science.gov (United States)

    Peng, Jun; Zhang, Du

    Machine learning has been playing an increasingly important role in information security and assurance. One of the areas of new applications is to design cryptographic systems by using chaotic neural network due to the fact that chaotic systems have several appealing features for information security applications. In this chapter, we describe a novel image encryption algorithm that is based on a chaotic cellular neural network. We start by giving an introduction to the concept of image encryption and its main technologies, and an overview of the chaotic cellular neural network. We then discuss the proposed image encryption algorithm in details, which is followed by a number of security analyses (key space analysis, sensitivity analysis, information entropy analysis and statistical analysis). The comparison with the most recently reported chaos-based image encryption algorithms indicates that the algorithm proposed in this chapter has a better security performance. Finally, we conclude the chapter with possible future work and application prospects of the chaotic cellular neural network in other information assurance and security areas.
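
To make the chaos-for-cryptography idea concrete, here is a deliberately toy stream cipher driven by the logistic map. It is not the chapter's cellular-neural-network algorithm and is not secure for real use; it only illustrates the extreme key sensitivity that motivates chaos-based designs.

```python
# Toy chaos-based stream cipher: the logistic map x -> r*x*(1-x) in its
# chaotic regime (r = 3.99) generates a key-dependent byte stream that is
# XORed with the image bytes. Illustration only -- NOT a secure cipher and
# NOT the chapter's algorithm.
def chaotic_keystream(x0, r, n, burn_in=100):
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_bytes(data, stream):
    return bytes(d ^ s for d, s in zip(data, stream))

image = bytes(range(64))                       # stand-in for pixel data
key = chaotic_keystream(x0=0.3141, r=3.99, n=len(image))
cipher = xor_bytes(image, key)
plain = xor_bytes(cipher, key)                 # XOR is its own inverse

# Key sensitivity: a 1e-9 change in x0 yields a very different keystream.
key2 = chaotic_keystream(x0=0.3141 + 1e-9, r=3.99, n=len(image))
```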

  4. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market has high-profit and high-risk features, so analysis and prediction of the stock market have long attracted attention. The stock price trend is a complex nonlinear function, so the price has a certain degree of predictability. This article mainly uses an improved BP neural network (BPNN) to set up the stock market prediction model, and…
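
A minimal BP setup of the kind the abstract describes can be sketched as follows; the price series, window length, and network size are invented for illustration and are not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily closing prices (a smooth cycle plus noise); the model
# predicts the next price from a window of the previous five.
t = np.arange(300)
price = 10 + np.sin(t / 10.0) + 0.02 * rng.standard_normal(t.size)

window = 5
X = np.array([price[i:i + window] for i in range(len(price) - window)])
y = price[window:]

# Normalize inputs and targets -- standard practice before BP training.
Xn = (X - X.mean()) / X.std()
yn = (y - y.mean()) / y.std()

# One-hidden-layer BP network: tanh hidden units, linear output.
H = 8
W1 = 0.1 * rng.standard_normal((window, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal(H);           b2 = 0.0

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(Xn)
loss0 = float(np.mean((out0 - yn) ** 2))

lr = 0.05
for _ in range(500):
    h, out = forward(Xn)
    err = out - yn                       # dLoss/dout (factor 2 absorbed in lr)
    gW2 = h.T @ err / len(yn)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)  # backprop through tanh
    W1 -= lr * Xn.T @ dh / len(yn); b1 -= lr * dh.mean(axis=0)
    W2 -= lr * gW2;                 b2 -= lr * gb2

_, out1 = forward(Xn)
loss1 = float(np.mean((out1 - yn) ** 2))
```

Training drives the mean squared error well below its starting value, which is all this sketch claims; real market data would of course be far less forgiving.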

  5. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
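
The core trick, treating the membrane potential as a differentiable signal and replacing the spike nonlinearity's derivative with a smooth surrogate, can be illustrated with a single leaky integrate-and-fire unit. This is a schematic of the idea only, not the paper's implementation; the time constant, threshold, and surrogate shape below are arbitrary choices.

```python
import numpy as np

def lif_forward(inputs, tau=0.9, v_th=1.0):
    """Leaky integrate-and-fire forward pass. The membrane potential is an
    ordinary differentiable signal; only the threshold is discontinuous."""
    v, spikes, trace = 0.0, [], []
    for x in inputs:
        v = tau * v + x              # leaky integration of the input
        s = 1.0 if v >= v_th else 0.0
        if s:
            v -= v_th                # reset by subtraction
        spikes.append(s)
        trace.append(v)
    return np.array(spikes), np.array(trace)

def surrogate_grad(v, v_th=1.0, beta=5.0):
    """Backward-pass replacement for d(spike)/d(v): the derivative of a
    fast sigmoid centred on the threshold, so the discontinuity at spike
    time is smoothed over instead of blocking gradient flow."""
    z = beta * (v - v_th)
    s = 1.0 / (1.0 + np.exp(-z))
    return beta * s * (1.0 - s)

inputs = np.array([0.3, 0.3, 0.6, 0.1, 0.9, 0.2])
spikes, trace = lif_forward(inputs)
```

The surrogate peaks exactly at the threshold, so gradients flow most strongly through near-threshold events, which is what lets standard backpropagation run on spike trains.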

  6. Neutron spectrometry with artificial neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A.; Iniguez de la Torre Bayo, M.P.; Barquero, R.; Arteaga A, T.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions as well as few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of Artificial Neural Networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
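
The fold-then-learn-the-inverse pipeline can be miniaturized as below. The response matrix, the four-shape spectrum family, and the closed-form least-squares fit (used here instead of iterative network training, for brevity) are all invented stand-ins for the UTA4 matrix, the 129 spectra, and the backpropagation step of the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

groups, spheres = 12, 8
# Invented response matrix (rows: spheres, columns: energy groups) and a
# four-shape family of smooth, non-negative 'spectra'.
R = rng.uniform(0.1, 1.0, size=(spheres, groups))
basis = np.abs([np.sin(np.linspace(0.0, np.pi * k, groups))
                for k in range(1, 5)])

train_w = rng.uniform(0.2, 1.0, size=(100, 4))
test_w = rng.uniform(0.2, 1.0, size=(20, 4))
train_s, test_s = train_w @ basis, test_w @ basis   # spectra
train_c, test_c = train_s @ R.T, test_s @ R.T       # simulated count rates

# A single linear layer mapping count rates -> spectrum. For brevity it is
# fitted in closed form by least squares rather than by iterative
# backpropagation; the input/output pairing mirrors the training setup
# described in the abstract.
W, *_ = np.linalg.lstsq(train_c, train_s, rcond=None)

unfolded = test_c @ W
rel_err = float(np.linalg.norm(unfolded - test_s) / np.linalg.norm(test_s))
```

Because these toy spectra live in a low-dimensional family, the learned inverse recovers held-out spectra essentially exactly; real unfolding is ill-conditioned precisely because real spectra do not.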

  7. Artificial neural networks in neutron dosimetry

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A.; Gallego, E.; Lorente, A.

    2005-01-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  8. Separable explanations of neural network decisions

    DEFF Research Database (Denmark)

    Rieger, Laura

    2017-01-01

    Deep Taylor Decomposition is a method used to explain neural network decisions. When applying this method to non-dominant classifications, the resulting explanation does not reflect important features for the chosen classification. We propose that this is caused by the dense layers and propose...

  9. Neural Correlates of Affective Influence on Choice

    Science.gov (United States)

    Piech, Richard M.; Lewis, Jade; Parkinson, Caroline H.; Owen, Adrian M.; Roberts, Angela C.; Downing, Paul E.; Parkinson, John A.

    2010-01-01

    Making the right choice depends crucially on the accurate valuation of the available options in the light of current needs and goals of an individual. Thus, the valuation of identical options can vary considerably with motivational context. The present study investigated the neural structures underlying context dependent evaluation. We instructed…

  10. Neural Control of the Lower Urinary Tract

    Science.gov (United States)

    de Groat, William C.; Griffiths, Derek; Yoshimura, Naoki

    2015-01-01

    This article summarizes anatomical, neurophysiological, pharmacological, and brain imaging studies in humans and animals that have provided insights into the neural circuitry and neurotransmitter mechanisms controlling the lower urinary tract. The functions of the lower urinary tract to store and periodically eliminate urine are regulated by a complex neural control system in the brain, spinal cord, and peripheral autonomic ganglia that coordinates the activity of smooth and striated muscles of the bladder and urethral outlet. The neural control of micturition is organized as a hierarchical system in which spinal storage mechanisms are in turn regulated by circuitry in the rostral brain stem that initiates reflex voiding. Input from the forebrain triggers voluntary voiding by modulating the brain stem circuitry. Many neural circuits controlling the lower urinary tract exhibit switch-like patterns of activity that turn on and off in an all-or-none manner. The major component of the micturition switching circuit is a spinobulbospinal parasympathetic reflex pathway that has essential connections in the periaqueductal gray and pontine micturition center. A computer model of this circuit that mimics the switching functions of the bladder and urethra at the onset of micturition is described. Micturition occurs involuntarily in infants and young children until the age of 3 to 5 years, after which it is regulated voluntarily. Diseases or injuries of the nervous system in adults can cause the re-emergence of involuntary micturition, leading to urinary incontinence. Neuroplasticity underlying these developmental and pathological changes in voiding function is discussed. PMID:25589273
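
The all-or-none, switch-like behaviour the article highlights can be caricatured with a two-threshold (hysteresis) model: storage fills the bladder until a high threshold flips the circuit into voiding, which persists until a low threshold flips it back. This is our own toy illustration, not the computer model described in the article, and all thresholds and rates are arbitrary.

```python
def simulate(steps, fill=1.0, void=8.0, on_at=50.0, off_at=2.0):
    """Hysteretic storage/voiding switch: volume rises during storage;
    voiding switches on at `on_at` and off again only near empty."""
    volume, voiding, trace = 0.0, False, []
    for _ in range(steps):
        if voiding:
            volume = max(0.0, volume - void)   # rapid elimination
            if volume <= off_at:
                voiding = False                # switch off near empty
        else:
            volume += fill                     # slow storage
            if volume >= on_at:
                voiding = True                 # all-or-none onset
        trace.append(voiding)
    return trace

trace = simulate(120)
```

The two distinct thresholds are what make the switch all-or-none: small fluctuations around either threshold cannot toggle the state back and forth.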

  11. Vibration monitoring with artificial neural networks

    International Nuclear Information System (INIS)

    Alguindigue, I.

    1991-01-01

    Vibration monitoring of components in nuclear power plants has been used for a number of years. This technique involves the analysis of vibration data coming from vital components of the plant to detect features which reflect the operational state of machinery. The analysis leads to the identification of potential failures and their causes, and makes it possible to perform efficient preventive maintenance. Early detection is important because it can decrease the probability of catastrophic failures, reduce forced outages, maximize utilization of available assets, increase the life of the plant, and reduce maintenance costs. This paper documents our work on the design of a vibration monitoring methodology based on neural network technology. This technology provides an attractive complement to traditional vibration analysis because of the potential of neural networks to operate in real-time mode and to handle data which may be distorted or noisy. Our efforts have been concentrated on the analysis and classification of vibration signatures collected from operating machinery. Two neural network algorithms were used in our project: the Recirculation algorithm for data compression and the Backpropagation algorithm to perform the actual classification of the patterns. Although this project is in the early stages of development, it indicates that neural networks may provide a viable methodology for monitoring and diagnostics of vibrating components. Our results to date are very encouraging.
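
The two-stage design, compression followed by classification, can be sketched as follows. A principal-component projection stands in for the Recirculation network's learned compressed code and a nearest-centroid rule for the backpropagation classifier; the "vibration signatures" are synthetic spectra, not plant data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 'vibration signatures': 64-bin spectra with a peak whose
# position depends on machine condition.
def signature(peak, n=50):
    f = np.arange(64)
    base = np.exp(-0.5 * ((f - peak) / 3.0) ** 2)
    return base + 0.05 * rng.standard_normal((n, 64))

healthy, faulty = signature(15), signature(40)
X = np.vstack([healthy, faulty])
y = np.array([0] * 50 + [1] * 50)

# Compression stage: project onto the top principal components -- a linear
# stand-in for the Recirculation network's learned compressed code.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
codes = Xc @ Vt[:4].T                     # 64 features -> 4-dimensional code

# Classification stage: nearest class centroid in code space, standing in
# for the Backpropagation classifier of the abstract.
c0 = codes[y == 0].mean(axis=0)
c1 = codes[y == 1].mean(axis=0)
pred = (np.linalg.norm(codes - c1, axis=1)
        < np.linalg.norm(codes - c0, axis=1)).astype(int)
accuracy = float((pred == y).mean())
```

Compressing first keeps the classifier small and fast, which matters for the real-time operation the abstract emphasizes.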

  12. Towards semen quality assessment using neural networks

    DEFF Research Database (Denmark)

    Linneberg, Christian; Salamon, P.; Svarer, C.

    1994-01-01

    The paper presents the methodology and results from a neural net based classification of human sperm head morphology. The methodology uses a preprocessing scheme in which invariant Fourier descriptors are lumped into “energy” bands. The resulting networks are pruned using optimal brain damage. Pe...

  13. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    …of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron… Engelbrecht A P, Cloete I, Geldenhuys J, Zurada J M 1995 Automatic scaling using gamma learning for feedforward neural networks. From natural to artificial computing.

  14. Neutron spectrometry using artificial neural networks

    International Nuclear Information System (INIS)

    Vega-Carrillo, Hector Rene; Martin Hernandez-Davila, Victor; Manzanares-Acuna, Eduardo; Mercado Sanchez, Gema A.; Pilar Iniguez de la Torre, Maria; Barquero, Raquel; Palacios, Francisco; Mendez Villafane, Roberto; Arteaga Arteaga, Tarcicio; Manuel Ortiz Rodriguez, Jose

    2006-01-01

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions as well as few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing were carried out using the Matlab® program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem.

  15. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using…
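
The setting is easy to reproduce numerically: a perceptron trained online from single examples while the target vector slowly rotates. The drift size, learning rate, and dimension below are arbitrary choices for illustration, not the values analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

N = 20
teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)
student = np.zeros(N)

overlaps = []
for _ in range(3000):
    # Random drift: perturb the teacher slightly and renormalize, so the
    # target rule performs a slow random walk on the unit sphere.
    drift = 0.002 * rng.standard_normal(N)
    teacher = teacher + drift - (drift @ teacher) * teacher
    teacher /= np.linalg.norm(teacher)

    # Online learning: one random example per time step, with a perceptron
    # update only on misclassification.
    x = rng.standard_normal(N)
    label = np.sign(teacher @ x)
    if np.sign(student @ x) != label:
        student += 0.1 * label * x

    norm = np.linalg.norm(student)
    overlaps.append(float(student @ teacher) / norm if norm > 0 else 0.0)

late_overlap = float(np.mean(overlaps[-500:]))
```

Because the rule keeps moving, the student never converges exactly; instead the teacher-student overlap settles at a finite tracking value, which is the quantity such analyses characterize.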

  16. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how…

  17. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    This article presents a novel technique to distinguish between magnetizing inrush current and internal fault current of power transformer. An algorithm has been developed around the theme of the conventional differential protection method in which parallel combination of Probabilistic Neural Network (PNN) and Power ...

  18. Artificial astrocytes improve neural network performance.

    Directory of Open Access Journals (Sweden)

    Ana B Porto-Pazos

    Full Text Available Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.
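
A drastically simplified "neuron-glia" interaction can be sketched to convey the mechanism: an astrocyte element that monitors sustained neuronal firing and transiently potentiates the neuron's incoming weights. This is our own illustration of the general idea, not the authors' published update rule; the threshold and boost factor are invented.

```python
import numpy as np

def astro_step(active, counter, threshold=3):
    """Count consecutive activations; the astrocyte 'fires' once activity
    has been sustained for `threshold` steps."""
    counter = counter + 1 if active else 0
    return counter, counter >= threshold

inputs = np.array([0.4, 0.5, 0.3])
weights = np.array([1.0, 1.0, 1.0])
counter, boosts = 0, []
for _ in range(6):
    active = float(inputs @ weights) > 1.0   # neuron fires this step
    counter, astro_fires = astro_step(active, counter)
    if astro_fires:
        weights = weights * 1.25             # transient synaptic boost
    boosts.append(astro_fires)
```

The astrocyte acts on a slower timescale than the neuron (it needs several consecutive activations before modulating the weights), which is the qualitative property the abstract attributes to astrocyte-neuron communication.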

  19. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  20. Tinnitus and neural plasticity of the brain

    NARCIS (Netherlands)

    Bartels, Hilke; Staal, Michiel J.; Albers, Frans W. J.

    Objective: To describe the current ideas about the manifestations of neural plasticity in generating tinnitus. Data Sources: Recently published source articles were identified using MEDLINE, PubMed, and Cochrane Library according to the key words mentioned below. Study Selection: Review articles and

  1. Neural correlates of HIV risk feelings.

    Science.gov (United States)

    Häcker, Frank E K; Schmälzle, Ralf; Renner, Britta; Schupp, Harald T

    2015-04-01

    Field studies on HIV risk perception suggest that people rely on impressions they have about the safety of their partner. The present fMRI study investigated the neural correlates of the intuitive perception of risk. First, during an implicit condition, participants viewed a series of unacquainted persons and performed a task unrelated to HIV risk. In the following explicit condition, participants evaluated the HIV risk for each presented person. Contrasting responses for high and low HIV risk revealed that risky stimuli evoked enhanced activity in the anterior insula and medial prefrontal regions, which are involved in salience processing and frequently activated by threatening and negative affect-related stimuli. Importantly, neural regions responding to explicit HIV risk judgments were also enhanced in the implicit condition, suggesting a neural mechanism for intuitive impressions of riskiness. Overall, these findings suggest the saliency network as neural correlate for the intuitive sensing of risk. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  2. WATER DEMAND PREDICTION USING ARTIFICIAL NEURAL ...

    African Journals Online (AJOL)

    This paper presents Hourly water demand prediction at the demand nodes of a water distribution network using NeuNet Pro 2.3 neural network software and the monitoring and control of water distribution using supervisory control. The case study is the Laminga Water Treatment Plant and its water distribution network, Jos.

  3. Neural modeling of prefrontal executive function

    Energy Technology Data Exchange (ETDEWEB)

    Levine, D.S. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    Brain executive function is based on a distributed system whereby prefrontal cortex is interconnected with other cortical and subcortical loci. Executive function is divided roughly into three interacting parts: affective guidance of responses; linkage among working memory representations; and forming complex behavioral schemata. Neural network models of each of these parts are reviewed and fit into a preliminary theoretical framework.

  4. Stability and Adaptation of Neural Networks

    Science.gov (United States)

    1990-11-02

    "Recognition," Proc. European Conference on Neural Networks, Prague, Czechoslovakia, September 1990. 3.0 NEXT-YEAR RESEARCH OBJECTIVES In the third

  5. Rodent Zic Genes in Neural Network Wiring.

    Science.gov (United States)

    Herrera, Eloísa

    2018-01-01

    The formation of the nervous system is a multistep process that yields a mature brain. Failure in any of the steps of this process may cause brain malfunction. In the early stages of embryonic development, neural progenitors quickly proliferate and then, at a specific moment, differentiate into neurons or glia. Once they become postmitotic neurons, they migrate to their final destinations and begin to extend their axons to connect with other neurons, sometimes located in quite distant regions, to establish different neural circuits. During the last decade, it has become evident that Zic genes, in addition to playing important roles in early development (e.g., gastrulation and neural tube closure), are involved in different processes of late brain development, such as neuronal migration, axon guidance, and refinement of axon terminals. ZIC proteins are therefore essential for the proper wiring and connectivity of the brain. In this chapter, we review our current knowledge of the role of Zic genes in the late stages of neural circuit formation.

  6. A locality aware convolutional neural networks accelerator

    NARCIS (Netherlands)

    Shi, R.; Xu, Z.; Sun, Z.; Peemen, M.C.J.; Li, A.; Corporaal, H.; Wu, D.

    2015-01-01

    The advantages of Convolutional Neural Networks (CNNs) with respect to traditional methods for visual pattern recognition have changed the field of machine vision. The main issue that hinders broad adoption of this technique is the massive computing workload in CNN that prevents real-time

  7. Improving Neural Recording Technology at the Nanoscale

    Science.gov (United States)

    Ferguson, John Eric

    Neural recording electrodes are widely used to study normal brain function (e.g., learning, memory, and sensation) and abnormal brain function (e.g., epilepsy, addiction, and depression) and to interface with the nervous system for neuroprosthetics. With a deep understanding of the electrode interface at the nanoscale and the use of novel nanofabrication processes, neural recording electrodes can be designed that surpass previous limits and enable new applications. In this thesis, I will discuss three projects. In the first project, we created an ultralow-impedance electrode coating by controlling the nanoscale texture of electrode surfaces. In the second project, we developed a novel nanowire electrode for long-term intracellular recordings. In the third project, we created a means of wirelessly communicating with ultra-miniature, implantable neural recording devices. The techniques developed for these projects offer significant improvements in the quality of neural recordings. They can also open the door to new types of experiments and medical devices, which can lead to a better understanding of the brain and can enable novel and improved tools for clinical applications.

  8. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. 
As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence…

  9. Construction of a Piezoresistive Neural Sensor Array

    Science.gov (United States)

    Carlson, W. B.; Schulze, W. A.; Pilgrim, P. M.

    1996-01-01

    The construction of a piezoresistive-piezoelectric sensor (or actuator) array is proposed using 'neural' connectivity for signal recognition and possible actuation functions. A closer integration of the sensor and decision functions is necessary in order to achieve intrinsic identification within the sensor. A neural sensor is the next logical step in development of truly 'intelligent' arrays. This proposal will integrate 1-3 polymer piezoresistors and MLC electroceramic devices for applications involving acoustic identification. The 'intelligent' piezoresistor-piezoelectric system incorporates printed resistors, composite resistors, and a feedback for the resetting of resistances. A model of a design is proposed in order to simulate electromechanical resistor interactions. Optimizing the sensor geometry to improve device reliability, training, and signal-identification capabilities is the goal of this work. At present, studies predict performance of a 'smart' device with a significant control of 'effective' compliance over a narrow pressure range due to a piezoresistor percolation threshold. An interesting possibility may be to use an array of control elements to shift the threshold function in order to change the level of resistance in a neural sensor array for identification or actuation applications. The proposed design employs elements of: (1) conductor-loaded polymers for a 'fast' RC time constant response; and (2) multilayer ceramics for actuation or sensing and shifting of resistance in the polymer. Other material possibilities also exist using magnetoresistive layered systems for shifting the resistance. It is proposed to use a neural net configuration to test and to help study the possible changes required in the materials design of these devices. 
Numerical design models utilize electromechanical elements, in conjunction with structural elements in order to simulate piezoresistively controlled actuators and changes in resistance of sensors

  10. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
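
The learn-the-integrator's-error idea can be demonstrated end to end on the linear test equation. The "network" here is reduced to a single least-squares weight, so it is a deliberately minimal stand-in for the NASA backpropagation programs the abstract mentions, but the correction logic is the same: predict the local error from the state and add it back.

```python
import math

# Model problem: dy/dt = -y, with exact solution y(t) = y0 * exp(-t).
h = 0.1

def euler_step(y):
    """One explicit Euler step, which under-damps the true decay."""
    return y * (1.0 - h)

# Training pairs: state y versus the true local error of one step.
ys = [0.1 * k for k in range(1, 21)]
errors = [y * math.exp(-h) - euler_step(y) for y in ys]

# A one-weight 'network' fitted by least squares: error ~ a * y.
a = sum(y * e for y, e in zip(ys, errors)) / sum(y * y for y in ys)

def corrected_step(y):
    """Euler step plus the learned error correction."""
    return euler_step(y) + a * y

# Integrate y(0) = 1 to t = 1 with and without the correction.
y_plain, y_corr = 1.0, 1.0
for _ in range(10):
    y_plain = euler_step(y_plain)
    y_corr = corrected_step(y_corr)

exact = math.exp(-1.0)
err_plain = abs(y_plain - exact)
err_corr = abs(y_corr - exact)
```

For this linear problem the local error is exactly proportional to the state, so the fitted correction removes it almost entirely; a genuine neural network earns its keep on nonlinear problems, such as the molecular dynamics model of the abstract, where no such closed form exists.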

  11. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks have been discussed in detail through illustrative examples, methods and generic applications. The extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank.

  12. Neural computation and the computational theory of cognition.

    Science.gov (United States)

    Piccinini, Gualtiero; Bahar, Sonya

    2013-04-01

    We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism: neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation. Copyright © 2012 Cognitive Science Society, Inc.

  13. Neural crest cells: from developmental biology to clinical interventions.

    Science.gov (United States)

    Noisa, Parinya; Raivio, Taneli

    2014-09-01

    Neural crest cells are multipotent cells, which are specified in the embryonic ectoderm at the border of the neural plate and epidermis during early development by the interplay of extrinsic stimuli and intrinsic factors. Neural crest cells are capable of differentiating into various somatic cell types, including melanocytes, craniofacial cartilage and bone, smooth muscle, and peripheral nervous cells, which supports their promise for cell therapy. In this work, we provide a comprehensive review of wide aspects of neural crest cells, from their developmental biology to their applicability in medical research. We provide a simplified model of neural crest cell development and highlight the key external stimuli and intrinsic regulators that determine the neural crest cell fate. Defects of neural crest cell development leading to several human disorders are also mentioned, with an emphasis on using human induced pluripotent stem cells to model neurocristopathic syndromes. © 2014 Wiley Periodicals, Inc.

  14. Introduction to spiking neural networks: Information processing, learning and applications.

    Science.gov (United States)

    Ponulak, Filip; Kasinski, Andrzej

    2011-01-01

    The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
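
    A minimal leaky integrate-and-fire simulation illustrates the spike-timing code the survey describes: the information sits in when the threshold crossings occur, not in a smoothed firing rate. The parameter values below are generic textbook choices, not taken from the paper.

```python
# Leaky integrate-and-fire neuron: the membrane potential v integrates the
# input current, leaks back toward rest, and a spike is the *time* at which
# v crosses threshold (after which v is reset).
tau, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0   # ms, mV
dt, R, I = 0.1, 1.0, 20.0   # step (ms), membrane resistance, input current

v = v_rest
spike_times = []
for step in range(int(100 / dt)):            # simulate 100 ms
    v += dt * (-(v - v_rest) + R * I) / tau  # forward-Euler membrane update
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset
```

    With a constant suprathreshold current the model fires at regular intervals; time-varying input would move the spike times, which is exactly the degree of freedom spiking networks compute with.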

  15. Differentiation state determines neural effects on microvascular endothelial cells

    International Nuclear Information System (INIS)

    Muffley, Lara A.; Pan, Shin-Chen; Smith, Andria N.; Ga, Maricar; Hocking, Anne M.; Gibran, Nicole S.

    2012-01-01

    Growing evidence indicates that nerves and capillaries interact paracrinely in uninjured skin and cutaneous wounds. Although mature neurons are the predominant neural cell in the skin, neural progenitor cells have also been detected in uninjured adult skin. The aim of this study was to characterize differential paracrine effects of neural progenitor cells and mature sensory neurons on dermal microvascular endothelial cells. Our results suggest that neural progenitor cells and mature sensory neurons have unique secretory profiles and distinct effects on dermal microvascular endothelial cell proliferation, migration, and nitric oxide production. Neural progenitor cells and dorsal root ganglion neurons secrete different proteins related to angiogenesis. Specific to neural progenitor cells were dipeptidyl peptidase-4, IGFBP-2, pentraxin-3, serpin f1, TIMP-1, TIMP-4 and VEGF. In contrast, endostatin, FGF-1, MCP-1 and thrombospondin-2 were specific to dorsal root ganglion neurons. Microvascular endothelial cell proliferation was inhibited by dorsal root ganglion neurons but unaffected by neural progenitor cells. In contrast, microvascular endothelial cell migration in a scratch wound assay was inhibited by neural progenitor cells and unaffected by dorsal root ganglion neurons. In addition, nitric oxide production by microvascular endothelial cells was increased by dorsal root ganglion neurons but unaffected by neural progenitor cells. -- Highlights: ► Dorsal root ganglion neurons, not neural progenitor cells, regulate microvascular endothelial cell proliferation. ► Neural progenitor cells, not dorsal root ganglion neurons, regulate microvascular endothelial cell migration. ► Neural progenitor cells and dorsal root ganglion neurons do not affect microvascular endothelial tube formation. ► Dorsal root ganglion neurons, not neural progenitor cells, regulate microvascular endothelial cell production of nitric oxide. ► Neural progenitor cells and dorsal root

  16. Temporal-pattern learning in neural models

    CERN Document Server

    Genís, Carme Torras

    1985-01-01

    While the ability of animals to learn rhythms is an unquestionable fact, the underlying neurophysiological mechanisms are still no more than conjectures. This monograph explores the requirements of such mechanisms, reviews those previously proposed and postulates a new one based on a direct electric coding of stimulation frequencies. Experimental support for the option taken is provided both at the single neuron and neural network levels. More specifically, the material presented divides naturally into four parts: a description of the experimental and theoretical framework where this work becomes meaningful (Chapter 2), a detailed specification of the pacemaker neuron model proposed together with its validation through simulation (Chapter 3), an analytic study of the behavior of this model when submitted to rhythmic stimulation (Chapter 4) and a description of the neural network model proposed for learning, together with an analysis of the simulation results obtained when varying several factors r...

  17. Convergent dynamics for multistable delayed neural networks

    International Nuclear Information System (INIS)

    Shih, Chih-Wen; Tseng, Jui-Pin

    2008-01-01

    This investigation aims at developing a methodology to establish convergence of dynamics for delayed neural network systems with multiple stable equilibria. The present approach is general and can be applied to several network models. We take the Hopfield-type neural networks with both instantaneous and delayed feedbacks to illustrate the idea. We shall construct the complete dynamical scenario which comprises exactly 2^n stable equilibria and exactly (3^n − 2^n) unstable equilibria for the n-neuron network. In addition, it is shown that every solution of the system converges to one of the equilibria as time tends to infinity. The approach is based on employing the geometrical structure of the network system. Positively invariant sets and componentwise dynamical properties are derived under the geometrical configuration. An iteration scheme is subsequently designed to confirm the convergence of dynamics for the system. Two examples with numerical simulations are arranged to illustrate the present theory.
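
    A two-neuron toy version of such a multistable network can be simulated directly: with strong self-excitation each neuron is bistable, giving the 2^n (here four) stable equilibria, and every trajectory settles into one of them. The weights and the forward-Euler scheme below are illustrative assumptions; the delayed feedback terms of the paper are omitted for brevity.

```python
import math

# Hopfield-type dynamics dx_i/dt = -x_i + sum_j w[i][j] * tanh(x_j) with
# strong self-excitation (weights assumed for illustration).
w = [[2.0, 0.1], [0.1, 2.0]]

def deriv(x):
    return [-x[i] + sum(w[i][j] * math.tanh(x[j]) for j in range(2))
            for i in range(2)]

def simulate(x0, dt=0.01, steps=5000):
    """Forward-Euler integration from initial state x0."""
    x = list(x0)
    for _ in range(steps):
        d = deriv(x)
        x = [x[i] + dt * d[i] for i in range(2)]
    return x

# Trajectories started in each orthant settle into four distinct equilibria,
# recorded here by the sign pattern of the final state.
attractors = {tuple(1 if v > 0 else -1 for v in simulate(x0))
              for x0 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]}
```

    The count of distinct sign patterns matches 2^n for n = 2, and the vanishing derivative at the end of a run confirms convergence to an equilibrium rather than a cycle.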

  18. Noise Analysis studies with neural networks

    International Nuclear Information System (INIS)

    Seker, S.; Ciftcioglu, O.

    1996-01-01

    This work addresses noise analysis studies with neural networks. Stochastic signals at the input of the network are used to obtain an algorithmic multivariate stochastic signal model. To this end, lattice modeling of a stochastic signal is performed to obtain backward residual noise sources that are uncorrelated among themselves. These are applied, together with an additional input, to the network to obtain an algorithmic model used for early-failure signal detection in plant monitoring. The additional input provides the information the network needs to minimize the difference between the signal and the network's one-step-ahead prediction. A stochastic algorithm is used for training, in which the errors reflecting the measurement error during training are also modelled, so that fast and consistent convergence of the network's weights is obtained. The lattice structure coupled to the neural network is investigated with measured signals from an actual power plant. (authors)

  19. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software as well as hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. Researcher present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  20. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as the outlier probability and regularization parameters. We suggest adapting the outlier probability and regularization parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...
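
    The modified likelihood can be written down compactly: with outlier probability eps, the observed label is assumed correct with probability 1 - eps and drawn uniformly over the C classes otherwise, which caps the loss that any single mislabeled point can contribute. The sketch below is a minimal reading of that idea, not the authors' implementation; the class probabilities are hypothetical.

```python
import math

def robust_nll(probs, target, eps):
    """Negative log-likelihood with outlier probability eps: the label is
    taken to be correct with probability 1 - eps, and drawn uniformly over
    the C classes otherwise, so one bad label cannot dominate the cost."""
    c = len(probs)
    return -math.log((1 - eps) * probs[target] + eps / c)

# A confidently wrong prediction on a (possibly mislabeled) point:
probs = [0.98, 0.01, 0.01]
standard = robust_nll(probs, target=1, eps=0.0)   # ordinary cross-entropy
robust = robust_nll(probs, target=1, eps=0.05)    # outlier probability 5%
```

    The robust loss is strictly smaller on this outlier, and it is bounded above by -log(eps / C) regardless of how confident the wrong prediction is, which is what tames the gradient from outliers during training.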

  1. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However, their role in large-scale sequence labelling systems has so far been auxiliary. The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...
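
    The connectionist temporal classification output layer mentioned above relies on a many-to-one collapsing map from framewise label paths to target sequences: runs of repeated symbols are merged, then blanks are removed. A minimal sketch of that map (the blank symbol and example strings are assumptions for illustration):

```python
from itertools import groupby

BLANK = "-"

def ctc_collapse(path):
    """CTC's many-to-one map B: merge runs of repeated symbols, then
    delete blanks; repeated output letters need a blank between them."""
    return "".join(k for k, _ in groupby(path) if k != BLANK)

collapsed = ctc_collapse("-hh-e-ll-lo")
```

    Because many framewise paths collapse to the same target, training sums the probability of all of them, which is what removes the need for prior segmentation.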

  2. Evaluating neural networks and artificial intelligence systems

    Science.gov (United States)

    Alberts, David S.

    1994-02-01

    Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment decisions. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.

  3. Investment Valuation Analysis with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Hüseyin İNCE

    2017-07-01

    This paper shows that discounted cash flow and net present value, which are traditional investment valuation models, can be combined with artificial neural network forecasting. The main inputs for the valuation models, such as revenue, costs, capital expenditure, and their growth rates, are heavily related to sector dynamics and macroeconomics. The growth rates of those inputs are related to inflation and exchange rates. Therefore, predicting inflation and exchange rates is a critical issue for the valuation output. In this paper, the Turkish economy's inflation rate and the exchange rate of USD/TRY are forecast by artificial neural networks and fed into the discounted cash flow model. Finally, the results are benchmarked against conventional practices.
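
    The valuation side of the pipeline reduces to a standard discounted cash flow sum once a growth forecast is available; in the paper, that forecast comes from the neural network. The figures below are hypothetical placeholders, not values from the paper:

```python
def dcf_value(cash_flow, growth, discount, years):
    """Sum of projected cash flows, grown at `growth` (the quantity a
    neural network would forecast) and discounted back at `discount`."""
    return sum(cash_flow * (1 + growth) ** t / (1 + discount) ** t
               for t in range(1, years + 1))

# Hypothetical figures: base cash flow 100, 10% forecast growth, 20% discount.
value = dcf_value(100.0, 0.10, 0.20, years=5)
```

    Since the growth forecast enters every term of the sum, errors in the inflation or FX forecast compound across the horizon, which is why the paper treats the forecasting step as critical.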

  4. Accident scenario diagnostics with neural networks

    International Nuclear Information System (INIS)

    Guo, Z.

    1992-01-01

    Nuclear power plants are very complex systems. The diagnosis of transients or accident conditions is very difficult because a large amount of information, which is often noisy, intermittent, or even incomplete, needs to be processed in real time. To demonstrate their potential application to nuclear power plants, neural networks are used to monitor the accident scenarios simulated by the training simulator of TVA's Watts Bar Nuclear Power Plant. A self-organizing network is used to compress the original data to reduce the total number of training patterns. Different accident scenarios are closely related to different key parameters which distinguish one accident scenario from another. Therefore, the accident scenarios can be monitored by a set of small neural networks, called modular networks, each of which monitors only one assigned accident scenario, to obtain fast training and recall. Sensitivity analysis is applied to select proper input variables for the modular networks.

  5. Spiking neural P systems with multiple channels.

    Science.gov (United States)

    Peng, Hong; Yang, Jinyu; Wang, Jun; Wang, Tao; Sun, Zhang; Song, Xiaoxiao; Luo, Xiaohui; Huang, Xiangnian

    2017-11-01

    Spiking neural P systems (SNP systems, in short) are a class of distributed parallel computing systems inspired from the neurophysiological behavior of biological spiking neurons. In this paper, we investigate a new variant of SNP systems in which each neuron has one or more synaptic channels, called spiking neural P systems with multiple channels (SNP-MC systems, in short). The spiking rules with channel label are introduced to handle the firing mechanism of neurons, where the channel labels indicate synaptic channels of transmitting the generated spikes. The computation power of SNP-MC systems is investigated. Specifically, we prove that SNP-MC systems are Turing universal as both number generating and number accepting devices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. CONSTRUCTION COST PREDICTION USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Smita K Magdum

    2017-10-01

    Construction cost prediction is important for construction firms to compete and grow in the industry. Accurate construction cost prediction in the early stage of a project is important for project feasibility studies and successful completion. There are many factors that affect cost prediction. This paper presents construction cost prediction as a multiple regression model with the cost of six materials as independent variables. The objective of this paper is to develop neural network and multilayer perceptron based models for construction cost prediction. Different models of NN and MLP are developed with varying hidden layer sizes and hidden nodes. Four artificial neural network models and twelve multilayer perceptron models are compared. MLP and NN give better results than the statistical regression method. As compared to NN, MLP works better on the training dataset but fails on the testing dataset. Five activation functions are tested to identify a suitable function for the problem. The 'elu' transfer function gives better results than the other transfer functions.
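
    The statistical regression baseline the paper compares against can be sketched as ordinary least squares on six material-cost predictors. The data below are synthetic stand-ins generated for illustration; the paper's materials and prices are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: project cost as a linear function of six
# material costs plus noise.
true_w = np.array([2.0, 1.5, 3.0, 0.5, 1.0, 2.5])
X = rng.uniform(10, 100, size=(200, 6))
y = X @ true_w + rng.normal(0, 1.0, size=200)

# Ordinary least squares plays the role of the statistical regression
# baseline that the NN/MLP models are benchmarked against.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    On data that really is linear in the inputs, least squares recovers the coefficients almost exactly; the case for NN/MLP models rests on the nonlinear interactions real cost data exhibits.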

  7. Gas Classification Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNN in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of: six convolutional blocks, each block consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP). PMID:29316723

  8. A canonical neural mechanism for behavioral variability

    Science.gov (United States)

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-05-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these 'universal' statistics.

  9. Wavelet neural network load frequency controller

    International Nuclear Information System (INIS)

    Hemeida, Ashraf Mohamed

    2005-01-01

    This paper presents the feasibility of applying a wavelet neural network (WNN) approach for the load frequency controller (LFC) to damp the frequency oscillations of two-area power systems due to load disturbances. The present intelligent control system trains the wavelet neural network controller on line with adaptive learning rates, which are derived in the sense of a discrete-type Lyapunov stability theorem. The WNN controller is designed individually for each area. The proposed technique is applied successfully for a wide range of operating conditions. The time simulation results indicate its superiority and effectiveness over the conventional approach. The effects of considering the governor dead zone on the system performance are studied using the proposed controller and the conventional one.

  10. Neural Mechanisms of Selective Visual Attention.

    Science.gov (United States)

    Moore, Tirin; Zirnsak, Marc

    2017-01-03

    Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.

  11. A class of convergent neural network dynamics

    Science.gov (United States)

    Fiedler, Bernold; Gedeon, Tomáš

    1998-01-01

    We consider a class of systems of differential equations in Rn which exhibits convergent dynamics. We find a Lyapunov function and show that every bounded trajectory converges to the set of equilibria. Our result generalizes the results of Cohen and Grossberg (1983) for convergent neural networks. It replaces the symmetry assumption on the matrix of weights by the assumption on the structure of the connections in the neural network. We prove the convergence result also for a large class of Lotka-Volterra systems. These are naturally defined on the closed positive orthant. We show that there are no heteroclinic cycles on the boundary of the positive orthant for the systems in this class.
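
    For the classical symmetric special case covered by Cohen and Grossberg's result, the Lyapunov property is easy to verify numerically: under asynchronous threshold updates with symmetric weights and zero diagonal, the energy E(s) = -(1/2) s^T W s never increases along a trajectory. A small sketch with random weights (assumed for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random symmetric weights with zero diagonal: the setting in which
# E(s) = -1/2 s^T W s is a Lyapunov function for asynchronous updates.
n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)
energies = [energy(s)]
for _ in range(50):
    i = rng.integers(n)                     # update one neuron at a time
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0   # threshold (sign) update
    energies.append(energy(s))
```

    Each single-neuron update changes the energy by -(s_new - s_old) * h_i, which is never positive when s_new has the sign of the local field h_i; the contribution of the abstract is to obtain the same conclusion without the symmetry assumption, from the connection structure instead.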

  12. Neural decoding of visual imagery during sleep.

    Science.gov (United States)

    Horikawa, T; Tamaki, M; Miyawaki, Y; Kamitani, Y

    2013-05-03

    Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.

  13. Gas Classification Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNN in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of: six convolutional blocks, each block consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP).

  14. Iris Data Classification Using Quantum Neural Networks

    International Nuclear Information System (INIS)

    Sahni, Vishal; Patvardhan, C.

    2006-01-01

    Quantum computing is a novel paradigm that promises to be the future of computing. The performance of quantum algorithms has proved to be stunning. Artificial neural networks within the context of classical computation have been used for approximation and classification tasks with some success. This paper presents an idea of quantum neural networks along with the training algorithm and its convergence property. It synergizes the unique properties of quantum bits, or qubits, with the various techniques in vogue in neural networks. An example application to Fisher's Iris data set, a benchmark classification problem, has also been presented. The results obtained amply demonstrate the classification capabilities of the quantum neuron and give an idea of their promising capabilities.

  15. Evidence for a neural law of effect.

    Science.gov (United States)

    Athalye, Vivek R; Santos, Fernando J; Carmena, Jose M; Costa, Rui M

    2018-03-02

    Thorndike's law of effect states that actions that lead to reinforcements tend to be repeated more often. Accordingly, neural activity patterns leading to reinforcement are also reentered more frequently. Reinforcement relies on dopaminergic activity in the ventral tegmental area (VTA), and animals shape their behavior to receive dopaminergic stimulation. Seeking evidence for a neural law of effect, we found that mice learn to reenter more frequently motor cortical activity patterns that trigger optogenetic VTA self-stimulation. Learning was accompanied by gradual shaping of these patterns, with participating neurons progressively increasing and aligning their covariance to that of the target pattern. Motor cortex patterns that lead to phasic dopaminergic VTA activity are progressively reinforced and shaped, suggesting a mechanism by which animals select and shape actions to reliably achieve reinforcement. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  16. Crack identification by artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Hwu, C.B.; Liang, Y.C. [National Cheng Kung Univ., Tainan (Taiwan, Province of China). Inst. of Aeronaut. and Astronaut.

    1998-04-01

    In this paper, one of the most popular artificial neural networks, the back-propagation neural network (BPN), is employed to achieve on-line identification of a crack embedded in a composite plate. Unlike the usual dynamic estimates, the parameters used for the present crack identification are the strains of static deformation. Crack effects are localized and may not be clearly reflected in boundary information, especially when the data come from static deformation only. To remedy this, we use data from multiple loading modes, which may include the opening, shearing and tearing modes. The results show that our method for crack identification remains stable and accurate regardless of how far the test data lie from the training set. (orig.) 8 refs.

  17. Time series prediction: statistical and neural techniques

    Science.gov (United States)

    Zahirniak, Daniel R.; DeSimio, Martin P.

    1996-03-01

    In this paper we compare the performance of nonlinear neural network techniques to those of linear filtering techniques in the prediction of time series. Specifically, we compare the results of using the nonlinear systems, known as multilayer perceptron and radial basis function neural networks, with the results obtained using the conventional linear Wiener filter, Kalman filter and Widrow-Hoff adaptive filter in predicting future values of stationary and non-stationary time series. Our results indicate the performance of each type of system is heavily dependent upon the form of the time series being predicted and the size of the system used. In particular, the linear filters perform adequately for linear or near linear processes while the nonlinear systems perform better for nonlinear processes. Since the linear systems take much less time to be developed, they should be tried prior to using the nonlinear systems when the linearity properties of the time series process are unknown.

  18. The neural bases for valuing social equality.

    Science.gov (United States)

    Aoki, Ryuta; Yomogida, Yukihito; Matsumoto, Kenji

    2015-01-01

    The neural basis of how humans value and pursue social equality has become a major topic in social neuroscience research. Although recent studies have identified a set of brain regions and possible mechanisms that are involved in the neural processing of equality of outcome between individuals, how the human brain processes equality of opportunity remains unknown. In this review article, first we describe the importance of the distinction between equality of outcome and equality of opportunity, which has been emphasized in philosophy and economics. Next, we discuss possible approaches for empirical characterization of human valuation of equality of opportunity vs. equality of outcome. Understanding how these two concepts are distinct and interact with each other may provide a better explanation of complex human behaviors concerning fairness and social equality. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  19. Self-organized critical neural networks

    International Nuclear Information System (INIS)

    Bornholdt, Stefan; Roehl, Torsten

    2003-01-01

    A mechanism for self-organization of the degree of connectivity in model neural networks is studied. Network connectivity is regulated locally on the basis of an order parameter of the global dynamics, which is estimated from an observable at the single synapse level. This principle is studied in a two-dimensional neural network with randomly wired asymmetric weights. In this class of networks, network connectivity is closely related to a phase transition between ordered and disordered dynamics. A slow topology change is imposed on the network through a local rewiring rule motivated by activity-dependent synaptic development: neighbor neurons whose activity is correlated, on average, develop a new connection, while uncorrelated neighbors tend to disconnect. As a result, robust self-organization of the network towards the order-disorder transition occurs. Convergence is independent of initial conditions, robust against thermal noise, and does not require fine tuning of parameters.

  20. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

The research mission is the development of computer-assisted diagnostic (CAD) methods for improved diagnosis of medical images, including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, high-convergence neural networks for feature detection, and VLSI implementation of neural networks for real-time analysis. Other missions include (1) implementation of CAD methods on hospital-based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual-use technology conversion from defense or aerospace to medicine.

  1. Neural networks of human nature and nurture

    Directory of Open Access Journals (Sweden)

    Daniel S. Levine

    2009-11-01

    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  2. Model for neural signaling leap statistics

    International Nuclear Information System (INIS)

    Chevrollier, Martine; Oria, Marcos

    2011-01-01

We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (with T = 37.5 °C, awake regime) and Lévy statistics (T = 35.5 °C, sleeping period), characterized by rare events of long-range connections.
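The thermodynamic ingredient of the model, the Nernst potential, is straightforward to compute. The K+ concentrations below are illustrative textbook values, not taken from the paper; the two temperatures are the awake/sleep values the abstract mentions.

```python
import math

def nernst_potential(z, c_out, c_in, temp_celsius):
    """Nernst equilibrium potential (volts) for an ion of valence z."""
    R = 8.314       # gas constant, J/(mol K)
    F = 96485.0     # Faraday constant, C/mol
    T = temp_celsius + 273.15
    return (R * T / (z * F)) * math.log(c_out / c_in)

# Illustrative textbook K+ concentrations (mM), not values from the paper.
v_awake = nernst_potential(z=1, c_out=5.0, c_in=140.0, temp_celsius=37.5)
v_sleep = nernst_potential(z=1, c_out=5.0, c_in=140.0, temp_celsius=35.5)
print(round(v_awake * 1000, 1), round(v_sleep * 1000, 1))  # in mV
```

The two-degree temperature difference shifts the equilibrium potential by well under a millivolt, which is the kind of small thermodynamic change the model links to the statistical regimes.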

  3. Neural mechanisms of hypnosis and meditation.

    Science.gov (United States)

    De Benedittis, Giuseppe

    2015-12-01

Hypnosis has been an elusive concept for science for a long time. However, the explosive advances in neuroscience in the last few decades have provided a "bridge of understanding" between classical neurophysiological studies and psychophysiological studies. These studies have shed new light on the neural basis of the hypnotic experience. Furthermore, an ambitious new area of research is focusing on mapping the core processes of psychotherapy and the neurobiology underlying them. Hypnosis research offers powerful techniques to isolate psychological processes in ways that allow their neural bases to be mapped. The Hypnotic Brain can serve as a way to tap neurocognitive questions, and our cognitive assays can in turn shed new light on the neural bases of hypnosis. This cross-talk should enhance research and clinical applications. An increasing body of evidence provides insight into the neural mechanisms of the Meditative Brain. Discrete meditative styles are likely to target different neurodynamic patterns. Recent findings emphasize increased attentional resources activating the attentional and salience networks with coherent perception. Cognitive and emotional equanimity gives rise to an eudaimonic state, marked by calm, resilience and stability, and a readiness to express compassion and empathy, a main goal of Buddhist practices. Structural changes in gray matter of key areas of the brain involved in learning processes suggest that these skills can be learned through practice. Hypnosis and Meditation represent two important, historical and influential landmarks of Western and Eastern civilization and culture respectively. Neuroscience is beginning to provide a better understanding of the mechanisms of both the Hypnotic and Meditative Brain, outlining similarities but also differences between the two states and processes. It is important not to view either the Eastern or the Western system as superior to the other.
Cross-fertilization of the ancient Eastern meditation techniques

  4. The Neural Correlates of Humor Creativity

    OpenAIRE

    Amir, Ori; Biederman, Irving

    2016-01-01

    Unlike passive humor appreciation, the neural correlates of real-time humor creation have been unexplored. As a case study for creativity, humor generation uniquely affords a reliable assessment of a creative product’s quality with a clear and relatively rapid beginning and end, rendering it amenable to neuroimaging that has the potential for reflecting individual differences in expertise. Professional and amateur “improv” comedians and controls viewed New Yorker cartoon drawings while being ...

  5. Character Recognition Using Genetically Trained Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the

  6. Neural-Network Object-Recognition Program

    Science.gov (United States)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.
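The invariance idea behind third-order networks, which build features from triples of input points, can be checked directly: the sorted interior angles of the triangle formed by any three pixels are unchanged by translation, in-plane rotation, and scale. This is a sketch of that geometric fact only; HONTIOR's actual architecture is not reproduced here.

```python
import math

def triangle_angles(p1, p2, p3):
    """Sorted interior angles of the triangle on three points.
    Sorted angles are invariant to translation, rotation, and scale."""
    def ang(a, b, c):  # angle at vertex a
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / n)))
    return sorted([ang(p1, p2, p3), ang(p2, p3, p1), ang(p3, p1, p2)])

tri = [(0, 0), (2, 0), (0, 1)]
# Same triangle translated, scaled by 3, and rotated 90 degrees.
moved = [(5 - 3 * y, 1 + 3 * x) for x, y in tri]
a1, a2 = triangle_angles(*tri), triangle_angles(*moved)
print(all(abs(u - v) < 1e-9 for u, v in zip(a1, a2)))  # -> True
```

Because such triple-based features are invariant by construction, a single training view suffices for two-dimensional transformation-invariant recognition, as the abstract notes.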

  7. Modulated error diffusion CGHs for neural nets

    Science.gov (United States)

    Vermeulen, Pieter J. E.; Casasent, David P.

    1990-05-01

    New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).

  8. Neural correlates of continuous causal word generation.

    Science.gov (United States)

    Wende, Kim C; Straube, Benjamin; Stratmann, Mirjam; Sommer, Jens; Kircher, Tilo; Nagels, Arne

    2012-09-01

Causality provides a natural structure for organizing our experience and language. Causal reasoning during speech production is a distinct aspect of verbal communication, whose related brain processes are yet unknown. The aim of the current study was to investigate the neural mechanisms underlying the continuous generation of cause-and-effect coherences during overt word production. During fMRI data acquisition participants performed three verbal fluency tasks on identical cue words: a novel causal verbal fluency task (CVF), requiring the production of multiple reasons to a given cue word (e.g. reasons for heat are fire, sun etc.), a semantic task (free association, FA, e.g. associations with heat are sweat, shower etc.) and a phonological control task (phonological verbal fluency, PVF, e.g. rhymes with heat are meat, wheat etc.). We found that, in contrast to PVF, both CVF and FA activated a left-lateralized network encompassing inferior frontal, inferior parietal and angular regions, with further bilateral activation in middle and inferior as well as superior temporal gyri and the cerebellum. For CVF contrasted against FA, we found greater BOLD responses only in the left middle frontal cortex. Large overlaps in the neural activations during free association and causal verbal fluency indicate that the access to causal relationships between verbal concepts is at least partly based on the semantic neural network. The selective activation in the left middle frontal cortex for causal verbal fluency suggests that distinct neural processes related to cause-and-effect relations are associated with the recruitment of middle frontal brain areas. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Implicitly Defined Neural Networks for Sequence Labeling

    Science.gov (United States)

    2017-07-31

…popularity has soared for the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and variants such as the Gated Recurrent Unit (GRU) (Cho et…610. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015.…network are coupled together, in order to improve performance on complex, long-range dependencies in either direction of a sequence. We contrast our

  10. Relation Classification via Recurrent Neural Network

    OpenAIRE

    Zhang, Dongxu; Wang, Dong

    2015-01-01

    Deep learning has gained much success in sentence-level relation classification. For example, convolutional neural networks (CNN) have delivered competitive performance without much effort on feature engineering as the conventional pattern-based methods. Thus a lot of works have been produced based on CNN structures. However, a key issue that has not been well addressed by the CNN-based method is the lack of capability to learn temporal features, especially long-distance dependency between no...

  11. From provocation to aggression: the neural network

    OpenAIRE

    Repple, J.; Pawliczek, C.M.; Voss, B.; Siegel, S.; Schneider, F.; Kohn, N.; Habel, U.

    2017-01-01

    Background In-vivo observations of neural processes during human aggressive behavior are difficult to obtain, limiting the number of studies in this area. To address this gap, the present study implemented a social reactive aggression paradigm in 29 healthy men, employing non-violent provocation in a two-player game to elicit aggressive behavior in fMRI settings. Results Participants responded more aggressively after high provocation reflected in taking more money from their opponents. Compar...

  12. Identifying Tracks Duplicates via Neural Network

    CERN Document Server

    Sunjerga, Antonio; CERN. Geneva. EP Department

    2017-01-01

The goal of the project is to study the feasibility of state-of-the-art machine learning techniques in track reconstruction. Machine learning techniques provide promising ways to speed up the pattern recognition of tracks by adding more intelligence to the algorithms. The implementation of a neural network for identifying duplicate tracks is discussed. Different approaches are shown and the results are compared to the method currently in use.

  13. Adaptive Filtering Using Recurrent Neural Networks

    Science.gov (United States)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
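For contrast with the neural approach, the standard Kalman filter's predict/update cycle, with exactly the linear-model and Gaussian-noise assumptions the abstract describes, can be sketched in the scalar case. All constants below are illustrative.

```python
import random

random.seed(5)

# Scalar random-walk state with Gaussian process and measurement noise.
a, q, r = 1.0, 0.01, 0.25   # state transition, process var, measurement var
x_true, x_est, p = 0.0, 0.0, 1.0

sq_err_raw = sq_err_filt = 0.0
for _ in range(2000):
    x_true = a * x_true + random.gauss(0, q ** 0.5)
    z = x_true + random.gauss(0, r ** 0.5)
    # Predict step: propagate estimate and its variance through the model.
    x_pred = a * x_est
    p_pred = a * a * p + q
    # Update step: blend prediction and measurement by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_est = x_pred + k * (z - x_pred)
    p = (1 - k) * p_pred
    sq_err_raw += (z - x_true) ** 2
    sq_err_filt += (x_est - x_true) ** 2

print(sq_err_filt < sq_err_raw)  # filtering beats raw measurements
```

The recurrent-network method replaces the linear predict step with a learned nonlinear map, which is what removes the linearity and Gaussian-noise requirements.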

  14. Dynamics in a delayed-neural network

    International Nuclear Information System (INIS)

    Yuan Yuan

    2007-01-01

In this paper, we consider a neural network of four identical neurons with time-delayed connections. Some parameter regions are given for global and local stability and for synchronization, using the theory of functional differential equations. The root distributions in the corresponding characteristic transcendental equation are analyzed; pitchfork, Hopf, and equivariant Hopf bifurcations are investigated by revealing the center manifolds and normal forms. Numerical simulations show agreement with the theoretical results

  15. Generating Seismograms with Deep Neural Networks

    Science.gov (United States)

    Krischer, L.; Fichtner, A.

    2017-12-01

The recent surge of successful uses of deep neural networks in computer vision, speech recognition, and natural language processing, mainly enabled by the availability of fast GPUs and extremely large data sets, is starting to see many applications across all natural sciences. In seismology these are largely confined to classification and discrimination tasks. In this contribution we explore the use of deep neural networks for another class of problems: so-called generative models. Generative modelling is a branch of statistics concerned with generating new observed data samples, usually by drawing from some underlying probability distribution. Samples with specific attributes can be generated by conditioning on input variables. In this work we condition on seismic source (mechanism and location) and receiver (location) parameters to generate multi-component seismograms. The deep neural networks are trained on synthetic data calculated with Instaseis (http://instaseis.net, van Driel et al. (2015)) and waveforms from the global ShakeMovie project (http://global.shakemovie.princeton.edu, Tromp et al. (2010)). The underlying radially symmetric or smoothly three-dimensional Earth structures result in comparatively small waveform differences from similar events or at close receivers and the networks learn to interpolate between training data samples. Of particular importance is the chosen misfit functional. Generative adversarial networks (Goodfellow et al. (2014)) implement a system in which two networks compete: the generator network creates samples and the discriminator network distinguishes these from the true training examples. Both are trained in an adversarial fashion until the discriminator can no longer distinguish between generated and real samples. We show how this can be applied to seismograms and in particular how it compares to networks trained with more conventional misfit metrics.
Last but not least we attempt to shed some light on the black-box nature of
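The adversarial training loop can be sketched in miniature. This is not the authors' seismogram network; it is a hypothetical one-parameter generator learning to match the mean of a Gaussian "data" distribution, with hand-derived gradients for the standard non-saturating GAN objective.

```python
import math, random

random.seed(11)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

REAL_MEAN = 2.0      # the "data distribution": N(2, 1)
w, b = 0.1, 0.0      # discriminator d(x) = sigmoid(w*x + b)
theta = -1.0         # generator g(z) = z + theta, z ~ N(0, 1)
lr, batch = 0.05, 32

for step in range(3000):
    real = [random.gauss(REAL_MEAN, 1) for _ in range(batch)]
    fake = [random.gauss(0, 1) + theta for _ in range(batch)]
    # Discriminator ascends log d(real) + log(1 - d(fake)).
    gw = gb = 0.0
    for x in real:
        d = sigmoid(w * x + b)
        gw += (1 - d) * x
        gb += (1 - d)
    for x in fake:
        d = sigmoid(w * x + b)
        gw -= d * x
        gb -= d
    w += lr * gw / (2 * batch)
    b += lr * gb / (2 * batch)
    # Generator ascends log d(fake): the non-saturating GAN objective.
    gt = 0.0
    for x in fake:
        d = sigmoid(w * x + b)
        gt += (1 - d) * w
    theta += lr * gt / batch

print(round(theta, 2))  # should end near the real mean
```

At convergence the discriminator can no longer separate real from generated samples, which is exactly the stopping criterion described in the abstract.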

  16. Permutation parity machines for neural cryptography.

    Science.gov (United States)

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-06-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.
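The tree parity machine scheme that permutation parity machines vary can be sketched as follows. This is the classical TPM with bounded Hebbian updates, not the binary PPM variant of the paper, and the parameters K, N, L are illustrative.

```python
import random

random.seed(42)
K, N, L = 3, 4, 3   # hidden units, inputs per hidden unit, weight bound

def sign(v):
    return 1 if v >= 0 else -1

def output(w, x):
    """Hidden-unit signs and their parity (the machine's public output)."""
    sigma = [sign(sum(wk[i] * xk[i] for i in range(N)))
             for wk, xk in zip(w, x)]
    tau = 1
    for s in sigma:
        tau *= s
    return tau, sigma

def hebbian(w, x, tau, sigma):
    """Bounded Hebbian update of hidden units that agree with the output."""
    for k in range(K):
        if sigma[k] == tau:
            for i in range(N):
                w[k][i] = max(-L, min(L, w[k][i] + tau * x[k][i]))

def rand_weights():
    return [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]

wa, wb = rand_weights(), rand_weights()
for step in range(5000):
    x = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(K)]
    ta, sa = output(wa, x)
    tb, sb = output(wb, x)
    if ta == tb:            # both parties update only when outputs agree
        hebbian(wa, x, ta, sa)
        hebbian(wb, x, tb, sb)
    if wa == wb:            # identical weights = shared key material
        break

print(wa == wb)
```

Mutual learning on public random inputs drives the two weight sets to synchronize, after which the weights serve as a shared secret; the attacks named in the abstract try to synchronize a third machine from the public traffic.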

  17. Flexible neural interfaces with integrated stiffening shank

    Energy Technology Data Exchange (ETDEWEB)

    Tooker, Angela C.; Felix, Sarah H.; Pannu, Satinderpall S.; Shah, Kedar G.; Sheth, Heeral; Tolosa, Vanessa

    2017-10-17

    A neural interface includes a first dielectric material having at least one first opening for a first electrical conducting material, a first electrical conducting material in the first opening, and at least one first interconnection trace electrical conducting material connected to the first electrical conducting material. A stiffening shank material is located adjacent the first dielectric material, the first electrical conducting material, and the first interconnection trace electrical conducting material.

  18. Flexible neural interfaces with integrated stiffening shank

    Science.gov (United States)

    Tooker, Angela C.; Felix, Sarah H.; Pannu, Satinderpall S.; Shah, Kedar G.; Sheth, Heeral; Tolosa, Vanessa

    2016-07-26

    A neural interface includes a first dielectric material having at least one first opening for a first electrical conducting material, a first electrical conducting material in the first opening, and at least one first interconnection trace electrical conducting material connected to the first electrical conducting material. A stiffening shank material is located adjacent the first dielectric material, the first electrical conducting material, and the first interconnection trace electrical conducting material.

  19. Structured Memory for Neural Turing Machines

    OpenAIRE

    Zhang, Wei; Yu, Yang; Zhou, Bowen

    2015-01-01

Neural Turing Machines (NTM) contain a memory component that simulates "working memory" in the brain to store and retrieve information, easing the learning of simple algorithms. So far, only linearly organized memory has been proposed, and during experiments we observed that the model does not always converge and overfits easily when handling certain tasks. We think the memory component is key to some faulty behaviors of NTM, and that better organization of the memory component could help fight those problems. In this...

  20. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  1. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  2. Characterization of Radar Signals Using Neural Networks

    Science.gov (United States)

    1990-12-01

/* Function Name: load.input.patterns Number: 4.1 */ /* Description: This function determines whether …XSE.last.layer Number: 8.5 */ /* Description: The function determines whether to backpropagate the */ /* parameter by the sigmoidal or linear update… Sigmoidal Function," Mathematics of Control, Signals and Systems, 2:303-314 (March 1989). 6. Dayhoff, Judith E. Neural Network Architectures. New York: Van

  3. Model for neural signaling leap statistics

    Science.gov (United States)

    Chevrollier, Martine; Oriá, Marcos

    2011-03-01

We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (with T = 37.5°C, awake regime) and Lévy statistics (T = 35.5°C, sleeping period), characterized by rare events of long-range connections.

  4. Model for neural signaling leap statistics

    Energy Technology Data Exchange (ETDEWEB)

Chevrollier, Martine; Oria, Marcos, E-mail: oria@otica.ufpb.br [Laboratório de Física Atômica e Lasers, Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5086, 58051-900 João Pessoa, Paraíba (Brazil)

    2011-03-01

We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (with T = 37.5 °C, awake regime) and Lévy statistics (T = 35.5 °C, sleeping period), characterized by rare events of long-range connections.

  5. Polarized DIS Structure Functions from Neural Networks

    International Nuclear Information System (INIS)

    Del Debbio, L.; Guffanti, A.; Piccione, A.

    2007-01-01

We present a parametrization of polarized Deep-Inelastic-Scattering (DIS) structure functions based on Neural Networks. The parametrization provides a bias-free determination of the probability measure in the space of structure functions, which retains information on experimental errors and correlations. As an example we discuss the application of this method to the study of the structure function g_1^p(x, Q^2)

  6. Permutation parity machines for neural cryptography

    International Nuclear Information System (INIS)

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-01-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  7. A short-term neural network memory

    Energy Technology Data Exchange (ETDEWEB)

    Morris, R.J.T.; Wong, W.S.

    1988-12-01

Neural network memories with storage prescriptions based on Hebb's rule are known to collapse as more words are stored. By requiring that the most recently stored word be remembered precisely, a new, simple short-term neural network memory is obtained and its steady-state capacity analyzed and simulated. Comparisons are drawn with Hopfield's method, the delta method of Widrow and Hoff, and the revised marginalist model of Mezard, Nadal, and Toulouse.
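The collapse under Hebb-rule storage can be demonstrated directly. The following is a minimal Hopfield-style sketch with arbitrary sizes, not the short-term memory the paper proposes: recall of a stored pattern is perfect when few words are stored and degrades once the load exceeds capacity.

```python
import random

random.seed(3)
N = 60  # neurons

def store(patterns):
    """Hebb outer-product weights for a list of +/-1 patterns."""
    w = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    w[i][j] += p[i] * p[j] / N
    return w

def recall(w, p, steps=3):
    """Synchronous threshold dynamics started from pattern p."""
    s = list(p)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(N)) >= 0 else -1
             for i in range(N)]
    return s

def overlap(a, b):
    return sum(x * y for x, y in zip(a, b)) / N

pats = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(20)]

# Overlap between a stored pattern and its recall: few vs. many stored words.
few = overlap(pats[0], recall(store(pats[:3]), pats[0]))
many = overlap(pats[0], recall(store(pats), pats[0]))
print(few, many)
```

With 3 stored words the pattern is a fixed point (overlap 1.0); with 20 words the load is far above the ~0.14N Hopfield capacity and crosstalk corrupts recall, which is the collapse the paper's short-term prescription avoids for the most recent word.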

  8. Learning-parameter adjustment in neural networks

    Science.gov (United States)

    Heskes, Tom M.; Kappen, Bert

    1992-06-01

    We present a learning-parameter adjustment algorithm, valid for a large class of learning rules in neural-network literature. The algorithm follows directly from a consideration of the statistics of the weights in the network. The characteristic behavior of the algorithm is calculated, both in a fixed and a changing environment. A simple example, Widrow-Hoff learning for statistical classification, serves as an illustration.
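A minimal sketch of the Widrow-Hoff illustration follows. The 1/(1 + t/tau) learning-rate schedule here is a hypothetical stand-in for the statistics-based adjustment the paper derives; the linearly separable toy task is likewise invented for illustration.

```python
import random

random.seed(7)

def target(x):
    """Toy 2-class label: sign of x1 + x2 (linearly separable)."""
    return 1.0 if x[0] + x[1] > 0 else -1.0

w = [0.0, 0.0]
eta0, tau = 0.5, 200.0
for t in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    err = target(x) - (w[0] * x[0] + w[1] * x[1])
    eta = eta0 / (1.0 + t / tau)   # hypothetical annealed learning parameter
    for i in range(2):
        w[i] += eta * err * x[i]   # Widrow-Hoff (delta) rule

test_pts = [[random.uniform(-1, 1), random.uniform(-1, 1)]
            for _ in range(500)]
acc = sum((w[0] * x[0] + w[1] * x[1] > 0) == (target(x) > 0)
          for x in test_pts) / 500
print(acc)
```

A large early rate lets the weights move quickly toward the solution; shrinking it later reduces the steady-state fluctuation of the weights, the trade-off a principled adjustment algorithm automates.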

  9. Applying neural networks to optimize instrumentation performance

    Energy Technology Data Exchange (ETDEWEB)

    Start, S.E.; Peters, G.G.

    1995-06-01

Well-calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions, and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  10. Applying neural networks to optimize instrumentation performance

    International Nuclear Information System (INIS)

    Start, S.E.; Peters, G.G.

    1995-01-01

Well-calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions, and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate

  11. Matrix regulators in neural stem cell functions.

    Science.gov (United States)

    Wade, Anna; McKinney, Andrew; Phillips, Joanna J

    2014-08-01

Neural stem/progenitor cells (NSPCs) reside within a complex and dynamic extracellular microenvironment, or niche. This niche regulates fundamental aspects of their behavior during normal neural development and repair. Precise yet dynamic regulation of NSPC self-renewal, migration, and differentiation is critical and must persist over the life of an organism. In this review, we summarize some of the major components of the NSPC niche and provide examples of how cues from the extracellular matrix regulate NSPC behaviors. We use proteoglycans to illustrate the many diverse roles of the niche in providing temporal and spatial regulation of cellular behavior. The NSPC niche is comprised of multiple components that include soluble ligands, such as growth factors, morphogens, chemokines, and neurotransmitters, the extracellular matrix, and cellular components. As illustrated by proteoglycans, a major component of the extracellular matrix, the NSPC niche provides temporal and spatial regulation of NSPC behaviors. The factors that control NSPC behavior are vital to understand as we attempt to modulate normal neural development and repair. Furthermore, an improved understanding of how these factors regulate cell proliferation, migration, and differentiation, crucial for malignancy, may reveal novel anti-tumor strategies. This article is part of a Special Issue entitled Matrix-mediated cell behaviour and properties. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Analysis of complex systems using neural networks

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1992-01-01

The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put them into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant, (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although the specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems
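The spectral preprocessing step mentioned above can be sketched with a naive DFT. This is illustrative only: a real system would use an FFT library on actual plant sensor data, and the sine signal below is a stand-in for a vibration measurement.

```python
import cmath, math

def power_spectrum(x):
    """Naive O(n^2) DFT power spectrum of a real time series."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

# 64-sample sine at frequency bin 5, standing in for vibration data.
sig = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
spec = power_spectrum(sig)
peak = max(range(len(spec)), key=lambda k: spec[k])
normalized = [s / max(spec) for s in spec]  # scaled to [0, 1] for the network
print(peak)  # -> 5
```

The normalized spectrum, rather than the raw time series, is what would be presented to the neural network's input nodes.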

  13. Identifying Broadband Rotational Spectra with Neural Networks

    Science.gov (United States)

    Zaleski, Daniel P.; Prozument, Kirill

    2017-06-01

A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al. "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer." Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al. "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal." J. Mol. Spectrosc., 2015, 312, 13-21. Bishop. "Neural networks for pattern recognition." Oxford University Press, 1995.
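The pattern regularity such networks can learn is easiest to see in the simplest case, a rigid linear rotor, whose R-branch lines are evenly spaced by 2B. This is a sketch only: real broadband spectra of asymmetric tops require the full Watson Hamiltonian with A, B, and C, and the rotational constant below is hypothetical.

```python
def linear_rotor_lines(B, j_max):
    """R-branch line positions (MHz) of a rigid linear rotor: nu = 2B(J+1)."""
    return [2.0 * B * (j + 1) for j in range(j_max)]

lines = linear_rotor_lines(B=4500.0, j_max=5)   # B is a hypothetical constant
spacings = [b - a for a, b in zip(lines, lines[1:])]
print(lines[0], spacings)   # first line at 2B, constant spacing of 2B
```

Constant spacing is exactly the kind of frequency-spacing signature, together with relative intensities, that the trained networks exploit to pick one species out of a dense broadband spectrum.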

  14. The neural signatures of distinct psychopathic traits.

    Science.gov (United States)

    Carré, Justin M; Hyde, Luke W; Neumann, Craig S; Viding, Essi; Hariri, Ahmad R

    2013-01-01

    Recent studies suggest that psychopathy may be associated with dysfunction in the neural circuitry supporting both threat- and reward-related processes. However, these studies have involved small samples and often focused on extreme groups. Thus, it is unclear to what extent current findings may generalize to psychopathic traits in the general population. Furthermore, no studies have systematically and simultaneously assessed associations between distinct psychopathy facets and both threat- and reward-related brain function in the same sample of participants. Here, we examined the relationship between threat-related amygdala reactivity and reward-related ventral striatum (VS) reactivity and variation in four facets of self-reported psychopathy in a sample of 200 young adults. Path models indicated that amygdala reactivity to fearful facial expressions is negatively associated with the interpersonal facet of psychopathy, whereas amygdala reactivity to angry facial expressions is positively associated with the lifestyle facet. Furthermore, these models revealed that differential VS reactivity to positive versus negative feedback is negatively associated with the lifestyle facet. There was suggestive evidence for gender-specific patterns of association between brain function and psychopathy facets. Our findings are the first to document differential associations between both threat- and reward-related neural processes and distinct facets of psychopathy and thus provide a more comprehensive picture of the pattern of neural vulnerabilities that may predispose to maladaptive outcomes associated with psychopathy.

  15. Discrete Neural Signatures of Basic Emotions.

    Science.gov (United States)

    Saarimäki, Heini; Gotsopoulos, Athanasios; Jääskeläinen, Iiro P; Lampinen, Jouko; Vuilleumier, Patrik; Hari, Riitta; Sams, Mikko; Nummenmaa, Lauri

    2016-06-01

    Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.
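
    The cross-condition decoding described above can be illustrated with a toy MVPA sketch: each emotion is given a synthetic "voxel" pattern, trials from two induction conditions are simulated as noisy copies, and a nearest-centroid classifier trained on one condition is tested on the other. All patterns and noise levels here are invented for illustration; real MVPA uses measured fMRI data and more sophisticated classifiers.

```python
import numpy as np

rng = np.random.default_rng(1)
emotions = ["disgust", "fear", "happiness", "sadness", "anger", "surprise"]
n_vox = 200
prototypes = {e: rng.normal(0, 1, n_vox) for e in emotions}  # hypothetical patterns

def simulate(noise_sd):
    """One trial per emotion: prototype pattern plus condition-specific noise."""
    return {e: p + rng.normal(0, noise_sd, n_vox) for e, p in prototypes.items()}

train = simulate(0.5)  # e.g. movie-induced trials
test = simulate(0.5)   # e.g. imagery-induced trials

def classify(pattern, templates):
    """Nearest-centroid decoding by Pearson correlation."""
    return max(templates, key=lambda e: np.corrcoef(pattern, templates[e])[0, 1])

accuracy = np.mean([classify(test[e], train) == e for e in emotions])
print(accuracy)  # well above the 1/6 chance level
```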

  16. Application of neural networks to group technology

    Science.gov (United States)

    Caudell, Thomas P.; Smith, Scott D. G.; Johnson, G. C.; Wunsch, Donald C., II

    1991-08-01

    Adaptive resonance theory (ART) neural networks are being developed for application to the industrial engineering problem of group technology--the reuse of engineering designs. Two- and three-dimensional representations of engineering designs are input to ART-1 neural networks to produce groups or families of similar parts. These representations, in their basic form, amount to bit maps of the part, and can become very large when the part is represented in high resolution. This paper describes an enhancement to an algorithmic form of ART-1 that allows it to operate directly on compressed input representations and to generate compressed memory templates. The performance of this compressed algorithm is compared to that of the regular algorithm on real engineering designs, and a significant savings in memory storage as well as a speedup in execution is observed. In addition, a 'neural database' system under development is described. This system demonstrates the feasibility of training an ART-1 network to first cluster designs into families, and then to recall the family when presented a similar design. This application is of great practical value to industry, making it possible to avoid duplication of design efforts.
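
    The clustering step above follows the standard ART-1 scheme: each binary bit map is compared to stored templates via a choice function, a vigilance test decides whether it "resonates" with an existing family, and fast learning intersects the template with the input. The sketch below is a minimal uncompressed version (not the paper's compressed enhancement), with tiny 8-bit "designs" invented for illustration.

```python
import numpy as np

def art1(inputs, rho=0.7, beta=1e-6):
    """Minimal ART-1 clustering of binary bit maps (fast learning).

    rho is the vigilance parameter; higher rho yields finer part families.
    """
    templates = []  # one binary template per part family
    labels = []
    for I in inputs:
        I = np.asarray(I, dtype=bool)
        # Rank templates by the choice function |I AND w| / (beta + |w|)
        order = sorted(range(len(templates)),
                       key=lambda j: -(I & templates[j]).sum()
                                     / (beta + templates[j].sum()))
        for j in order:
            match = (I & templates[j]).sum() / I.sum()
            if match >= rho:                    # vigilance test passed: resonance
                templates[j] = I & templates[j]  # fast learning: intersect
                labels.append(j)
                break
        else:
            templates.append(I)                 # no resonance: start a new family
            labels.append(len(templates) - 1)
    return labels, templates

# Two similar parts and one dissimilar part, as 8-bit maps
parts = [[1, 1, 1, 1, 0, 0, 0, 0],
         [1, 1, 1, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 1, 1, 1, 1]]
labels, templates = art1(parts, rho=0.7)
print(labels)  # -> [0, 0, 1]: the first two designs share a family
```

The compressed variant in the paper performs the same AND/count operations directly on run-length-style compressed representations, avoiding the memory cost of high-resolution bit maps.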

  17. Neck muscle biomechanics and neural control.

    Science.gov (United States)

    Fice, Jason Bradley; Siegmund, Gunter P; Blouin, Jean-Sebastien

    2018-04-18

    The mechanics, morphometry, and geometry of our joints, segments and muscles are fundamental biomechanical properties intrinsic to human neural control. The goal of our study was to investigate whether the biomechanical actions of individual neck muscles predict their neural control. Specifically, we compared the moment direction and variability produced by electrical stimulation of a neck muscle (biomechanics) to its preferred activation direction and variability (neural control). Subjects sat upright with their head fixed to a 6-axis load cell and their torso restrained. Indwelling wire electrodes were placed into the sternocleidomastoid (SCM), splenius capitis (SPL), and semispinalis capitis (SSC) muscles. The electrically stimulated direction was defined as the moment direction produced when a current (2-19 mA) was passed through each muscle's electrodes. Preferred activation direction was defined as the vector sum of the spatial tuning curve built from RMS EMG when subjects produced isometric moments at 7.5% and 15% of their maximum voluntary contraction (MVC) in 26 3D directions. The spatial tuning curves at 15% MVC were well-defined (unimodal, p < 0.05). At low contraction levels the preferred activation directions of individual neck muscles reflected their biomechanics but, as activation increases, biomechanical constraints in part dictate the activation of synergistic neck muscles.
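
    The "preferred activation direction as the vector sum of the spatial tuning curve" can be computed directly: each tested moment direction is weighted by the RMS EMG measured there, and the weighted vectors are summed. The sketch below uses 8 planar directions and an invented tuning curve for illustration (the study used 26 directions in 3D).

```python
import numpy as np

# Tested moment directions (8 directions in one plane, in radians)
angles = np.deg2rad(np.arange(0, 360, 45))
# Hypothetical RMS EMG at each direction (one muscle's spatial tuning curve)
rms_emg = np.array([0.9, 0.7, 0.3, 0.1, 0.05, 0.1, 0.4, 0.8])

# Preferred activation direction: angle of the vector sum of the tuning curve
x = np.sum(rms_emg * np.cos(angles))
y = np.sum(rms_emg * np.sin(angles))
preferred = np.rad2deg(np.arctan2(y, x)) % 360
print(preferred)  # degrees; dominated by the high-EMG directions near 0
```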

  18. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  19. Neural correlates of affective influence on choice.

    Science.gov (United States)

    Piech, Richard M; Lewis, Jade; Parkinson, Caroline H; Owen, Adrian M; Roberts, Angela C; Downing, Paul E; Parkinson, John A

    2010-03-01

    Making the right choice depends crucially on the accurate valuation of the available options in the light of the current needs and goals of an individual. Thus, the valuation of identical options can vary considerably with motivational context. The present study investigated the neural structures underlying context-dependent evaluation. We instructed participants to choose from food menu items based on different criteria: on their anticipated taste or on ease of preparation. The aim of the manipulation was to assess which neural sites were activated during choice guided by incentive value, and which during choice based on a value-irrelevant criterion. To assess the impact of increased motivation, affect-guided choice and cognition-guided choice were compared during the sated and hungry states. During affective choice, we identified increased activity in structures representing primarily valuation and taste (medial prefrontal cortex, insula). During cognitive choice, structures showing increased activity included those implicated in suppression and conflict monitoring (lateral orbitofrontal cortex, anterior cingulate). Hunger influenced choice-related activity in the ventrolateral prefrontal cortex. Our results show that choice is associated with the use of distinct neural structures for the pursuit of different goals.

  20. Neutron spectrum unfolding using neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2004-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources, and reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input, and the corresponding spectrum was used as output, during neural network training. The network has 7 input nodes, a hidden layer of 56 neurons, and 31 neurons in the output layer. After training, the network was tested with the Bonner spheres count rates produced by twelve neutron spectra. The network allows unfolding the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the test count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates obtained from actual neutron fields; however, the network fails when count rates belong to monoenergetic neutron sources. (Author)
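
    The training setup described (count rates in, 31-group spectrum out, one hidden layer of 56 neurons) can be sketched end to end with a small NumPy network. The response matrix, training spectra, and hyperparameters below are synthetic stand-ins for the UTA4 matrix and the IAEA compilation, so this illustrates only the mechanics of the unfolding approach, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spheres, n_groups = 7, 31                         # 7 count rates in, 31 groups out
R = rng.uniform(0.1, 1.0, (n_spheres, n_groups))    # stand-in for the UTA4 response matrix

# Synthetic training spectra, normalized like re-binned fluence distributions
spectra = rng.uniform(0, 1, (500, n_groups))
spectra /= spectra.sum(axis=1, keepdims=True)
counts = spectra @ R.T                              # expected Bonner-sphere count rates

# One hidden layer of 56 sigmoid neurons, as in the paper's 7-56-31 topology
W1 = rng.normal(0, 0.5, (n_spheres, 56)); b1 = np.zeros(56)
W2 = rng.normal(0, 0.5, (56, n_groups)); b2 = np.zeros(n_groups)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):                               # plain batch gradient descent on MSE
    h = sig(counts @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - spectra) * out * (1 - out)
    dW2 = h.T @ d_out / len(spectra); db2 = d_out.mean(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = counts.T @ d_h / len(spectra); db1 = d_h.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

# Unfold: count rates -> spectrum estimate, for a few training-set spectra
test_spec = spectra[:5]
pred = sig(sig(test_spec @ R.T @ W1 + b1) @ W2 + b2)
print(np.abs(pred - test_spec).max())
```

As the abstract notes, such a network interpolates well for spectra resembling its training set but fails for inputs far outside it, such as monoenergetic sources.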