WorldWideScience

Sample records for hardware computacional na

  1. Computer program for sizing combine harvesters considering timeliness in soybean harvesting

    OpenAIRE

    Borges, Iackson O.; Maciel, Antonio J. S.; Milan, Marcos

    2006-01-01

    Soybean harvesting (Glycine max (L.) Merril) is a critical operation that can suffer delays, resulting in losses in the quantity and quality of the harvested material. Although the value of these losses is unknown in the country, producers employ combine harvesters with reserve capacity so as to finish the operation in the shortest possible time. Excess capacity increases fixed costs, while insufficient capacity increases the cost of losses due to delay; in both cases the net income of the operation is reduced, which is...

  2. Data Communication PC/NaI-borehole probe (Hardware & Software)

    DEFF Research Database (Denmark)

    Madsen, Peter Buch

    Development of new hardware and software for a NaI borehole probe connected to a PC: data from the probe are saved every 10 seconds, calculations are performed on each 10-second batch, and the results are shown on the monitor.

  3. A computational system for data collection and institutional evaluation to support decision-making at the Universidade Federal de Santa Catarina

    Directory of Open Access Journals (Sweden)

    José Marcos da Silva

    2018-01-01

    Institutional evaluation at federal universities is a constant and exhaustive effort of permanent reflection on university practice; it is a basic condition for identifying the challenges involved in formulating guidelines for Teaching, Research, Extension and University Administration. A computational system that facilitates data collection, makes use of online resources and enables the efficient participation of everyone involved in the evaluation process could therefore be of great interest to the university community. This work aims to demonstrate a computational system for data collection, surveys and institutional evaluation, called COLLECTA, for application at the Universidade Federal de Santa Catarina (UFSC). Using new information and communication technologies, the system seeks to integrate undergraduate students, graduate students, alumni, professors, administrative staff and managers in the pursuit of better institutional quality. Given its applied nature, the research adopted a qualitative approach; it is a descriptive study involving a literature review and problem solving through action research. The computational requirements were thus identified, and the COLLECTA system was designed, developed and deployed with the aim of supporting decision-making processes at UFSC with data from the institutional evaluations collected by the proposed system, as well as making this information more accessible and transparent.

  4. Computer simulation applied to the design analysis of a university restaurant

    Directory of Open Access Journals (Sweden)

    José Victor Silvério

    2016-09-01

    The use of computer simulation has been driven by advances in computing and by its great flexibility of use. Among its many applications, design analysis stands out. The present work aims to apply computer simulation to analyze the service capacity of a new university restaurant. To that end, a quantitative research approach and an experimental research procedure based on computer simulation were used. By analyzing the results of the computational model developed, it was possible to understand the likely operation of the studied system, as well as the flow of entities among the service stations. The planned physical space was found to be adequate for the service. However, the service process at the cashier station should be revised, because long waiting queues formed at times of peak demand. This work offers contributions to both academia and industry.
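
    The abstract above describes a discrete-event simulation of queues at service stations. As a rough illustration of the kind of model involved (not the authors' actual model), the following sketch simulates a single cashier with exponential inter-arrival and service times and reports the average wait; all rates are hypothetical.

```python
import random

def simulate_cashier(arrival_rate, service_rate, n_customers, seed=42):
    """Minimal single-server FIFO queue simulation (M/M/1-style).

    arrival_rate / service_rate are customers per minute; the values
    used below are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    clock = 0.0           # arrival time of the current customer
    server_free_at = 0.0  # time at which the cashier becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)       # next arrival
        start = max(clock, server_free_at)           # wait if cashier is busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(service_rate)  # service ends
    return total_wait / n_customers

if __name__ == "__main__":
    # Hypothetical peak-hour load: 1.5 arrivals/min, 1.8 services/min.
    print(f"average wait: {simulate_cashier(1.5, 1.8, 10_000):.2f} min")
```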

  5. Computer program for sizing combine harvesters considering timeliness in soybean harvesting

    Directory of Open Access Journals (Sweden)

    Iackson O. Borges

    2006-04-01

    Soybean harvesting (Glycine max (L.) Merril) is a critical operation that can suffer delays, resulting in losses in the quantity and quality of the harvested material. Although the value of these losses is unknown in the country, producers employ combine harvesters with reserve capacity so as to finish the operation in the shortest possible time. Excess capacity increases fixed costs, while insufficient capacity increases the cost of losses due to delay; in both cases the net income of the operation is reduced, which is referred to as the timeliness cost. The sizing problem consists of balancing the cost of the extra capital invested in machine capacity to guarantee timeliness against the cost of losses due to delay, seeking to maximize net income. Given the importance of the crop and of timeliness, the objective of this work was to evaluate the influence of delay on fleet sizing and on the cost of the harvesting operation. To this end, a computational model was developed in Borland® Delphi 5.0, whose input data include the attributes of the agro-climatic region, of the combine harvester and of the soybean cultivars. The output is the net income, used as an indicator of the timeliness of the operation for the selected harvester. The program was used to simulate scenarios for a farm in the Ponta Grossa - PR region, and the results showed that the farm's harvester fleet operates with idle capacity, producing net income below its potential.
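
    As a sketch of the trade-off described above (excess capacity raises fixed costs, insufficient capacity raises timeliness losses), the code below evaluates net income over a range of fleet capacities. It is only an illustrative stand-in for the Delphi program: all prices, yields, loss rates and cost figures are made-up placeholders.

```python
def net_income(capacity_ha_day, area_ha=500.0, yield_kg_ha=3000.0,
               price_kg=0.35, ownership_cost_per_ha_day_capacity=900.0,
               daily_loss_fraction=0.005):
    """Illustrative net-income model: revenue minus timeliness losses
    minus ownership (fixed) cost, which grows with installed capacity.
    All numbers are hypothetical placeholders."""
    days_to_finish = area_ha / capacity_ha_day
    gross_revenue = area_ha * yield_kg_ha * price_kg
    # Losses assumed to grow linearly with harvest duration (capped at 100 %).
    loss = gross_revenue * min(1.0, daily_loss_fraction * days_to_finish)
    fixed_cost = ownership_cost_per_ha_day_capacity * capacity_ha_day
    return gross_revenue - loss - fixed_cost

if __name__ == "__main__":
    # Scan candidate capacities and report the one maximizing net income.
    candidates = [10, 20, 30, 40, 60, 80, 100]  # ha/day
    for c in candidates:
        print(f"{c:4d} ha/day -> net income {net_income(c):12.2f}")
    print("best capacity:", max(candidates, key=net_income), "ha/day")
```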

  6. Simplifications and Adaptations to Reduce the Computational Cost of Speech Pre-processing on the Arduino Platform

    Directory of Open Access Journals (Sweden)

    Pedro Ítalo Ribeiro Albuquerque

    2016-06-01

    There is currently growing interest in applications in which human-machine interaction is carried out through the human voice. However, some devices, such as mobile phones and household appliances, have storage and processing limitations that make this type of system difficult to implement. In this work, mathematical simplifications and programming strategies were applied to two typical stages of a speech recognition system: pre-emphasis and windowing. The goal of this implementation was to analyze their impact on performance and, consequently, on the computational cost of these stages. With the adaptations made, the execution time was reduced to 1/5 of the original time for pre-emphasis and to 1/10 in the case of windowing.
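
    For readers unfamiliar with the two stages mentioned, the sketch below shows the textbook form of pre-emphasis (a first-order high-pass filter, y[n] = x[n] - a*x[n-1]) and Hamming windowing. It is a plain floating-point reference, not the fixed-point simplifications the paper implements on the Arduino.

```python
import math

def pre_emphasis(signal, alpha=0.97):
    """First-order pre-emphasis filter: y[n] = x[n] - alpha * x[n-1]."""
    out = [signal[0]]
    for n in range(1, len(signal)):
        out.append(signal[n] - alpha * signal[n - 1])
    return out

def hamming_window(frame):
    """Multiply one frame by w[n] = 0.54 - 0.46*cos(2*pi*n/(N-1))."""
    N = len(frame)
    return [frame[n] * (0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)))
            for n in range(N)]

if __name__ == "__main__":
    # Toy 8 kHz "speech" frame: a 440 Hz tone, 32 samples.
    x = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(32)]
    framed = hamming_window(pre_emphasis(x))
    print([round(v, 3) for v in framed[:8]])
```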

  7. COMPUTATIONAL ANALYSIS OF THE EFFECT OF POLYMORPHISMS IN GENES OF THE μ-CALPAIN/CALPASTATIN SYSTEM ON BEEF QUALITY

    Directory of Open Access Journals (Sweden)

    J. D. Leal-Gutiérrez

    2015-01-01

    The genes of the μ-Calpain/Calpastatin enzyme system have been widely evaluated in association studies regarding meat quality parameters such as tenderness; several polymorphisms associated with phenotypic variation have previously been identified in unrelated cattle populations. Using computational tools, it was possible to postulate the association of four polymorphisms found in μ-Calpain and 11 in Calpastatin that alter the physicochemical parameters of both the mRNA (stability and conformational polymorphism) and the protein (isoelectric point, electrostatic potential and molecular surface). Being able to establish the biological basis of genetic polymorphisms associated with phenotypic parameters that improve animal productivity is important, which makes the in silico approach a useful tool for this purpose.

  8. Design criteria and expected benefits of implementing compensatory urban-drainage techniques for runoff control at the source, based on computational modeling applied to a case study in the west zone of Rio de Janeiro

    Directory of Open Access Journals (Sweden)

    Anaí Floriano Vasconcelos

    The urbanization process results in changes to the hydrological cycle that are harmful to the population. To mitigate these effects, compensatory techniques in urban drainage aim at greater hydrological sustainability of urban expansion. In this context, this article aimed to evaluate, through computational modeling, the effect of adopting these techniques at the lot and watershed scales, and to explore different design possibilities. The modeling was carried out for several scenarios, considering the implementation of the techniques both in isolation and in combination. The parameters used in the modeling were chosen to test extreme application possibilities, so as to provide data to guide real designs. The rainfall events evaluated have various durations and intensities, making it easier to extrapolate the results of this work to watersheds of different scales. The simulation results indicate potential benefits to urban drainage from using these techniques at the lot scale, with greater effectiveness for the smaller design rainfalls. This was the case for the scenarios that evaluated sunken gardens alone or combined in series with a lot reservoir, and for the scenarios with permeable pavements on the sidewalk receiving the surface runoff from the adjacent lot. It was also found that the parallel combination of a lot reservoir with the dimensions proposed by the municipal legislation and a garden sunken by 0.07 m would be able to hydrologically neutralize the impacts of lot occupation for all the design rainfalls analyzed; however, the reservoir proposed by the municipal legislation, when adopted alone, had almost no effect at the watershed scale.

  9. Computational thinking: bridging digital and educational divides

    Directory of Open Access Journals (Sweden)

    Mauricio Javier Rico

    2018-03-01

    This article describes a pragmatic international collaboration initiative in the field of Computational Thinking education for young students in Colombia. The project "Introducción del Pensamiento Computacional en las escuelas de Bogotá y Colombia" (RENATA/EHU) brings computational thinking into the school curriculum in a way that is affordable and effective for students, teachers and schools. The new generations of this country now have the opportunity to acquire 21st-century skills, just like the new generations of other countries where computing has been part of the school curriculum from the earliest school years. The project is in its implementation phase in schools in different regions of Colombia; it can serve as an example of how to bridge digital and educational divides using ICT and education as the main tools of social transformation.

  10. Human-Computer Interaction using Computer Vision

    Directory of Open Access Journals (Sweden)

    Bernardo Bucher B. Barbosa

    2015-07-01

    This work studies ways of exploring Human-Computer Interaction using Computer Vision. The idea is an effort to make the computer more interactive with the user, without the need to buy specific hardware or accessories for that purpose. The final product of this ongoing work is a piece of software that provides this functionality, making the computer more interactive.

  11. COMPUTER SIMULATION FOR QUEUE OPTIMIZATION IN PROCESSES

    OpenAIRE

    Botassoli, Guilherme Tonini; UNISC - Universidade de Santa Cruz do Sul; Alberti, Rafael Alvise; UNISC - Universidade de Santa Cruz do Sul; Furtado, João Carlos; UNISC - Universidade de Santa Cruz do Sul

    2015-01-01

    The use of optimization techniques in simulation has a strong impact on different areas and, for this reason, they end up becoming fundamental tools in process engineering. An algorithm for optimization in computer simulation was therefore developed in the C language using the Code::Blocks IDE, drawing on concepts from Particle Swarm Optimization (PSO) and Genetic Algorithms (GA). Starting from a base algorithm with a matrix of random numbers, ...
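
    The (truncated) abstract mentions concepts from Particle Swarm Optimization. Purely as a generic illustration (the authors' C implementation also borrows from genetic algorithms), the sketch below is a minimal PSO minimizing a test function; the hyperparameters are conventional textbook choices, not values from the paper.

```python
import random

def pso(objective, dim, n_particles=20, iters=200, seed=1,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal Particle Swarm Optimization with a global-best topology."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia plus attraction to personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

if __name__ == "__main__":
    def sphere(x):
        return sum(v * v for v in x)   # simple test objective
    best, val = pso(sphere, dim=3)
    print("best point:", [round(v, 4) for v in best], "value:", round(val, 6))
```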

  12. AGENT-BASED COMPUTATIONAL ECONOMICS

    Directory of Open Access Journals (Sweden)

    FABIÁN ANDRÉS GIRALDO GIRALDO

    2012-12-01

    The objective of this article is to present several research works on a simulation approach called agent-based computational economics, which rejects the assumptions of the traditional approaches, namely that the economy is a closed system that eventually reaches an equilibrium state and that assumptions of perfect rationality and homogeneous investments must be made so that the models can be treated analytically. Instead, it views the economy as a complex, adaptive and dynamic system. This new approach makes it possible to use agent-based simulation to understand how various economic agents (firms, economic groups) with their own rules and objectives are able to interact with each other and with their environment, giving rise to emergent behaviors that cannot be explained directly from the properties of the individual agents.

  13. Hardware Development and Locomotion Control Strategy for an Over-Ground Gait Trainer: NaTUre-Gaits.

    Science.gov (United States)

    Luu, Trieu Phat; Low, Kin Huat; Qu, Xingda; Lim, Hup Boon; Hoon, Kay Hiang

    2014-01-01

    Therapist-assisted body-weight-supported (TABWS) gait rehabilitation was introduced two decades ago. The benefit of TABWS for the functional recovery of walking in spinal cord injury and stroke patients has been demonstrated and reported. However, a shortage of therapists, labor-intensiveness, and the short duration of training are some limitations of this approach. To overcome these deficiencies, robotic-assisted gait rehabilitation systems have been suggested, and they have gained attention from researchers and clinical practitioners in recent years. To achieve the same objective, an over-ground gait rehabilitation system, NaTUre-gaits, was developed at the Nanyang Technological University. The design was based on a clinical approach to provide four main features: pelvic motion, body weight support, an over-ground walking experience, and lower-limb assistance. These features are achieved by three main modules of NaTUre-gaits: 1) a pelvic assistance mechanism, 2) a mobile platform, and 3) a robotic orthosis. Predefined gait patterns are required for a robotic-assisted system to follow. In this paper, the gait pattern planning for NaTUre-gaits was accomplished by an individual-specific gait pattern prediction model, which generates gait patterns that resemble the natural gait patterns of the targeted subjects. The features of NaTUre-gaits have been demonstrated in walking trials with several subjects, and the trials have been evaluated by therapists and doctors. The results show that the 10-m walking trials could be carried out with a reduction in manpower, and that the task-specific repetitive training approach and natural walking gait patterns were also successfully achieved.

  14. Integrated computer simulation for considering daylight in the assessment of the energy performance of buildings

    Directory of Open Access Journals (Sweden)

    Evelise Leite Didoné

    Daylight is an important strategy for reducing energy consumption in buildings. To predict it, the use of simulation programs that employ climate files and process the simulations with the Daylight Coefficients concept is recommended. EnergyPlus, a program for thermal-energy simulation, is one of these tools. However, this software has limitations in its daylighting module that overestimate daylight in indoor spaces. To circumvent these limitations, this work proposes a methodology for assessing energy efficiency that takes daylight into account through the use of two programs. The methodology consists of evaluating the lighting and energy performance through simulation with the Daysim and EnergyPlus programs. Daysim produces a report describing the artificial-lighting control, which is used in the EnergyPlus energy simulation to compute the final energy consumption of the analyzed spaces. The results indicate that the proposed methodology proved adequate to overcome the limitations of EnergyPlus and to assess the energy efficiency of buildings while accounting for the use of daylight. This work shows an alternative and reliable path for considering daylight in the assessment of building energy efficiency.
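
    As a rough sketch of the coupling described above (a daylighting tool produces an hourly artificial-lighting control schedule that the energy model then consumes), the code below turns a hypothetical hourly dimming-fraction file into annual lighting energy for one zone. The file format, column name and power density are assumptions made for illustration; they are not the actual Daysim or EnergyPlus interface.

```python
import csv

def annual_lighting_kwh(schedule_csv, power_density_w_m2, floor_area_m2):
    """Sum hourly lighting energy from a dimming-fraction schedule.

    schedule_csv is assumed to hold one row per hour with a single column
    'fraction' in [0, 1] (1 = lights fully on). This loosely mimics the
    control report a daylighting tool could hand to an energy model; the
    format is an assumption, not the real Daysim output.
    """
    total_wh = 0.0
    with open(schedule_csv, newline="") as f:
        for row in csv.DictReader(f):
            total_wh += float(row["fraction"]) * power_density_w_m2 * floor_area_m2
    return total_wh / 1000.0  # kWh per year

if __name__ == "__main__":
    # Write a toy 8760-hour schedule so the example is self-contained:
    # daylit hours are assumed to dim the lights to 20 % of full power.
    with open("dimming.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["fraction"])
        for h in range(8760):
            writer.writerow([0.2 if 8 <= h % 24 < 18 else 1.0])
    # Hypothetical zone: 10 W/m2 of installed lighting over 100 m2.
    print(f"{annual_lighting_kwh('dimming.csv', 10.0, 100.0):.0f} kWh/year")
```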

  15. Towards a computational treatment of Aktionsart

    Directory of Open Access Journals (Sweden)

    Juan Aparicio

    2013-12-01

    In the area of Natural Language Processing (NLP), semantic treatment is fundamental when building intelligent applications. However, the research currently being carried out in NLP is still far from achieving deep levels of language comprehension. The main objective of our research is the representation of Aktionsart (the way in which the event expressed by a verb is construed in its temporal development). One of the basic difficulties of the semantic treatment of language is the establishment of classes, mainly due to the gradual nature of meaning and the strong influence of context on the interpretation of the different units. In this article we focus on presenting the lexical aspectual classes of our proposal. The classes defined fall into two groups: the simple classes (states, processes and points), whose combination gives rise to the complex classes (culminations, accomplishments and graduals). This presentation covers both the theoretical point of view and the computational implementation.

  16. Hardware malware

    CERN Document Server

    Krieg, Christian

    2013-01-01

    In our digital world, integrated circuits are present in nearly every moment of our daily life. Even when using the coffee machine in the morning, or driving our car to work, we interact with integrated circuits. The increasing spread of information technology in virtually all areas of life in the industrialized world offers a broad range of attack vectors. So far, mainly software-based attacks have been considered and investigated, while hardware-based attacks have attracted comparatively little interest. The design and production process of integrated circuits is mostly decentralized due to

  17. Computer program to calculate the power required by agricultural machines and implements

    OpenAIRE

    Pablo Pereira Corrêa Klaver; Ricardo Ferreira Garcia; José Francisco Sá Vasconcelos Júnior; Delorme Corrêa Junior; Wellington Gonzaga Vale

    2013-01-01

    The use of computer programs in the agricultural sector makes it possible to achieve specific goals in the area. Among these, one of the most complex is the proper selection of agricultural machines and implements aimed at optimizing field operations, mainly because of the great variety of equipment available on the market and the range of tasks and working conditions to which they are subjected in the field. The objective of this work was to develop a computer program to calculate the required power...

  18. Computer program to calculate the power required by agricultural machines and implements

    Directory of Open Access Journals (Sweden)

    Pablo Pereira Corrêa Klaver

    2013-12-01

    The use of computer programs in the agricultural sector makes it possible to achieve specific goals in the area. Among these, one of the most complex is the proper selection of agricultural machines and implements aimed at optimizing field operations, mainly because of the great variety of equipment available on the market and the range of tasks and working conditions to which they are subjected in the field. The objective of this work was to develop a computer program to calculate the power required by agricultural machines and implements normally used in field operations, from soil preparation to crop establishment. Developed in the PHP language, the program uses the ASAE D497.4 standard - Agricultural Machinery Management Data as the reference for its calculations. With the program, power-demand calculations for agricultural machines and implements can be carried out in a simplified way over the internet.
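
    The program described above bases its calculations on ASAE D497.4. As a hedged sketch of the kind of calculation involved, the code below uses the standard's general implement-draft form, D = Fi*(A + B*S + C*S^2)*W*T, and converts draft to drawbar power; the coefficient values shown are placeholders and must be replaced with the standard's tabulated values for a real implement and soil.

```python
def implement_draft_N(A, B, C, Fi, speed_kmh, width_m, depth_cm):
    """General draft form used in ASAE D497.4: D = Fi*(A + B*S + C*S^2)*W*T.

    A, B, C are implement-specific parameters and Fi the soil-texture
    factor; they must come from the standard's tables. The placeholder
    values used in __main__ below are NOT real table entries.
    Returns draft in newtons.
    """
    return Fi * (A + B * speed_kmh + C * speed_kmh ** 2) * width_m * depth_cm

def drawbar_power_kW(draft_N, speed_kmh):
    """Drawbar power = draft [kN] * speed [m/s]."""
    return (draft_N / 1000.0) * (speed_kmh / 3.6)

if __name__ == "__main__":
    # Hypothetical implement: 3 m wide, working 15 cm deep at 7 km/h.
    D = implement_draft_N(A=300.0, B=0.0, C=4.0, Fi=0.7,
                          speed_kmh=7.0, width_m=3.0, depth_cm=15.0)
    print(f"draft: {D:.0f} N, drawbar power: {drawbar_power_kW(D, 7.0):.1f} kW")
```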

  19. Towards a unified computational model of natural language

    Directory of Open Access Journals (Sweden)

    Benjamín Ramírez González

    2013-12-01

    What kind of formalism should be used to represent natural language? A formalism capable of adequately describing all the sequences of natural languages is needed; but it should also, as far as possible, be a simple formalism with a low computational cost. This question has generated much controversy between the main generative schools: Transformational Grammar and Unification Grammars. This article argues that, despite the existing differences, these schools ultimately formalize human language by means of the same well-defined type of formalism: what Noam Chomsky called context-free language. From the perspective of this article, present-day Linguistics is in a position to offer a unified computational model of natural language.

  20. Learning in motion: an experiment to stimulate teachers' Computational Thinking with M-Learning and U-Learning

    Directory of Open Access Journals (Sweden)

    Guaraci Vargas Greff

    2018-03-01

    This article reports on the experience of a pedagogical practice of computational thinking in the classroom. It makes use of mobile and ubiquitous learning in the practice of Computational Thinking to build an application for a mobile or portable device with the App Inventor tool, available on the Massachusetts Institute of Technology website. The theoretical basis is inquiry methodologies, including discovery learning, inductive inquiry, anchored instruction, case studies and Problem- or Project-Based Learning (ABP). This is a qualitative, collaborative case study that analyzes and applies the course proposal available on the site, analyzes the profile of the respondents and presents opinions on the activity. The results highlight relevant aspects of the teacher profile for mobile and ubiquitous learning, as well as guidelines for the teaching practice of Computational Thinking with mobile and ubiquitous learning in the classroom, in addition to the acceptance by students and teachers of the use of mobile devices for learning programming logic.

  1. A COMPUTER GAME TO SUPPORT PEOPLE WITH CEREBRAL PALSY

    Directory of Open Access Journals (Sweden)

    Jorlei Luis Baierle

    2012-06-01

    This article presents an educational computer game developed to act as a new teaching tactic in a virtual learning environment with emotional pedagogical agents that perform different functions. The computer game aims to develop logical reasoning by offering challenges to students in a 3D environment, including characters such as monkeys, bees, an alligator, bats and a gorilla along the route of a train that carries supplies. The game scenario has diverse elements, such as bridges, a tunnel, a lake, a forest, mountains and an indigenous village. The project aims to contribute to the teaching-learning process by offering a dynamic environment that interacts with students, respecting and adapting to their learning characteristics. The intention is to adapt the game to work with people with physical motor disabilities but with the capacity to learn, in order to assist with their inclusion in our society.

  2. A COMPUTATIONAL IMPLEMENTATION OF PERIPHRASTIC VERB CONSTRUCTIONS IN FRENCH

    Directory of Open Access Journals (Sweden)

    Leonel Figueiredo de Alencar

    This article describes the treatment of the passive and the compound past (passé composé) in FrGramm, a computational grammar of French implemented in Lexical-Functional Grammar (LFG) using the XLE software. Because of the duality of auxiliaries and past participle (PTPST) agreement, the second periphrasis exhibits greater structural complexity in French than in languages such as English and Portuguese, and consequently represents a greater challenge for computational implementation. An additional difficulty is modeling the morphological and syntactic-semantic regularities of the passive. FrGramm solves this problem by means of a productive lexical rule. It also implements the constraints governing the formation of the two verbal periphrases, except for PTPST agreement with the direct object. The implementation was evaluated by applying an automatic syntactic analyzer (parser) to 157 grammatical sentences and 279 ungrammatical constructions. All the sentences in the first set were analyzed correctly. Only two constructions in the second set, which violate the precedence of the compound-past auxiliary over the passive auxiliary, were analyzed as grammatical. FrGramm is the only LFG grammar of French with this coverage currently made freely available. A future version will handle PTPST agreement with the direct object and avoid the overgeneration mentioned above.

  3. Introduction to Hardware Security

    Directory of Open Access Journals (Sweden)

    Yier Jin

    2015-10-01

    Hardware security has become a hot topic recently, with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined this area better understand the challenges and tasks within the hardware security domain, and to help both academia and industry investigate countermeasures and solutions to hardware security problems, we introduce the key concepts of hardware security as well as its relations to related research topics in this survey paper. Emerging hardware security topics will also be clearly depicted, through which the future trend will be elaborated, making this survey paper a good reference for the continuing research efforts in this area.

  4. Computational Theory of Mind.

    Directory of Open Access Journals (Sweden)

    Mario Camacho Pinto

    2000-12-01


    When I published my work entitled “Inteligencia Artificial y Neurología” in the journal MEDICINA, in four issues (Nos. 14, 15, 16 and 17, 1986-1987), I presented exhaustively the contemporary scientific contribution of the neurosciences, and I entertained the naive illusion (a frequent occurrence in human beings) of sharing, however distantly, in the optimism of the Japanese scientists who promised to achieve, within a few years, artificial intelligence comparable to human intelligence. Unfortunately it did not turn out that way, and it could not have, which in no way means that I ignore or fail to acknowledge the sensational achievements those same scientists have continued to attain.

    I now find myself enthusiastic about the development of another topic with similar characteristics, which leads me to comment, however briefly, on its trajectory.

    It concerns the computational theory of mind, a most interesting subject expounded at length by Steven Pinker of the Massachusetts Institute of Technology in the book of which he is the author, a 660-page best seller entitled “How the Mind Works”, published last year with 800 bibliographic references.

    Its careful reading, together with my many faithful transcriptions and my own information gathered over the Internet, constitutes the intellectual baggage and the scientific support that have prompted me to write this brief commentary on such an important topic for MEDICINA.

    The Computational Theory of Mind has its origin in the brilliant ideas of the mathematician Alan Turing, who showed that a binary machine could be programmed to perform any algorithmic task, which was complemented in the same year, 1937, by Claude Shannon of MIT with the notion of integrating circuits into the electrical relays that make up the binary system of information storage…

  5. COGNITIVE NEUROSCIENCE AS A BASIS FOR ANALYZING THE COMPUTATIONAL THINKING PROCESS THROUGH PROGRAMMING

    Directory of Open Access Journals (Sweden)

    Lucas Tadeu Hinterholz

    2014-09-01

    Studies in the field of Cognitive Neuroscience are evolving and pointing to it as essential for understanding the construction of knowledge. This research is part of the proposal of the project Contribuição da Tecnologia Computacional para a Assistência Social, with support from the extension project Unisc - Inclusão Digital, both developed at the Universidade de Santa Cruz do Sul and carried out by professors and students of the Licentiate degree in Computing. As a theoretical basis, concepts from Cognitive Neuroscience and Reflective Abstraction were studied in order to understand the mental processes and to follow the learning outcomes of students while they use a computer to program in a specific language. The methodology included a case study, carried out through a programming-teaching workshop for adolescents between 14 and 17 years old who had no prior knowledge. They answered a test based on the Piagetian Clinical Method (MCP) related to implicit and explicit memory, adapted for application during the use of a programming language. As a result, 83.34% of the participants met the expectation and showed characteristics of abstraction and awareness compatible with the reflective cognitive level. In this sense, it is argued that computer programming stimulates human cognition and contributes to the development of Computational Thinking, and should be included in formal education as part of school curricula.

  6. Open Hardware Business Models

    Directory of Open Access Journals (Sweden)

    Edy Ferreira

    2008-04-01

    In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  7. Open Hardware Business Models

    OpenAIRE

    Edy Ferreira

    2008-01-01

    In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  8. CREATION OF A COMPUTATIONAL TOOL FOR THE CONTROL OF SHORT-TERM ACTIVITIES

    Directory of Open Access Journals (Sweden)

    Michele Tereza Marques Carvalho

    2016-12-01

    Growing competitiveness in the civil construction sector has led companies to seek performance improvements related to the results obtained at the work fronts. This happens through quality and productivity improvement programs, including workforce training, improvement of production techniques and refinement of the language used among all the parties involved in carrying out the tasks. The focus of this work is therefore the development of a computational tool that integrates all the planning and control indicators, charts, site diaries and schedules needed to allow a complete view of the construction work and an analysis of the planning that points out its strengths and weaknesses, facilitating the decision-making process. This instrument will show where planning and control are failing, making it possible to improve the guidelines more quickly, given that such failures take a long time to be identified both by planners and by those responsible for the work fronts. With the data obtained, an analysis will be carried out of the results achieved with the development of the tool, the accuracy of the data used, and the need for its development and use in other construction projects.

  9. Development of an educational computer program for the design of absorption columns

    Directory of Open Access Journals (Sweden)

    Nehemias Curvelo Pereira

    1998-05-01

    Gas absorption is a chemical engineering unit operation in which mass is transferred from a gas stream to a liquid stream. The principle of the operation is simple, and the difficulties in sizing absorption columns are associated with carrying out a set of mass and energy balances at the operating conditions of the system. Using the DELPHI programming language for the WINDOWS environment and equations found in the literature, an educational computer program was developed for the design of isothermal absorption towers with several types of packing, operating in counter-current flow. The results are presented both numerically and graphically, and can be printed. Some examples taken from the literature were used to test the effectiveness of the software, and the errors in the program's results were observed to be around 5%.
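
    As a minimal illustration of the kind of balance the abstract mentions (not the DELPHI program itself), the sketch below sizes a dilute, isothermal, counter-current packed column by numerically integrating the number of gas-phase transfer units, N_OG = ∫ dy/(y − y*), with a linear equilibrium line y* = m·x and an operating line from the solute mass balance; the stream data and the height of a transfer unit (HOG) are illustrative.

```python
def packed_height(y_in, y_out, x_in, L_over_G, m, HOG, steps=2000):
    """Height of a dilute counter-current absorption column, Z = HOG * NOG.

    Operating line (solute balance referenced to the top of the column,
    where the gas leaves at y_out and the liquid enters at x_in):
        x(y) = x_in + (y - y_out) / (L/G)
    Equilibrium line: y* = m * x.
    NOG is integrated numerically with the trapezoidal rule.
    """
    dy = (y_in - y_out) / steps
    nog = 0.0
    for i in range(steps + 1):
        y = y_out + i * dy
        x = x_in + (y - y_out) / L_over_G   # liquid composition at this level
        driving_force = y - m * x           # y - y*
        weight = 0.5 if i in (0, steps) else 1.0
        nog += weight * dy / driving_force
    return nog, HOG * nog

if __name__ == "__main__":
    # Illustrative case: scrub gas from 5 % to 0.5 % solute with clean solvent,
    # L/G = 2.0 (mole basis), equilibrium slope m = 1.2, HOG = 0.6 m.
    nog, z = packed_height(y_in=0.05, y_out=0.005, x_in=0.0,
                           L_over_G=2.0, m=1.2, HOG=0.6)
    print(f"NOG = {nog:.2f}, packed height = {z:.2f} m")
```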

  10. Computational characterization of the B epitopes of the class I chitinase of Ananas comosus (pineapple)

    Directory of Open Access Journals (Sweden)

    César Muñoz Mejía

    2011-01-01

    Objectives: To determine the allergenic potential of pineapple chitinase and to propose a computational model of the structure of this protein for the prediction of possible IgE-binding sites, the B epitopes, which are involved in allergic reactions to this fruit. Methods: Starting from a previously reported pineapple DNA sequence that translates into a protein homologous to chitinases of other fruits, and using bioinformatics tools and databases available on the web, a computational model of pineapple chitinase was obtained, and its structure and physicochemical characteristics were analyzed in order to predict epitopes within it. Results: A computational model of a 204-amino-acid protein belonging to the class I chitinase group was generated. The prediction and subsequent analysis of the epitopes obtained from several bioinformatics servers showed that they have characteristics (Relative Surface Area, RSA) that make them suitable to belong to an IgE-binding site. Conclusions: The pineapple chitinase studied is homologous to one of the groups of food allergens involved in the latex-fruit syndrome and could be responsible for allergic reactions to this food. Being able to predict these epitopes is also useful in the design of transgenic foods.
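
    The prediction described above relied on structure-based web servers. Purely as a generic illustration of sequence-based epitope screening (a different and much simpler technique), the sketch below scores a protein with a sliding-window average of per-residue propensities. The propensity values in the dictionary are invented placeholders; a real analysis would use a published scale or the servers cited by the authors.

```python
def window_scores(sequence, scale, window=7):
    """Average per-residue propensity over a sliding window.

    Residues missing from the scale get a neutral score of 0.0.
    Returns (center_position, score) pairs; high scores flag candidate
    linear-epitope regions under this toy model.
    """
    vals = [scale.get(aa, 0.0) for aa in sequence]
    half = window // 2
    out = []
    for i in range(half, len(sequence) - half):
        out.append((i, sum(vals[i - half:i + half + 1]) / window))
    return out

if __name__ == "__main__":
    # Placeholder propensity scale (NOT a published hydrophilicity scale).
    toy_scale = {"D": 1.0, "E": 1.0, "K": 0.9, "R": 0.9, "S": 0.4, "N": 0.4,
                 "G": 0.1, "A": 0.0, "L": -0.8, "I": -0.8, "V": -0.7, "F": -0.9}
    seq = "MALKDEESSRKNDGAVLIFESKDDNRSGA"   # toy sequence, not pineapple chitinase
    top = sorted(window_scores(seq, toy_scale), key=lambda t: t[1], reverse=True)[:3]
    for pos, score in top:
        print(f"window centred at {pos}: score {score:.2f}")
```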

  11. Open Hardware at CERN

    CERN Multimedia

    CERN Knowledge Transfer Group

    2015-01-01

    CERN is actively making its knowledge and technology available for the benefit of society and does so through a variety of different mechanisms. Open hardware has in recent years established itself as a very effective way for CERN to make electronics designs and in particular printed circuit board layouts, accessible to anyone, while also facilitating collaboration and design re-use. It is creating an impact on many levels, from companies producing and selling products based on hardware designed at CERN, to new projects being released under the CERN Open Hardware Licence. Today the open hardware community includes large research institutes, universities, individual enthusiasts and companies. Many of the companies are actively involved in the entire process from design to production, delivering services and consultancy and even making their own products available under open licences.

  12. Hardware description languages

    Science.gov (United States)

    Tucker, Jerry H.

    1994-01-01

    Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASIC's). However, VHDL is rapidly gaining in popularity.

  13. Hardware protection through obfuscation

    CERN Document Server

    Bhunia, Swarup; Tehranipoor, Mark

    2017-01-01

    This book introduces readers to various threats faced during design and fabrication by today’s integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or “IC Overproduction,” insertion of malicious circuits, referred to as “Hardware Trojans”, which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and on-chip infrastructure needed for secure exchange o...

  14. ZEUS hardware control system

    Science.gov (United States)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-12-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.

  15. ZEUS hardware control system

    International Nuclear Information System (INIS)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-01-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users. (orig.)

  16. Hardware Objects for Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Thalinger, Christian; Korsholm, Stephan

    2008-01-01

    Java, as a safe and platform-independent language, avoids access to low-level I/O devices or direct memory access. In standard Java, low-level I/O is not a concern; it is handled by the operating system. However, in the embedded domain resources are scarce, and a Java virtual machine (JVM) without an underlying middleware is an attractive architecture. When running the JVM on bare metal, we need access to I/O devices from Java; therefore we investigate a safe and efficient mechanism to represent I/O devices as first-class Java objects, where device registers are represented by object fields. Access to those registers is safe as Java’s type system regulates it. The access is also fast as it is directly performed by the bytecodes getfield and putfield. Hardware objects thus provide an object-oriented abstraction of low-level hardware devices. As a proof of concept, we have implemented hardware objects...

  17. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  18. The VMTG Hardware Description

    CERN Document Server

    Puccio, B

    1998-01-01

    The document describes the hardware features of the CERN Master Timing Generator. This board is the common platform for transmitting the General Machine Timing required by the CERN accelerators. In addition, the paper describes the various jumper options used to customise the card, which is compliant with the VMEbus standard.

  19. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  20. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  1. RRFC hardware operation manual

    International Nuclear Information System (INIS)

    Abhold, M.E.; Hsue, S.T.; Menlove, H.O.; Walton, G.

    1996-05-01

    The Research Reactor Fuel Counter (RRFC) system was developed to assay the ²³⁵U content in spent Material Test Reactor (MTR) type fuel elements underwater in a spent fuel pool. RRFC assays the ²³⁵U content using active neutron coincidence counting and also incorporates an ion chamber for gross gamma-ray measurements. This manual describes RRFC hardware, including detectors, electronics, and performance characteristics.

  2. Server-Based Computing Model: A Case Study Using Thin Clients

    OpenAIRE

    Moacir Luiz Barnabé; Rita de Cassia Rocha; Reginaldo Castro de Souza; Carlos Eduardo Costa Vieira

    2015-01-01

    This article presents the Thin Client solution as an economical and secure alternative to the computing model massively used in the business environment today. Paradoxically, clients are initially provided with equipment and technological resources, in the form of computers, only for these resources later to be controlled or limited, in an ongoing effort, in order to guarantee security and operational availability and to prevent recreational use.

  3. Computational Simulation Methods in Biology

    Directory of Open Access Journals (Sweden)

    Amir Dario Maldonado Arce

    2016-12-01

    Computational simulation techniques are used extensively to study biological systems and, more generally, solid and soft materials. Because of the complexity of biological phenomena, and the impossibility of studying theoretically the behavior of systems such as proteins and membranes, computational simulation is used to study the structure and dynamics of these systems on different time scales. In this article we briefly describe some of the computational simulation techniques most used in Biology: Molecular Dynamics, Brownian Dynamics and the Monte Carlo method. Our intention is to provide an introductory overview of the usefulness of molecular simulation methods in Biology.
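
    Of the three techniques listed, the Monte Carlo method is the simplest to illustrate. The sketch below is a minimal Metropolis Monte Carlo sampler for a single particle in a one-dimensional harmonic potential; it is a didactic toy, not one of the biological applications discussed in the article.

```python
import math
import random

def metropolis_harmonic(n_steps=100_000, k=1.0, kT=1.0, step=0.5, seed=0):
    """Metropolis sampling of x with energy U(x) = 0.5*k*x^2.

    Returns the sampled mean of x^2, which should approach kT/k
    (equipartition) as n_steps grows.
    """
    rng = random.Random(seed)
    x, u = 0.0, 0.0
    sum_x2 = 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)       # trial move
        u_new = 0.5 * k * x_new ** 2
        # Accept with probability min(1, exp(-(U_new - U)/kT)).
        if u_new <= u or rng.random() < math.exp(-(u_new - u) / kT):
            x, u = x_new, u_new
        sum_x2 += x * x
    return sum_x2 / n_steps

if __name__ == "__main__":
    print(f"<x^2> = {metropolis_harmonic():.3f}  (expected ~ 1.0 for kT/k = 1)")
```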

  4. Hardware Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists
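
    As a toy version of the absorption-only regime mentioned above (plain ray integration of attenuation, not the hardware-accelerated hexahedron projection the paper implements), the sketch below computes a radiograph of a small voxel grid with the Beer-Lambert law, I = I0·exp(−Σ μ·Δl), for parallel rays along one axis; the grid and attenuation coefficients are synthetic.

```python
import math

def radiograph(mu, dl, I0=1.0):
    """Absorption-only radiograph of a voxel grid.

    mu is a 3-D nested list mu[z][y][x] of attenuation coefficients (1/cm),
    dl the voxel size along the ray (cm). Rays travel along z for every
    (y, x) detector pixel: I = I0 * exp(-sum_z mu * dl).
    """
    ny, nx = len(mu[0]), len(mu[0][0])
    image = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            path = sum(mu[z][y][x] for z in range(len(mu))) * dl
            image[y][x] = I0 * math.exp(-path)
    return image

if __name__ == "__main__":
    # Synthetic 8x8x8 phantom: weakly absorbing block with a dense cube inside.
    n = 8
    mu = [[[0.05 for _ in range(n)] for _ in range(n)] for _ in range(n)]
    for z in range(3, 6):
        for y in range(3, 6):
            for x in range(3, 6):
                mu[z][y][x] = 1.0
    img = radiograph(mu, dl=0.5)
    for row in img:
        print(" ".join(f"{v:.2f}" for v in row))
```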

  5. Sterilization of space hardware.

    Science.gov (United States)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques of sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Discussed sterilization chemicals include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  6. Hardware characteristic and application

    International Nuclear Information System (INIS)

    Gu, Dong Hyeon

    1990-03-01

    The contents of this book cover: the system board (memory, performance, the system timer, the system clock and specifications); the coprocessor (programming interface and hardware interface); the power supply (input and output, protection of the DC outputs, the Power Good signal); the 84-key and 101/102-key keyboards; the BIOS system; the 80286 instruction set and the 80287 coprocessor; characters, keystrokes and colors; and the communication and compatibility of the IBM personal computer with respect to application direction, multitasking, and code for distinguishing between systems.

  7. Programa GENES: a Computational Application for Statistics Applied to Genetics (GENES - Software for Experimental Statistics in Genetics)

    Directory of Open Access Journals (Sweden)

    Cosme Damião Cruz

    1998-03-01

    The main purpose of the GENES software is to help people working with genetic analysis and data processing in breeding programs, using several biometric models. The software has several help windows that are very friendly to the user. More information about this program is available in the book "Programa GENES - Aplicativo Computacional em Genética e Estatística" (442 p., 1997). Purchase orders are welcome at the following address: editora@mail.ufv.br. Shareware copies of the GENES software are available at http://www.genetica.dbg.ufv.br. The GENES program is intended for the analysis and processing of data using different biometric models. Its use is of great importance in genetic studies applied to plant and animal breeding, since it allows parameters to be estimated for understanding biological phenomena, which is fundamental in decision-making and in predicting the success and viability of a selection strategy. The program can be obtained over the Internet (http://www.genetica.dbg.ufv.br) or by request from: Departamento de Biologia Geral, Universidade Federal de Viçosa, 36571-000 Viçosa, MG, Brasil.

  8. ANALYSIS OF THE OPERATING PARAMETERS AND COMPUTER SIMULATION OF A TURBINE PROTOTYPE OPERATING ON COMPRESSED AIR

    Directory of Open Access Journals (Sweden)

    Wesley Saldanha Nogueira Oliveira

    2016-12-01

    This paper presents the initial results of a comparison of some operating parameters of a prototype of a simple impulse turbine operating with compressed air as the working fluid, carried out at the SENAI unit in Barra do Piraí/RJ. The prototype was developed entirely at the Centro Universitário Geraldo Di Biase - UGB - Barra do Piraí/RJ campus, initially using water vapor as the working fluid, which proved inefficient in the initial tests. The computational simulation of the system was carried out with the SolidWorks software at the Universidade Federal Fluminense - UFF - Volta Redonda/RJ campus, which made it possible to determine the operating range with the highest efficiency in the experimental results, by relating the operating pressure, the angular velocity transmitted by the fluid to the turbine shaft, and the turbine power. The tangential-velocity values obtained in the simulations for the rotational region converged with the experimental values measured on the prototype.

  9. COMPUTER HARDWARE MARKING

    CERN Multimedia

    Groupe de protection des biens

    2000-01-01

    As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'. IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system. For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge: equipment-marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.); training of personnel designated by Division Leade...

  10. Foundations of hardware IP protection

    CERN Document Server

    Torres, Lionel

    2017-01-01

    This book provides a comprehensive and up-to-date guide to the design of security-hardened, hardware intellectual property (IP). Readers will learn how IP can be threatened, as well as protected, by using means such as hardware obfuscation/camouflaging, watermarking, fingerprinting (PUF), functional locking, remote activation, hidden transmission of data, hardware Trojan detection, protection against hardware Trojan, use of secure element, ultra-lightweight cryptography, and digital rights management. This book serves as a single-source reference to design space exploration of hardware security and IP protection. · Provides readers with a comprehensive overview of hardware intellectual property (IP) security, describing threat models and presenting means of protection, from integrated circuit layout to digital rights management of IP; · Enables readers to transpose techniques fundamental to digital rights management (DRM) to the realm of hardware IP security; · Introduce designers to the concept of salutar...

  11. Open hardware for open science

    CERN Multimedia

    CERN Bulletin

    2011-01-01

    Inspired by the open source software movement, the Open Hardware Repository was created to enable hardware developers to share the results of their R&D activities. The recently published CERN Open Hardware Licence offers the legal framework to support this knowledge and technology exchange.   Two years ago, a group of electronics designers led by Javier Serrano, a CERN engineer, working in experimental physics laboratories created the Open Hardware Repository (OHR). This project was initiated in order to facilitate the exchange of hardware designs across the community in line with the ideals of “open science”. The main objectives include avoiding duplication of effort by sharing results across different teams that might be working on the same need. “For hardware developers, the advantages of open hardware are numerous. For example, it is a great learning tool for technologies some developers would not otherwise master, and it avoids unnecessary work if someone ha...

  12. MORPION: a fast hardware processor for straight line finding in MWPC

    International Nuclear Information System (INIS)

    Mur, M.

    1980-02-01

    A fast hardware processor for straight-line finding in MWPCs has been built at Saclay and successfully operated in the NA3 experiment at CERN. We give the motivation for building this processor and describe the hardware implementation of the line-finding algorithm. Finally, its use and performance in NA3 are described.

  13. Sistema computacional para dosimetria de nêutrons e fótons baseado em métodos estocásticos aplicado a radioterapia e radiologia

    Directory of Open Access Journals (Sweden)

    Bruno Machado Trindade

    2011-04-01

    OBJECTIVE: This article presents a procedure for converting computed tomography or magnetic resonance images into a three-dimensional voxel model for dosimetry purposes. This model is a personalized representation of the patient that can be used in simulations of nuclear particle transport with the MCNP (Monte Carlo N-Particle) code, reproducing the stochastic process of interaction of nuclear particles with human tissues. MATERIALS AND METHODS: The computational system developed, named SISCODES, is a tool for three-dimensional computational planning of radiotherapy treatments or radiological procedures. Starting from tomographic images of the patient, the treatment plan is modeled and simulated. The absorbed doses are then displayed as isodose curves superimposed on the model. SISCODES couples the three-dimensional model to the MCNP5 code, which simulates the protocol of exposure to ionizing radiation. RESULTS: SISCODES has been used by the NRI/CNPq research group to create anthropomorphic and anthropometric voxel models that are coupled to the MCNP code to model brachytherapy and teletherapy applied to tumors in the lungs, pelvis, spine, head, neck, and other sites. The modules currently implemented in SISCODES are presented together with example cases of radiotherapy planning. CONCLUSION: SISCODES provides a fast way to create personalized voxel models of any patient that can be used in simulations with stochastic codes such as MCNP. Combining MCNP simulation with a personalized patient model brings major improvements to the dosimetry of radiotherapy treatments.
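    To picture the image-to-voxel-model conversion described above, the following minimal sketch maps CT values (Hounsfield units) to tissue labels that a Monte Carlo transport code could associate with material definitions. The thresholds, labels and synthetic volume are illustrative assumptions, not SISCODES values.

```python
# Toy CT-to-voxel-model conversion: map Hounsfield-unit ranges to tissue labels
# that a Monte Carlo transport code could associate with material compositions.
import numpy as np

rng = np.random.default_rng(1)
ct = rng.integers(-1000, 1500, size=(8, 8, 8))      # fake HU volume (one value per voxel)

# illustrative HU thresholds: air < -400 <= soft tissue < 200 <= bone
thresholds = [-400, 200]
labels = np.array([0, 1, 2])                         # 0 = air, 1 = soft tissue, 2 = bone
voxel_model = labels[np.digitize(ct, thresholds)]

# material map that the transport code would use for each label
materials = {0: "air", 1: "soft tissue", 2: "bone"}
print({materials[k]: int((voxel_model == k).sum()) for k in materials})
```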

  14. SIMULACION COMPUTACIONAL DE PROCESOS DE CONGELACION Y DESHIDRATACION PARA ALIMENTOS SOLIDOS POROSOS Y LIQUIDOS NO NEWTONIANOS

    OpenAIRE

    LEMUS MONDACA, ROBERTO ALEJANDRO

    2012-01-01

    This doctoral thesis corresponds to part of the activities of FONDECYT Projects 1070186 and 1111067, in which mathematical modeling and computational simulation are used to describe the fluid, heat and mass transport phenomena that occur in the freezing and dehydration processes of porous solid foods and non-Newtonian liquids. In parallel, several highly complex characteristics of each thermal process are studied, such as: the use of conjugate models ...

  15. La Energía del Cerebro Humano Autolimita su Poder Computacional

    Directory of Open Access Journals (Sweden)

    Mario Camacho Pinto

    1990-08-01

    Computer scientists no longer consider it appropriate to keep asking how many MIPS or how many megaflops make up the brain's execution capacity, as one would for a Cray supercomputer or an IBM PC, but rather how many computational operations the brain can execute per unit of time, that is, its computational power.

    The neurophysiological approach involves three contributing aspects towards a positive answer, namely:

    1. Computational power of the interneuronal synapses.
    2. Computational power of the retina as a point of reference.
    3. Measurement of the total energy spent by the brain per unit of time.

    1. Computational power of the synapses. This in turn comprises three premises:

    a. The brain cannot "compute" unless the programming of the signals is carried out by transporting them from one synapse to the next through a sophisticated electrochemical mechanism that requires a certain amount of energy, which limits its power, as we shall see below.

    b. This transport takes time, which it has been possible to calculate in relation to the total distance that all the nerve impulses have to travel, a time estimated at one second for every ten impulses.

    c. The number of active synapses is estimated at 10^15.

    Thus the total number of "operations" results from relating the number of synapses in play to the distance the nerve impulse has to travel and its operating speed.

    Then, since there are approximately 10^15 synapses operating at 10 impulses per second, the raw result would be put at 10^16 synaptic operations of the brain per unit of time...
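    Taken at face value, the estimate quoted above is a single multiplication; the sketch below just restates that arithmetic with the abstract's own figures (which are themselves estimates, not measurements).

```python
# Back-of-the-envelope restatement of the abstract's figures (both are the
# article's own estimates): ~10^15 active synapses, each at ~10 impulses/s.
num_synapses = 1e15
impulses_per_second = 10

ops_per_second = num_synapses * impulses_per_second
print(f"raw estimate: {ops_per_second:.0e} synaptic operations per second")  # 1e+16
```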

  16. Desenvolvimento de uma ferramenta computacional 1-D para uso em projeto de turbinas a vapor

    OpenAIRE

    Fábio Santos Nascimento

    2012-01-01

    This work deals with the aerothermodynamic design of multi-stage steam turbines based on an approach using one-dimensional modeling techniques. A computer program for the preliminary design of multi-stage steam turbines was developed in FORTRAN 90, aiming to reduce preliminary design time, achieve independence from commercial programs, and provide access to the source code in order to modify and implement new models to improve the potential of the ...

  17. Hardware Support for Embedded Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    The general Java runtime environment is resource hungry and unfriendly for real-time systems. To reduce the resource consumption of Java in embedded systems, direct hardware support of the language is a valuable option. Furthermore, an implementation of the Java virtual machine in hardware enables worst-case execution time analysis of Java programs. This chapter gives an overview of current approaches to hardware support for embedded and real-time Java.

  18. HARDWARE TROJAN IDENTIFICATION AND DETECTION

    OpenAIRE

    Samer Moein; Fayez Gebali; T. Aaron Gulliver; Abdulrahman Alkandari

    2017-01-01

    The majority of techniques developed to detect hardware trojans are based on specific attributes. Further, the ad hoc approaches employed to design methods for trojan detection are largely ineffective. Hardware trojans have a number of attributes which can be used to systematically develop detection techniques. Based on this concept, a detailed examination of current trojan detection techniques and the characteristics of existing hardware trojans is presented. This is used to dev...

  19. Hardware assisted hypervisor introspection.

    Science.gov (United States)

    Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan

    2016-01-01

    In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. Furthermore, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experimental results show that our method can effectively detect hypercall-based attacks at some performance cost. Lastly, we discuss future approaches to reducing the performance cost and preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.

  20. LHCb: Hardware Data Injector

    CERN Multimedia

    Delord, V; Neufeld, N

    2009-01-01

    The LHCb High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 1 MHz of events which have been selected previously by the first-level hardware trigger. The selected events are consolidated into files and then sent to permanent storage for subsequent analysis on the Grid. The goal of the upgrade of the LHCb readout is to lift the limitation to 1 MHz. This means speeding up the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or faster technologies and might also need new networking protocols: a customized TCP or proprietary solutions. A test module is presented which integrates into the existing LHCb infrastructure. It is a 10-Gigabit traffic generator, flexible enough to generate LHCb's raw data packets using dummy or simulated data. These data are seen by the DAQ as real data coming from the sub-detectors. The implementation is based on an FPGA with a 10 Gigabit Ethernet interface. The module is integrated in the experiment control system. The architecture, ...

  1. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and to introduce the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  2. FY1995 evolvable hardware chip; 1995 nendo shinkasuru hardware chip

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This project aims at the development of 'Evolvable Hardware' (EHW), which can adapt its hardware structure to the environment to attain better hardware performance under the control of genetic algorithms. EHW is a key technology for exploring new application areas requiring real-time performance and on-line adaptation. 1. Development of an EHW-LSI for function-level hardware evolution, which includes 15 DSPs in one chip. 2. Application of the EHW to practical industrial applications such as data compression, ATM control, and digital mobile communication. 3. Two patents: (1) the architecture and the processing method for the programmable EHW-LSI; (2) a method of loss-less data compression using EHW. 4. The first international conference on evolvable hardware was held by the authors: the Intl. Conf. on Evolvable Systems (ICES96). It was decided at ICES96 that ICES will be held every two years, alternating between Japan and Europe, and a new society has accordingly been established by the authors. (NEDO)
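    For readers unfamiliar with the genetic-algorithm loop that drives evolvable hardware, the generic sketch below shows the select/crossover/mutate cycle on plain bit strings; in EHW the same loop would act on a hardware configuration bitstream. The fitness function and parameters are placeholders, not the EHW-LSI design.

```python
# Generic genetic algorithm over bit strings: the same select/crossover/mutate
# loop that, in EHW, would operate on a hardware configuration bitstream.
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 20, 50, 0.02

def fitness(genome):
    return sum(genome)        # placeholder objective: count of 1-bits ("OneMax")

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    next_gen = []
    for _ in range(POP_SIZE):
        p1, p2 = tournament(population), tournament(population)
        cut = random.randrange(1, GENOME_LEN)               # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [bit ^ (random.random() < MUTATION_RATE) for bit in child]   # bit-flip mutation
        next_gen.append(child)
    population = next_gen

best = max(population, key=fitness)
print(fitness(best), "/", GENOME_LEN)
```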

  3. Sintaxe X-barra: uma aplicação computacional

    Directory of Open Access Journals (Sweden)

    Gabriel de Ávila Othero

    2009-04-01

    http://dx.doi.org/10.5007/1984-8420.2008v9nespp15 In this work we present a computational application of X-bar theory (cf. HAEGEMAN, 1994; MIOTO et al., 2004) through the Grammar Play program, a syntactic parser written in Prolog. Grammar Play analyzes simple declarative sentences of Brazilian Portuguese, identifying their constituent structure. Its grammar is implemented in Prolog, using DCGs, and follows the templates proposed by X-bar theory. The parser is a first attempt to expand the coverage of similar analyzers, such as those sketched in Pagani (2004) and Othero (2004). The goals guiding the present version of Grammar Play are to computationally implement coherent linguistic models applied to the description of Portuguese and to create a computational tool that can be used didactically, for example, in introductory syntax and linguistics classes.
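    As a didactic companion to the abstract, the toy parser below builds a constituent tree for a simple declarative sentence from a handful of phrase-structure rules. It is written in Python rather than Prolog/DCG and its tiny lexicon is invented, so it only hints at what Grammar Play does.

```python
# Toy constituency parser for simple declarative sentences (S -> NP VP,
# NP -> Det N, VP -> V NP). Illustrative only; not the Grammar Play grammar.
LEXICON = {
    "o": "Det", "a": "Det",
    "menino": "N", "bola": "N",
    "chutou": "V", "viu": "V",
}

def parse_np(tokens, i):
    """NP -> Det N ; returns (tree, next_index) or None."""
    if i + 1 < len(tokens) and LEXICON.get(tokens[i]) == "Det" and LEXICON.get(tokens[i + 1]) == "N":
        return ("NP", ("Det", tokens[i]), ("N", tokens[i + 1])), i + 2
    return None

def parse_vp(tokens, i):
    """VP -> V NP"""
    if i < len(tokens) and LEXICON.get(tokens[i]) == "V":
        np = parse_np(tokens, i + 1)
        if np:
            tree, j = np
            return ("VP", ("V", tokens[i]), tree), j
    return None

def parse_s(sentence):
    """S -> NP VP ; returns the constituent tree or None."""
    tokens = sentence.lower().split()
    np = parse_np(tokens, 0)
    if np:
        np_tree, i = np
        vp = parse_vp(tokens, i)
        if vp and vp[1] == len(tokens):
            return ("S", np_tree, vp[0])
    return None

print(parse_s("o menino chutou a bola"))
```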

  4. Secure coupling of hardware components

    NARCIS (Netherlands)

    Hoepman, J.H.; Joosten, H.J.M.; Knobbe, J.W.

    2011-01-01

    A method and a system for securing communication between at least a first and a second hardware components of a mobile device is described. The method includes establishing a first shared secret between the first and the second hardware components during an initialization of the mobile device and,
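    The patent abstract is truncated, so the sketch below only illustrates the general idea of a shared secret established at initialization and later used to authenticate traffic between two components (here with an HMAC); it is not the claimed method.

```python
# Minimal sketch of the general idea (not the patented method): two hardware
# components share a secret established during device initialization and later
# authenticate their messages to each other with an HMAC over that secret.
import hmac, hashlib, os

shared_secret = os.urandom(32)   # established once, during initialization

def send(component_id: bytes, payload: bytes):
    tag = hmac.new(shared_secret, component_id + payload, hashlib.sha256).digest()
    return payload, tag

def verify(component_id: bytes, payload: bytes, tag: bytes) -> bool:
    expected = hmac.new(shared_secret, component_id + payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = send(b"modem", b"attach-request")
print(verify(b"modem", msg, tag))          # True
print(verify(b"modem", b"tampered", tag))  # False
```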

  5. Prótese para substituição total de disco intervertebral: desenvolvimento de modelo computacional e análise por elementos finitos Prótese de reemplazo total del disco intervertebral: modelo de desarrollo computacional y análisis de elementos finitos Prosthesis for total intervertebral disc replacement: computacional model development and finite element analysis

    Directory of Open Access Journals (Sweden)

    Tiago Nunes Campello

    2009-03-01

    INTRODUCTION: the idea of an artificial intervertebral disc is not new. The field of research on prostheses for spinal arthroplasty develops as new technologies in materials and medical engineering are developed or introduced, giving rise to new designs. OBJECTIVE: to establish a product development methodology for a total intervertebral disc replacement prosthesis project using computational engineering tools. METHODS: the development methodology for the total disc replacement prosthesis started with the definition of its virtual model, followed by virtual mechanical analysis by finite elements. RESULTS: the disc prosthesis was conceived with three components: the upper flange, the lower flange and the core. Applying the von Mises criterion to solve the virtual analysis, the prosthesis core was found to be the most heavily loaded component during axial compression and compression/shear. CONCLUSION: this study demonstrates the feasibility of developing a design for the manufacture of a total intervertebral disc replacement prosthesis by means of a computational methodology already well established in mechanical engineering projects, mainly in the automotive and aeronautical sectors.

  6. DelPapa - Aplicativo computacional para a análise de dados de experimentos no delineamento blocos ao acaso, usando o método Papadakis

    Directory of Open Access Journals (Sweden)

    Lindolfo Storck

    2015-05-01

    The first (unpublished) version of this computational application for analyzing data from experiments carried out in a randomized complete block design, by the usual method and by the Papadakis method, was developed in the Pascal programming language. Considering that the Papadakis method proved efficient for the main agricultural crops (maize, soybean, bean and wheat), and to make the application more user friendly, the Pascal version was reprogrammed in Java under the name DelPapa. The application performs the analysis of variance for the randomized complete block design by the usual method (estimating genetic parameters, measures of experimental quality and tests of the assumptions of the analysis of variance) and by the Papadakis method. Using the means adjusted by the covariate (the mean of the residuals of neighboring plots), it also performs the Scott-Knott test (P=0.05) to group the treatments.
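    The core of the Papadakis adjustment mentioned above is the covariate built from neighboring plots; the sketch below computes it for a toy field layout (residual = observed value minus treatment mean, covariate = mean residual of adjacent plots). The layout, treatment map and yields are invented, and this is not the DelPapa code.

```python
# Minimal sketch of the Papadakis covariate (illustrative, not DelPapa code):
# the covariate for each plot is the mean residual of its neighboring plots.
import numpy as np

# toy layout: rows are blocks, columns are plot positions; values are yields
yields = np.array([[5.1, 4.8, 6.0],
                   [5.4, 4.5, 6.2],
                   [5.0, 4.9, 5.8]])
treatments = np.array([[0, 1, 2],
                       [0, 1, 2],
                       [0, 1, 2]])       # assumed treatment map

treat_means = np.array([yields[treatments == t].mean() for t in range(3)])
residuals = yields - treat_means[treatments]

def papadakis_covariate(res, i, j):
    """Mean residual of the plots immediately above/below/left/right of (i, j)."""
    neighbors = []
    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        ni, nj = i + di, j + dj
        if 0 <= ni < res.shape[0] and 0 <= nj < res.shape[1]:
            neighbors.append(res[ni, nj])
    return np.mean(neighbors)

covariate = np.array([[papadakis_covariate(residuals, i, j)
                       for j in range(yields.shape[1])]
                      for i in range(yields.shape[0])])
print(covariate)   # used as a covariate in the adjusted analysis of variance
```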

  7. NDAS Hardware Translation Layer Development

    Science.gov (United States)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's rocket testing facilities. A software-hardware translation layer is needed so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be written. These drivers act more like plugins for the software: if the software is being used at E3, it should point to the E3 driver package; if it is being used at B2, it should point to the B2 driver package. The driver packages should also contain hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.
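    The plugin-style driver layout described above can be sketched as a common driver interface plus per-stand driver packages selected by name. Class names, the stand list and the placeholder read call are assumptions for illustration, not NDAS code.

```python
# Minimal sketch of a plugin-style hardware translation layer: a common driver
# interface with per-test-stand packages selected by stand name (all names here
# are hypothetical).
from abc import ABC, abstractmethod

class SignalConditionerDriver(ABC):
    @abstractmethod
    def read_channel(self, channel: int) -> float: ...

class Preston8300AUDriver(SignalConditionerDriver):
    """Shared by every stand that uses the Preston 8300AU (e.g. A1, A2, B2)."""
    def read_channel(self, channel: int) -> float:
        return 0.0   # placeholder for the real hardware call

# per-stand driver packages map logical devices onto concrete drivers
DRIVER_PACKAGES = {
    "A1": {"signal_conditioner": Preston8300AUDriver},
    "B2": {"signal_conditioner": Preston8300AUDriver},
    "E3": {"signal_conditioner": Preston8300AUDriver},   # would differ in reality
}

def load_drivers(stand: str):
    return {name: cls() for name, cls in DRIVER_PACKAGES[stand].items()}

drivers = load_drivers("B2")
print(drivers["signal_conditioner"].read_channel(0))
```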

  8. Hardware for dynamic quantum computing.

    Science.gov (United States)

    Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A

    2017-10-01

    We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.

  9. Soporte computacional para administración integrada de redes y servicios

    Directory of Open Access Journals (Sweden)

    José Nelson Pérez Castillo

    1996-05-01

    This article puts the current problems of communications network management into context, considering the deep repercussions that both the internationalization of the economy and the impetuous pace of technological progress have on the teleinformatics sector and on the quality of communications services. It briefly reviews the activities of the various standardization fronts, in particular ISO, ITU-T and the Internet. It then presents the characteristics of the computational support for application development, covering communications services, graphical interfaces and databases, taking into account the distributed nature of network management. Finally, it presents the general attributes of the development tools currently available.

  10. La Neurociencia Computacional hoy: II. El Proyecto Blue Brain, un ejemplo muy representativo en el campo

    Directory of Open Access Journals (Sweden)

    Jesús Cortés

    2009-01-01

    Computational Neuroscience is a recent but well-established field within the Neurosciences. In a first article (Cortés, 2009, http://www.cienciacognitiva.org/?p=55), "What it is and why it is hard to study", I explain its main paradigm: every mental process that takes place in our brain has a physical circuit or wiring that supports it. In this article I discuss a very representative example in the field: the macro-project for large-scale, real-time simulation of processes in the cerebral cortex, the famous Blue Brain Project.

  11. Análise computacional da compactação da cromatina de espermatozoides de galo Computational analysis of chromatin condensation of rooster spermatozoa

    Directory of Open Access Journals (Sweden)

    A.C.N. Rodrigues

    2009-12-01

    Methodological variants using toluidine blue (AT) were tested until a reliable protocol was established for the computational evaluation of chromatin condensation in rooster spermatozoa. Semen from ten 35-week-old roosters and from ten 60-week-old roosters was used. The best method was hydrolysis with 1N hydrochloric acid for 10 minutes, staining in a cuvette with 0.025% AT, pH 4.0, for 20 minutes, dehydration in alcohol, clearing in xylol and mounting with Canada balsam. All semen samples were submitted to this protocol and subsequently evaluated by computational image analysis, in which the area, length, width, perimeter, homogeneity of chromatin condensation within each head and intensity of chromatin condensation were measured. Spermatozoa from older roosters showed more chromatin alterations than those from young roosters, and young roosters had larger sperm heads than the older ones. Computational analysis of chromatin condensation proved to be a less subjective and more precise method than visual evaluation of the sperm heads.
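    The kind of per-head measurements listed above (area, perimeter, axis lengths, staining intensity) can be illustrated with a short image-analysis sketch; the synthetic image, the Otsu threshold and the use of scikit-image are assumptions, not the protocol used in the study.

```python
# Toy computational image analysis of stained sperm heads: segment dark objects
# in a grayscale image and report per-object area, perimeter, axis lengths and
# mean staining intensity (scikit-image; synthetic image so it runs standalone).
import numpy as np
from skimage import filters, measure

# synthetic "micrograph": bright background with two darker elliptical heads
img = np.ones((120, 120))
yy, xx = np.mgrid[0:120, 0:120]
img[((yy - 40) / 12) ** 2 + ((xx - 40) / 7) ** 2 <= 1] = 0.3
img[((yy - 85) / 10) ** 2 + ((xx - 80) / 6) ** 2 <= 1] = 0.4

threshold = filters.threshold_otsu(img)
objects = img < threshold                  # stained heads are darker than background
labels = measure.label(objects)

for region in measure.regionprops(labels, intensity_image=img):
    print(region.label, region.area, round(region.perimeter, 1),
          round(region.major_axis_length, 1), round(region.minor_axis_length, 1),
          round(region.mean_intensity, 3))
```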

  13. Raspberry Pi hardware projects 1

    CERN Document Server

    Robinson, Andrew

    2013-01-01

    Learn how to take full advantage of all of Raspberry Pi's amazing features and functions-and have a blast doing it! Congratulations on becoming a proud owner of a Raspberry Pi, the credit-card-sized computer! If you're ready to dive in and start finding out what this amazing little gizmo is really capable of, this ebook is for you. Taken from the forthcoming Raspberry Pi Projects, Raspberry Pi Hardware Projects 1 contains three cool hardware projects that let you have fun with the Raspberry Pi while developing your Raspberry Pi skills. The authors - PiFace inventor, Andrew Robinson and Rasp

  14. Hardware standardization for embedded systems

    International Nuclear Information System (INIS)

    Sharma, M.K.; Kalra, Mohit; Patil, M.B.; Mohanty, Ashutos; Ganesh, G.; Biswas, B.B.

    2010-01-01

    Reactor Control Division (RCnD) has been one of the main designers of safety and safety related systems for power reactors. These systems have been built using in-house developed hardware. Since the present set of hardware was designed long ago, a need was felt to design a new family of hardware boards. A Working Group on Electronics Hardware Standardization (WG-EHS) was formed with an objective to develop a family of boards, which is general purpose enough to meet the requirements of the system designers/end users. RCnD undertook the responsibility of design, fabrication and testing of boards for embedded systems. VME and a proprietary I/O bus were selected as the two system buses. The boards have been designed based on present day technology and components. The intelligence of these boards has been implemented on FPGA/CPLD using VHDL. This paper outlines the various boards that have been developed with a brief description. (author)

  15. Commodity hardware and software summary

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    A review is given of the talks and papers presented in the Commodity Hardware and Software Session at the CHEP97 conference. The trends leading to the consideration of PCs for HEP are examined, and the status of the work being done at various HEP labs and universities is given

  16. Rasgos y clases de la estructura eventiva: Hacia una representación computacional

    Directory of Open Access Journals (Sweden)

    Juan Aparicio

    2012-01-01

    Research currently being carried out in Natural Language Processing is still far from achieving deep levels of language comprehension. In order to build intelligent systems that deal with the representation of meaning, the language-technology field requires broad-coverage semantic resources. The main objective of our research is the establishment of classes for event representation in a computational system. The basic unit of representation is the feature; specifically, we have considered four features: dynamicity, telicity, duration and gradualness. From the combination of these semantic features we have established a set of event classes that allows us to characterize verbal behavior. To establish these classes we have taken into account the possible event-type changes that a verbal unit may undergo depending on context, thus representing the compositionality of event meaning. For this reason we have considered the prototypicality of verb senses and the sensitivity of the different classes to context. The full set of classes is divided into two groups: the simple classes (states, processes and points), whose combination gives rise to the complex classes (culminations, accomplishments and gradual events).
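    A minimal way to picture the feature-based representation described above is to encode the four features and derive a class label from their combination. The particular feature-to-class assignments below are assumptions in the spirit of standard aspectual classifications, not necessarily the authors' mapping.

```python
# Illustrative sketch: event classes as combinations of the four features named
# in the abstract. The mapping below is an assumption, not the authors' own.
from dataclasses import dataclass

@dataclass(frozen=True)
class EventFeatures:
    dynamic: bool
    telic: bool
    durative: bool
    gradual: bool

def classify(f: EventFeatures) -> str:
    if not f.dynamic:
        return "state"
    if not f.telic:
        return "process"
    if not f.durative:
        return "point / culmination"
    return "gradual accomplishment" if f.gradual else "accomplishment"

print(classify(EventFeatures(dynamic=False, telic=False, durative=True, gradual=False)))  # state
print(classify(EventFeatures(dynamic=True, telic=True, durative=True, gradual=True)))     # gradual accomplishment
```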

  17. “HERRAMIENTA COMPUTACIONAL EDUCATIVA PARA EL APRENDIZAJE DE SISTEMAS DIFUSOS”

    Directory of Open Access Journals (Sweden)

    Edwar Jacinto Gómez

    2011-11-01

    This article presents the FuzzyTool software, developed as an educational tool for learning fuzzy systems; it defines the most important fuzzy methods and operations used in the design of fuzzy systems worldwide. FuzzyTool was implemented as an educational computational tool, taking as reference the programs currently available for the development, design and simulation of fuzzy systems and controllers (fuzzyTECH, Matlab, UNFUZZY, etc.). The tool provides the user with a graphical interface that guides the design of fuzzy systems through three fundamental steps: editing the variables, building the rule base or knowledge base, and finally, rule inference.
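    The three steps listed above (variable editing, rule base, inference) can be illustrated with a toy single-input Mamdani-style controller; the membership functions, rules and singleton defuzzification below are illustrative choices, unrelated to FuzzyTool's implementation.

```python
# Minimal sketch of the three steps the abstract describes, on a toy
# single-input/single-output fuzzy controller (illustrative only).
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# 1. variables: input "error" and output "power", each with fuzzy sets
error_sets = {"low": (0, 0, 5), "high": (3, 10, 10)}
power_sets = {"small": 20.0, "large": 80.0}      # output singletons, for simplicity

# 2. rule base: IF error IS low THEN power IS small; IF error IS high THEN power IS large
rules = [("low", "small"), ("high", "large")]

# 3. inference: weighted average of rule consequents (a common defuzzification)
def infer(error_value):
    num = den = 0.0
    for antecedent, consequent in rules:
        w = tri(error_value, *error_sets[antecedent])
        num += w * power_sets[consequent]
        den += w
    return num / den if den else 0.0

print(infer(2.0))   # mostly "low"  -> power near 20
print(infer(8.0))   # mostly "high" -> power near 80
```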

  18. Dinámica de fluidos computacional aplicada al estudio de regeneradores térmicos

    Directory of Open Access Journals (Sweden)

    Cesar Nieto Lodoño

    2004-01-01

    This article presents the results obtained from the simulation of a packed porous-bed thermal regenerator subjected to transient forced convection, together with its experimental verification. To illustrate the application of Computational Fluid Dynamics (CFD) to heat regenerators, a detailed study of the elements that make up the mesh is carried out, analyzing their distribution, size and respective effect on the accuracy of the results. The simplifications and scope of the models employed are established. The validity of the results obtained is verified through their experimental validation on a physical model identical to the one used in the simulation. These steps showed that the exponential behavior of the temperature in the packed elements during the heating period was identical to that observed by Mejía [8]. The results obtained here confirm the capability of CFD for the study of thermal regenerators.

  19. BIOLOGICALLY INSPIRED HARDWARE CELL ARCHITECTURE

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed is a system comprising: - a reconfigurable hardware platform; - a plurality of hardware units defined as cells adapted to be programmed to provide self-organization and self-maintenance of the system by means of implementing a program expressed in a programming language defined as DNA language, where each cell is adapted to communicate with one or more other cells in the system, and where the system further comprises a converter program adapted to convert keywords from the DNA language to a binary DNA code; where the self-organisation comprises that the DNA code is transmitted to one or more of the cells, and each of the one or more cells is adapted to determine its function in the system; where if a fault occurs in a first cell and the first cell ceases to perform its function, self-maintenance is performed by that the system transmits information to the cells that the first cell has...

  20. Hardware-Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S.; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester
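    In the absorption-only regime described above, a simulated radiograph reduces to line integrals of attenuation through the density field (Beer-Lambert). The sketch below does this on a regular grid with parallel rays on the CPU, purely to show the structure; the hardware-accelerated algorithms in the paper work on curvilinear hexahedral and tetrahedral meshes.

```python
# Illustrative absorption-only radiograph: integrate attenuation along parallel
# rays through a 3D density grid (Beer-Lambert). All values are dummies; this is
# a simplified CPU analogue of the GPU projection algorithms in the abstract.
import numpy as np

nx, ny, nz = 64, 64, 64
density = np.zeros((nx, ny, nz))
density[20:44, 20:44, 20:44] = 2.0           # a dense cube inside vacuum
mu = 0.05                                     # attenuation coefficient per unit density (dummy)
dz = 0.1                                      # path length per voxel step (dummy)

# parallel rays along z: optical depth = sum of mu * density * dz along the ray
optical_depth = mu * density.sum(axis=2) * dz
radiograph = np.exp(-optical_depth)           # transmitted intensity I/I0 per detector pixel

print(radiograph.shape, radiograph.min(), radiograph.max())
```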

  1. The principles of computer hardware

    CERN Document Server

    Clements, Alan

    2000-01-01

    Principles of Computer Hardware, now in its third edition, provides a first course in computer architecture or computer organization for undergraduates. The book covers the core topics of such a course, including Boolean algebra and logic design; number bases and binary arithmetic; the CPU; assembly language; memory systems; and input/output methods and devices. It then goes on to cover the related topics of computer peripherals such as printers; the hardware aspects of the operating system; and data communications, and hence provides a broader overview of the subject. Its readable, tutorial-based approach makes it an accessible introduction to the subject. The book has extensive in-depth coverage of two microprocessors, one of which (the 68000) is widely used in education. All chapters in the new edition have been updated. Major updates include: powerful software simulations of digital systems to accompany the chapters on digital design; a tutorial-based introduction to assembly language, including many exam...

  2. La Neurociencia Computacional hoy: I. Qué es y por qué es difícil su estudio

    Directory of Open Access Journals (Sweden)

    Jesús Cortés

    2009-01-01

    Computational Neuroscience is a consolidated discipline, with more than 20 years of development, that uses very diverse techniques to understand different brain computations. It is briefly introduced here through two articles. The first, "What it is and why it is hard to study", gives a very general introduction to its objectives as a science and the problems it faces. The second, "A very representative example in the field", addresses its methodology and highlights the significance that Computational Neuroscience is having, and will have, within the Neurosciences.

  3. Hunting for hardware changes in data centres

    International Nuclear Information System (INIS)

    Coelho dos Santos, M; Steers, I; Szebenyi, I; Xafi, A; Barring, O; Bonfillou, E

    2012-01-01

    With many servers and server parts, the environment of warehouse-sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better, a project codenamed "hardware hound", focusing on hardware failure trending and hardware inventory, has been started at CERN. By creating and using a hardware-oriented data set - the inventory - with detailed information on servers and their parts, as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  4. Simulação do comportamento mecânico de misturas asfálticas usando um modelo computacional multi-escala

    Directory of Open Access Journals (Sweden)

    Flávio Vasconcelos de Souza

    2009-10-01

    Because asphalt mixtures are heterogeneous materials, their global behavior depends on the behavior of the individual constituents, their volume fractions and the physicochemical interactions between the constituents, among other factors. Therefore, to better understand the behavior of these materials, methodologies capable of accounting for the characteristics and phenomena occurring at the smaller scales are needed. One methodology that has been widely studied and applied in the international scientific community is the so-called multiscale model. The objective of this work is to describe a multiscale computational model and apply it to the simulation of tests commonly used for asphalt mixtures, namely the indirect tensile (diametral compression) test and the beam flexural fatigue test. For the diametral compression case, the numerical results agreed with the experimentally observed results. For the cyclic loading case, no comparison with experiments was made, but the numerical results show the model's ability to qualitatively simulate fatigue cracking and the accumulation of permanent deformation.

  5. Desenvolvimento de um sistema computacional para dimensionamento e evolução de rebanhos bovinos Development of software for dimensioning and evolution of bovine herds

    Directory of Open Access Journals (Sweden)

    Marcos Aurélio Lopes

    2000-10-01

    The objectives of this study were to develop software that performs both the dimensioning and the evolution of cattle herds, and to create a tool that allows the user to run simulations of milk and/or beef production systems. The CA-Clipper language was used. The routines were developed in a conversational form, with access to the various programs through self-explanatory menus. The developed system can help both the technician and the farmer to dimension and evolve a cattle herd with precision and considerable speed; it allows the user to perform numerous simulations; and it constitutes an important decision-support tool.

  6. Remodelagem do sistema computacional para dosimetria em radioterapia por nêutrons e fótons baseado em métodos estocásticos - SISCODES

    OpenAIRE

    Bruno Machado Trindade

    2011-01-01

    This work presents the remodeling of the Computational System for Dosimetry in Neutron and Photon Radiotherapy Based on Stochastic Methods (SISCODES). To that end, it shows the initial proposal and previous state of the system, the modifications and expansions proposed and carried out, and the current state of development of the system. Future improvements are proposed at the end of the work. SISCODES is a system that allows 3D computational treatment planning in radiotherapy to be carried out through ...

  7. Necesidad de un Centro Nacional de Bioinformática y Biología Computacional para Colombia

    Directory of Open Access Journals (Sweden)

    Lyda Raquel Castro

    2010-01-01

    The main and most revolutionary advances in biology in this century have derived from information coming from the complete genomes of different organisms. The discoveries arising from genomics are generating a new paradigm in biology, replacing the gene-centered era of biology with one centered on genomes. This new concept is the basis for developments of great potential and social impact in different areas such as medicine, agriculture and industry. The success in developing state-of-the-art methods for genome sequencing, proteomics and all the "omics" has contributed to the emergence of new possibilities for analyzing the enormous amount of data being generated by means of computational tools, giving rise to a new branch of study known as bioinformatics or computational biology. This paper gives a general review of the development of bioinformatics and computational biology in Colombia. Initially, for comparison, we describe the development of this science in other Latin American countries that are recognized in the area. Finally, we discuss the main aspects that will play an important role in the future of this science in our country and that also justify the need to create a national center for bioinformatics and computational biology.

  8. Qualification of software and hardware

    International Nuclear Information System (INIS)

    Gossner, S.; Schueller, H.; Gloee, G.

    1987-01-01

    The qualification of on-line process control equipment is subdivided into three areas: 1) materials and structural elements; 2) on-line process-control components and devices; 3) electrical systems (reactor protection and confinement system). Microprocessor-aided process-control equipment is difficult to verify for failure-free function owing to the complexity of the functional structures of the hardware and to the variety of software feasible for microprocessors. Hence, qualification will make great demands on the inspecting expert. (DG) [de

  9. Door Hardware and Installations; Carpentry: 901894.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    The curriculum guide outlines a course designed to provide instruction in the selection, preparation, and installation of hardware for door assemblies. The course is divided into five blocks of instruction (introduction to doors and hardware, door hardware, exterior doors and jambs, interior doors and jambs, and a quinmester post-test) totaling…

  10. SIMULAÇÃO COMPUTACIONAL PARA PRODUÇÃO DE PASTA DIAMANTADA

    Directory of Open Access Journals (Sweden)

    Elaine Cristina Gonçalves Moreira

    2011-06-01

    This work aims to analyze the diamond paste production process, evaluating the number of operators and machines, production times, activity allocation, and other important parameters for assessing the system dynamics and its operating rules. The method used is based on stochastic discrete-event computational simulation, given the various sources of uncertainty and the operational complexity of the diamond paste production process. The conceptual model of the system was built with the IDEF-SIM technique and translated into the Arena® 12 Rockwell Automation software. The simulation model made it possible to represent several scenarios with considerable speed and flexibility, which were necessary for the start-up of the ABRASDI company. This method allowed problems and opportunities for improvement in the process to be identified before the production lines started. The main performance measures evaluated were the operator utilization rate and the process lead time, with the cost of the diamond paste, product quality and total production time as constraints. The results of the analyses show that some scenarios can be considered ideal, depending on the company's needs, since considerable gains can be obtained with a few parameter changes.
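    A stochastic discrete-event model of this kind can be sketched in a few lines; the example below uses SimPy (not IDEF-SIM/Arena, which the study used) with dummy arrival and service times, and reports the two performance measures mentioned above: operator utilization and lead time.

```python
# Minimal sketch of a stochastic discrete-event model of a one-station
# production line (illustrative only; parameter values are dummies).
import random
import simpy

RANDOM_SEED, N_JOBS = 42, 200
ARRIVAL_MEAN, SERVICE_MEAN = 10.0, 8.0     # minutes, assumed values
lead_times, busy_time = [], 0.0

def job(env, machine):
    global busy_time
    arrival = env.now
    with machine.request() as req:
        yield req
        service = random.expovariate(1.0 / SERVICE_MEAN)
        busy_time += service
        yield env.timeout(service)
    lead_times.append(env.now - arrival)

def source(env, machine):
    for _ in range(N_JOBS):
        env.process(job(env, machine))
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))

random.seed(RANDOM_SEED)
env = simpy.Environment()
machine = simpy.Resource(env, capacity=1)   # one operator/machine
env.process(source(env, machine))
env.run()

print(f"mean lead time: {sum(lead_times) / len(lead_times):.1f} min")
print(f"operator utilization: {busy_time / env.now:.0%}")
```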

  11. Travel Software using GPU Hardware

    CERN Document Server

    Szalwinski, Chris M; Dimov, Veliko Atanasov; CERN. Geneva. ATS Department

    2015-01-01

    Travel is the main multi-particle tracking code being used at CERN for the beam dynamics calculations through hadron and ion linear accelerators. It uses two routines for the calculation of space charge forces, namely, rings of charges and point-to-point. This report presents the studies to improve the performance of Travel using GPU hardware. The studies showed that the performance of Travel with the point-to-point simulations of space-charge effects can be speeded up at least 72 times using current GPU hardware. Simple recompilation of the source code using an Intel compiler can improve performance at least 4 times without GPU support. The limited memory of the GPU is the bottleneck. Two algorithms were investigated on this point: repeated computation and tiling. The repeating computation algorithm is simpler and is the currently recommended solution. The tiling algorithm was more complicated and degraded performance. Both build and test instructions for the parallelized version of the software are inclu...
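    The point-to-point space-charge routine mentioned above is an O(N²) pairwise calculation; the sketch below shows its structure with NumPy on the CPU using dummy values, whereas the report's speed-ups come from mapping exactly this kind of loop onto GPU hardware.

```python
# Illustrative O(N^2) point-to-point space-charge force calculation, the kind
# of pairwise computation the report offloads to a GPU. Values are dummies.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
pos = rng.normal(scale=1e-3, size=(N, 3))   # macro-particle positions [m]
q = 1.602e-19                                # charge per macro-particle [C]
k = 8.988e9                                  # Coulomb constant [N m^2 / C^2]
eps = 1e-6                                   # softening length to avoid divide-by-zero

def space_charge_forces(pos):
    diff = pos[:, None, :] - pos[None, :, :]           # (N, N, 3) pairwise separations
    r2 = np.sum(diff**2, axis=-1) + eps**2
    np.fill_diagonal(r2, np.inf)                        # no self-force
    return k * q * q * np.sum(diff / r2[..., None]**1.5, axis=1)

F = space_charge_forces(pos)
print(F.shape, np.abs(F).max())
```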

  12. Identificação dos parâmetros de design de dutos de luz solar através do emprego da simulação computacional

    Directory of Open Access Journals (Sweden)

    Gandhi Escajadillo Toledo

    2013-12-01

    Solar light ducts can reduce the energy consumed for lighting and also improve visual comfort in indoor spaces. To predict the luminous performance of these systems, the use of computational simulations, such as the TropLux program, is recommended. This article aims to determine the design parameters for solar light ducts through computational simulation. The method applied in this work was divided into two stages. In the first stage, the luminous performance of three virtual models of solar light ducts with identical optical characteristics but different geometries was compared, in order to identify the most efficient duct geometry. In the second stage, the performance of the chosen light duct was simulated, taking as reference the CIE (International Commission on Illumination) specifications for sky types and solar incidence. The simulation results were then used to identify the main design parameters for solar light ducts.

  14. N-gramas sintácticos y su uso en la lingüistica computacional

    Directory of Open Access Journals (Sweden)

    Grigori Sidorov

    2013-05-01

    In this article we introduce a new concept to be used in computational linguistics, called syntactic n-grams: n-grams that are built by following a syntactic tree. This is equivalent to introducing syntactic information into machine learning methods, which has always been a very difficult problem. We discuss the elements that can form these n-grams: words, grammatical classes (POS tags), names of syntactic relations, and characters. We consider two examples of how syntactic n-grams can be obtained from a syntactic tree, for both Spanish and English. In addition, we present one of the most widely used models for solving computational linguistics problems, namely the vector space model.
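    The difference from ordinary n-grams is that syntactic n-grams follow head-to-dependent paths in the tree rather than linear word order; the sketch below extracts them from a toy dependency tree given as a head array. The sentence and head indices are illustrative assumptions.

```python
# Minimal sketch of extracting syntactic n-grams (word n-grams that follow
# head -> dependent paths in a dependency tree) rather than linear n-grams.
words = ["the", "dog", "chased", "the", "cat"]
heads = [1, 2, -1, 4, 2]    # head index of each word; -1 marks the root ("chased")

# build children lists from the head array
children = {i: [] for i in range(len(words))}
for i, h in enumerate(heads):
    if h >= 0:
        children[h].append(i)

def syntactic_ngrams(node, n, path=()):
    """All head-to-dependent paths of length n starting at `node`."""
    path = path + (words[node],)
    grams = [path] if len(path) == n else []
    if len(path) < n:
        for child in children[node]:
            grams.extend(syntactic_ngrams(child, n, path))
    return grams

root = heads.index(-1)
print(syntactic_ngrams(root, 3))
# [('chased', 'dog', 'the'), ('chased', 'cat', 'the')]
```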

  15. Modelo computacional para manejo da fertirrigação em sistemas de microirrigação

    Directory of Open Access Journals (Sweden)

    Alex Nunes de Almeida

    2016-04-01

    Fertigation is one of the means used to apply fertilizers through irrigation water, but if the necessary care is not taken in the fertigation calculations, the calculated amount of these inputs may turn out to be lower or higher than required, thereby compromising the crop and/or the soil. The objective of this work was to develop a computational application to calculate the amount of fertilizer to be placed in a fertigation tank and the number of tanks needed for proper fertigation management. The application was written in Visual Basic using Visual Studio Community 2015 as the development tool. The computational model developed uses a set of user-supplied variables that make it possible to obtain consistent results. The calculation methodology used by the application emphasizes making the most of the inputs, enabling resource savings, and it is simple and self-instructive for the user, thus allowing better use of its functions.

  16. Hardware Support for Dynamic Languages

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; Karlsson, Sven; Probst, Christian W.

    2011-01-01

    In recent years, dynamic programming languages have enjoyed increasing popularity. For example, JavaScript has become one of the most popular programming languages on the web. As the complexity of web applications is growing, compute-intensive workloads are increasingly handed off to the client side. While a lot of effort is put in increasing the performance of web browsers, we aim for multicore systems with dedicated cores to effectively support dynamic languages. We have designed Tinuso, a highly flexible core for experimentation that is optimized for high performance when implemented on FPGA. We composed a scalable multicore configuration where we study how hardware support for software speculation can be used to increase the performance of dynamic languages.

  17. Modelo computacional para la liofilización de alimentos de geometría finita

    Directory of Open Access Journals (Sweden)

    Héctor E. Gómez H.

    2003-01-01

    Freeze-drying (lyophilization) is a preservation technique based on dehydration, applied to chemical, pharmaceutical, medical, biological and food products. The process is also called cryodesiccation because it consists of first freezing a moist product and then vaporizing the ice directly at low pressure. This phenomenon is known as sublimation and was already practiced by the Incas of Peru, as early as the thirteenth century, to preserve potatoes. Freeze-dried products, unlike those dehydrated by other drying techniques, retain practically 100% of their natural shape and properties, have a longer shelf life and are easily rehydratable. In the present work, freeze-drying is studied experimentally and characterized by means of a mathematical and computational model. For this purpose a vegetable, the potato Solanum tuberosum (white variety), was used as the material for modeling contact freeze-drying. Individual samples were prepared in the form of slabs and finite cylinders at three different thicknesses. Two vacuum pressures and three heating temperatures were used. In this way 36 dehydration and product-temperature kinetics were obtained in a monolayer, with 8 replicates each. A freeze-drying model is proposed that considers three sublimation fronts that recede uniformly but at interdependent rates. The study relied on Fick's and Fourier's laws in a quasi-steady state, considering collapse of the product negligible. The temperatures of each sublimation front are treated as variables. The resulting dynamic model is a system of three nonlinear algebraic equations and three ordinary differential equations. The coupled Runge-Kutta and Newton-Raphson algorithms were used to solve the differential-algebraic system. For the estimation of the transport parameters or coefficients

  18. Constructing Hardware in a Scale Embedded Language

    Energy Technology Data Exchange (ETDEWEB)

    2014-08-21

    Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.

  19. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  20. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  1. Desarrollo y validación experimental de un modelo computacional de pilas de combustible tipo PEM y su aplicación al análisis de monoceldas

    OpenAIRE

    Iranzo Paricio, Alfredo

    2010-01-01

    The fundamental objective of this doctoral thesis is the development of a computational model for PEM fuel cells that represents an advance over the current state of fuel cell modeling. ... * Development of a model

  2. Modelo computacional para suporte à decisão em áreas irrigadas. Parte I: Desenvolvimento e análise de sensibilidade Computer model for decision support on irrigated areas Part I: Development and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    João C. F. Borges Júnior

    2008-02-01

    This paper refers to the development of a decision support model for planning and managing irrigation and/or drainage schemes. The computer model, called MCID, is applicable at the production-unit level, generating information on how different irrigation management practices and drainage designs affect crop yield and financial return. This information may be applied in studies of optimizing the cropping pattern at farm level, with respect to financial return and water use, associated with risk analysis based on simulations. The water and salt balances in the root zone, as well as the water table depth and drain discharge predictions, are carried out on a daily basis. The sensitivity analysis indicated that the input parameters with the greatest influence on the seasonal irrigation requirement were the drain spacing, drainable porosity, curve number, horizontal saturated hydraulic conductivity of the soil, depth of the impermeable layer, and the n and alpha parameters of the van Genuchten-Mualem model.
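    The daily root-zone water balance at the core of such a model can be sketched as a simple storage update with an irrigation trigger; the parameter values, the refill rule and the dummy rainfall series below are assumptions for illustration, not MCID's actual formulation.

```python
# Minimal sketch of a daily root-zone water balance with an irrigation trigger
# (illustrative only; all parameter values and the irrigation rule are assumed).
field_capacity, wilting_point = 120.0, 60.0   # storage limits [mm]
trigger = 0.5                                  # irrigate when 50% of available water is depleted

def daily_balance(storage, rain, et, drainage):
    """Update root-zone storage [mm] and return (new_storage, irrigation)."""
    irrigation = 0.0
    depletion = (field_capacity - storage) / (field_capacity - wilting_point)
    if depletion >= trigger:
        irrigation = field_capacity - storage          # refill to field capacity
    storage = storage + rain + irrigation - et - drainage
    return min(max(storage, wilting_point), field_capacity), irrigation

storage, season_irrigation = 120.0, 0.0
rain = [0, 0, 5, 0, 0, 12, 0, 0, 0, 0]                 # mm/day, dummy series
for r in rain:
    storage, irr = daily_balance(storage, rain=r, et=6.0, drainage=1.0)
    season_irrigation += irr
print(f"seasonal irrigation requirement: {season_irrigation:.0f} mm")
```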

  3. O pensamento computacional e a formação continuada de professores: uma experiência com as TICs

    Directory of Open Access Journals (Sweden)

    Louise Alessandra Santos do Carmo Paz

    2018-03-01

    Full Text Available The new information and communication technologies (ICTs) are part of students' everyday lives, but not always of teachers'. For the teacher's role to shift from producer to mediator of knowledge, new teaching competencies must be developed, notably computational thinking. This article reports the experience of an introductory course on the new ICTs, offered as continuing education for teachers, using a methodology based on the andragogical model that places them as collaborators and creators of their own knowledge, co-responsible for the direction of their teaching and learning process.

  4. Fatores de risco para doença arterial coronariana em idosos: análise por enfermeiros utilizando ferramenta computacional Factores de riesgo para enfermedad arterial coronaria en ancianos: análisis por enfermeras utilizando herramienta computacional Risk factors for coronary artery disease in the elderly: analysis by nurses using computational tool

    Directory of Open Access Journals (Sweden)

    Silvia Sidnéia da Silva

    2010-12-01

    Full Text Available This study aimed to analyze the occurrence of risk factors for coronary artery disease in an elderly population participating in a community action, using a computational tool operated by nurses. The work used a database collected at a community event. The information refers to risk factors, anthropometric data, measurements of blood glucose, cholesterol and blood pressure, occurrence of heart disease, among others. The multidimensional structure was built and managed with the Analysis Services tool. The elderly corresponded to 40.4% of the total population; one third of this group had altered systemic blood pressure values, 53.8% had a body mass index above 25 kg/m², 40.3% reported hypertension and 20.3% diabetes mellitus. It is concluded that controlling risk factors for coronary artery disease in elderly clients is essential and that information technology can support strategic decision-making in health promotion.

  5. Computer hardware description languages - A tutorial

    Science.gov (United States)

    Shiva, S. G.

    1979-01-01

    The paper introduces hardware description languages (HDL) as useful tools for hardware design and documentation. The capabilities and limitations of HDLs are discussed along with the guidelines needed in selecting an appropriate HDL. The directions for future work are provided and attention is given to the implementation of HDLs in microcomputers.

  6. An evaluation of Skylab habitability hardware

    Science.gov (United States)

    Stokes, J.

    1974-01-01

    For effective mission performance, participants in space missions lasting 30-60 days or longer must be provided with hardware to accommodate their personal needs. Such habitability hardware was provided on Skylab. Equipment defined as habitability hardware was that equipment composing the food system, water system, sleep system, waste management system, personal hygiene system, trash management system, and entertainment equipment. Equipment not specifically defined as habitability hardware but which served that function were the Wardroom window, the exercise equipment, and the intercom system, which was occasionally used for private communications. All Skylab habitability hardware generally functioned as intended for the three missions, and most items could be considered as adequate concepts for future flights of similar duration. Specific components were criticized for their shortcomings.

  7. Comparative Modal Analysis of Sieve Hardware Designs

    Science.gov (United States)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.

  8. Victimización y “ondas de choque”: simulación computacional de la propagación del miedo al crimen

    Directory of Open Access Journals (Sweden)

    Manuel Chacón-Mateos

    2017-04-01

    Full Text Available A computational model of the generation and propagation of fear of crime is presented, based on the impacts created by criminal victimization. The objective is to evaluate and describe the effects that a set of determinants related to victimization (direct and indirect) can have on the propagation of fear of crime: social network size, victimization rate, and recovery time among victims. The method used was computational simulation, working with four combinations of parameters that represent different situations. The results describe the interaction dynamics among the determinants considered and show that they influence the propagation of fear of crime in a non-linear way.
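
    A minimal sketch of this kind of agent-based simulation can make the role of the three determinants concrete. The mechanism below (victims and their immediate contacts on a ring-shaped social network become fearful, then recover after a fixed number of steps) is an illustrative reading of the abstract, not the authors' actual model, and all parameter values are made up.

```python
# Illustrative agent-based sketch of fear-of-crime propagation on a ring network.
import random

def simulate(n_agents=500, network_size=6, victimization_rate=0.01,
             recovery_time=20, steps=200, seed=1):
    random.seed(seed)
    fear_timer = [0] * n_agents            # remaining steps of fear per agent
    fearful_share = []
    for _ in range(steps):
        for i in range(n_agents):
            if random.random() < victimization_rate:       # direct victimization
                fear_timer[i] = recovery_time
                for k in range(1, network_size // 2 + 1):  # indirect: social contacts
                    fear_timer[(i - k) % n_agents] = recovery_time
                    fear_timer[(i + k) % n_agents] = recovery_time
        fear_timer = [max(0, t - 1) for t in fear_timer]   # recovery
        fearful_share.append(sum(t > 0 for t in fear_timer) / n_agents)
    return fearful_share

shares = simulate()
print(f"mean fearful share over the run: {sum(shares) / len(shares):.2f}")
```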

  9. Transmission delays in hardware clock synchronization

    Science.gov (United States)

    Shin, Kang G.; Ramanathan, P.

    1988-01-01

    Various methods, both with software and hardware, have been proposed to synchronize a set of physical clocks in a system. Software methods are very flexible and economical but suffer an excessive time overhead, whereas hardware methods require no time overhead but are unable to handle transmission delays in clock signals. The effects of nonzero transmission delays in synchronization have been studied extensively in the communication area in the absence of malicious or Byzantine faults. The authors show that it is easy to incorporate the ideas from the communication area into the existing hardware clock synchronization algorithms to take into account the presence of both malicious faults and nonzero transmission delays.
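
    The delay problem itself can be illustrated with the standard round-trip offset estimate used in software synchronization (NTP-style). This shows only the delay-compensation idea, not the fault-tolerant hardware algorithm the authors propose, and the timestamps below are invented.

```python
# Round-trip offset estimate assuming a symmetric transmission delay (NTP-style).

def estimate_offset(t1, t2, t3, t4):
    """t1/t4: request sent / reply received (local clock); t2/t3: request received / reply sent (remote clock)."""
    delay = (t4 - t1) - (t3 - t2)          # total network transit time
    offset = ((t2 - t1) + (t3 - t4)) / 2   # remote minus local, assuming symmetric delay
    return offset, delay

# Example: remote clock ~5 ms ahead, one-way delay ~2 ms
offset, delay = estimate_offset(t1=100.0, t2=107.0, t3=107.5, t4=104.5)
print(f"estimated offset: {offset:.2f} ms, round-trip delay: {delay:.2f} ms")
```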

  10. Computational modeling for irrigated agriculture planning. Part II: risk analysis Modelagem computacional para planejamento em agricultura irrigada: Parte II - Análise de risco

    Directory of Open Access Journals (Sweden)

    João C. F. Borges Júnior

    2008-09-01

    Full Text Available Techniques for evaluating the risks arising from the uncertainties inherent to agricultural activity should accompany planning studies. The risk analysis should be carried out by risk simulation, using techniques such as the Monte Carlo method. This study was carried out to develop a computer program, called P-RISCO, for applying risk simulations to linear programming models, to apply it to a case study, and to test the results against the @RISK program. In the risk analysis it was observed that the mean of the output variable, total net present value (U), was considerably lower than the maximum U value obtained from the linear programming model. It was also verified that the enterprise faces a considerable risk of water shortage in the month of April, which does not happen for the cropping pattern obtained by minimizing the irrigation requirement in April over the four years. The scenario analysis indicated that the sale price of the passion fruit crop has a strong influence on the financial performance of the enterprise. The comparative analysis verified the equivalence of the P-RISCO and @RISK programs in executing the risk simulation for the considered scenario.
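
    A minimal sketch of the risk-simulation idea: the financial outcome of a fixed plan is re-evaluated under repeated random draws of an uncertain input (here the sale price, which the scenario analysis identified as influential). The distribution, the toy net-present-value function and all numbers are assumptions for illustration; this is not P-RISCO's model.

```python
# Monte Carlo sketch of net-present-value risk under an uncertain sale price.
import random
import statistics

def npv(price_per_kg, yield_kg=20000, cost=25000, discount=0.10, years=4):
    annual_margin = price_per_kg * yield_kg - cost
    return sum(annual_margin / (1 + discount) ** t for t in range(1, years + 1))

random.seed(42)
draws = [npv(random.triangular(1.0, 2.2, 1.5)) for _ in range(10000)]  # uncertain price per kg

print(f"mean NPV      : {statistics.mean(draws):12,.0f}")
print(f"5th percentile: {sorted(draws)[len(draws) // 20]:12,.0f}")
print(f"P(NPV < 0)    : {sum(d < 0 for d in draws) / len(draws):.2%}")
```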

  11. UN PROGRAMA PARA CALCULAR LAS REPRESENTACIONES IRREDUCIBLES DE SN, EN LA FORMA SEMINORMAL DE YOUNG 1 MATEMÁTICA COMPUTACIONAL COMO APOYO A LA DOCENCIA

    Directory of Open Access Journals (Sweden)

    Álvaro Duque S.J.

    2002-06-01

    Full Text Available The matrices of the irreducible representations of a group G are used to compute the Generalized Fourier Transform of a function defined on G. There are many other applications of the irreducible representations of a group. We developed software that computes the matrices of the irreducible representations of the symmetric group in Young's seminormal form. This program runs in the CoCoA computer algebra system.
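
    For reference, the transform these matrices feed into can be stated in standard notation (this formulation is textbook material, not quoted from the record): the generalized Fourier transform of a function f on a finite group G, and its inversion, are

```latex
\hat{f}(\rho) = \sum_{g \in G} f(g)\,\rho(g),
\qquad
f(g) = \frac{1}{|G|} \sum_{\rho} d_\rho \,
       \operatorname{tr}\!\left(\rho(g^{-1})\,\hat{f}(\rho)\right),
```

    where the sum in the inversion formula runs over a complete set of irreducible representations of G (for G = S_N, the matrices computed by the program in Young's seminormal form) and d_rho is the dimension of the representation rho.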

  12. Hardware-in-the-Loop Testing

    Data.gov (United States)

    Federal Laboratory Consortium — RTC has a suite of Hardware-in-the Loop facilities that include three operational facilities that provide performance assessment and production acceptance testing of...

  13. Hardware device binding and mutual authentication

    Science.gov (United States)

    Hamlet, Jason R; Pierson, Lyndon G

    2014-03-04

    Detection and deterrence of device tampering and subversion by substitution may be achieved by including a cryptographic unit within a computing device for binding multiple hardware devices and mutually authenticating the devices. The cryptographic unit includes a physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates a binding PUF value. The cryptographic unit uses the binding PUF value during an enrollment phase and subsequent authentication phases. During a subsequent authentication phase, the cryptographic unit uses the binding PUF values of the multiple hardware devices to generate a challenge to send to the other device, and to verify a challenge received from the other device to mutually authenticate the hardware devices.
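
    The enrollment/authentication flow can be sketched in a few lines. A real PUF derives its challenge-response behavior from device physics; here it is simulated with an HMAC keyed by a per-device secret, purely to show the protocol shape. The class, message layout and variable names are assumptions, not the patented design.

```python
# Sketch of PUF-style challenge-response mutual authentication (PUF simulated with HMAC).
import hmac, hashlib, os

class Device:
    def __init__(self, name):
        self.name = name
        self._puf_secret = os.urandom(32)            # stand-in for the physical PUF

    def puf_response(self, challenge: bytes) -> bytes:
        return hmac.new(self._puf_secret, challenge, hashlib.sha256).digest()

    def enroll(self, challenge: bytes) -> bytes:     # trusted enrollment phase
        return self.puf_response(challenge)

a, b = Device("A"), Device("B")
chal_for_b, chal_for_a = os.urandom(16), os.urandom(16)
a_table = {chal_for_b: b.enroll(chal_for_b)}         # A records what B should answer
b_table = {chal_for_a: a.enroll(chal_for_a)}         # B records what A should answer

# Authentication phase: each side issues its stored challenge and verifies the reply.
ok_ab = hmac.compare_digest(b.puf_response(chal_for_b), a_table[chal_for_b])
ok_ba = hmac.compare_digest(a.puf_response(chal_for_a), b_table[chal_for_a])
print("mutual authentication:", ok_ab and ok_ba)
```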

  14. Implementation of Hardware Accelerators on Zynq

    DEFF Research Database (Denmark)

    Toft, Jakob Kenn

    In recent years it has become obvious that the performance of general purpose processors is having trouble meeting the requirements of today's high performance computing applications. This is partly due to the relatively high power consumption, compared to the performance, of general purpose processors, which has made hardware accelerators an essential part of several datacentres and the world's fastest super-computers. In this work, two different hardware accelerators were implemented on a Xilinx Zynq SoC platform mounted on the ZedBoard. The two accelerators are based on two different ... of the ARM Cortex-A9 processor featured on the Zynq SoC, with regard to execution time, power dissipation and energy consumption. The implementation of the hardware accelerators was successful. Use of the Monte Carlo processor resulted in a significant increase in performance. The Telco hardware accelerator ...

  15. Cooperative communications hardware, channel and PHY

    CERN Document Server

    Dohler, Mischa

    2010-01-01

    Facilitating Cooperation for Wireless Systems Cooperative Communications: Hardware, Channel & PHY focuses on issues pertaining to the PHY layer of wireless communication networks, offering a rigorous taxonomy of this dispersed field, along with a range of application scenarios for cooperative and distributed schemes, demonstrating how these techniques can be employed. The authors discuss hardware, complexity and power consumption issues, which are vital for understanding what can be realized at the PHY layer, showing how wireless channel models differ from more traditional

  16. Designing Secure Systems on Reconfigurable Hardware

    OpenAIRE

    Huffmire, Ted; Brotherton, Brett; Callegari, Nick; Valamehr, Jonathan; White, Jeff; Kastner, Ryan; Sherwood, Ted

    2008-01-01

    The extremely high cost of custom ASIC fabrication makes FPGAs an attractive alternative for deployment of custom hardware. Embedded systems based on reconfigurable hardware integrate many functions onto a single device. Since embedded designers often have no choice but to use soft IP cores obtained from third parties, the cores operate at different trust levels, resulting in mixed trust designs. The goal of this project is to evaluate recently proposed security primitives for reconfigurab...

  17. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. It describes the step-by-step process of image data being received at LLNL, then being processed and made available to authorized personnel and collaborators. Throughout this document, references will be made to one of two figures: Fig. 1 describing the elements of the architecture and Fig. 2 describing the workflow and how the project utilizes the available hardware.

  18. Software for Managing Inventory of Flight Hardware

    Science.gov (United States)

    Salisbury, John; Savage, Scott; Thomas, Shirman

    2003-01-01

    The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs in purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, informing of the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.

  19. VEG-01: Veggie Hardware Verification Testing

    Science.gov (United States)

    Massa, Gioia; Newsham, Gary; Hummerick, Mary; Morrow, Robert; Wheeler, Raymond

    2013-01-01

    The Veggie plant/vegetable production system is scheduled to fly on ISS at the end of 2013. Since much of the technology associated with Veggie has not been previously tested in microgravity, a hardware validation flight was initiated. This test will allow data to be collected about Veggie hardware functionality on ISS, allow crew interactions to be vetted for future improvements, validate the ability of the hardware to grow and sustain plants, and collect data that will be helpful to future Veggie investigators as they develop their payloads. Additionally, food safety data on the lettuce plants grown will be collected to help support the development of a pathway for the crew to safely consume produce grown on orbit. Significant background research has been performed on the Veggie plant growth system, with early tests focusing on the development of the rooting pillow concept, and the selection of fertilizer, rooting medium and plant species. More recent testing has been conducted to integrate the pillow concept into the Veggie hardware and to ensure that adequate water is provided throughout the growth cycle. Seed sanitation protocols have been established for flight, and hardware sanitation between experiments has been studied. Methods for shipping and storage of rooting pillows and the development of crew procedures and crew training videos for plant activities on-orbit have been established. Science verification testing was conducted and lettuce plants were successfully grown in prototype Veggie hardware, microbial samples were taken, plants were harvested, frozen, stored and later analyzed for microbial growth, nutrients, and ATP levels. An additional verification test, prior to the final payload verification testing, is desired to demonstrate similar growth in the flight hardware and also to test a second set of pillows containing zinnia seeds. Issues with root mat water supply are being resolved, with final testing and flight scheduled for later in 2013.

  20. From Open Source Software to Open Source Hardware

    OpenAIRE

    Viseur , Robert

    2012-01-01

    Part 2: Lightning Talks; International audience; The open source software principles progressively give rise to new initiatives for culture (free culture), data (open data) or hardware (open hardware). Open hardware is experiencing significant growth, but the business models and legal aspects are not well known. This paper is dedicated to the economics of open hardware. We define the open hardware concept and determine which intellectual property tools we can apply to open hardware, with a str...

  1. Flight Hardware Virtualization for On-Board Science Data Processing

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  2. Non-fuel bearing hardware melting technology

    International Nuclear Information System (INIS)

    Newman, D.F.

    1993-01-01

    Battelle has developed a portable hardware melter concept that would allow spent fuel rod consolidation operations at commercial nuclear power plants to provide significantly more storage space for other spent fuel assemblies in existing pool racks at lower cost. Using low pressure compaction, the non-fuel bearing hardware (NFBH) left over from the removal of spent fuel rods from the stainless steel end fittings and the Zircaloy guide tubes and grid spacers still occupies 1/3 to 2/5 of the volume of the consolidated fuel rod assemblies. Melting the non-fuel bearing hardware reduces its volume by a factor of 4 from that achievable with low-pressure compaction. This paper describes: (1) the configuration and design features of Battelle's hardware melter system that permit its portability, (2) the system's throughput capacity, (3) the bases for capital and operating estimates, and (4) the status of the NFBH melter demonstration to reduce technical risks for implementation of the concept. Since all NFBH handling and processing operations would be conducted at the reactor site, costs for shipping radioactive hardware to and from a stationary processing facility for volume reduction are avoided. Initial licensing, testing, and installation in the field would follow the successful pattern achieved with rod consolidation technology.

  3. Aplicativo computacional para la planeación de la producción en una empresa fabricante de autopartes

    Directory of Open Access Journals (Sweden)

    Andrea Hernández

    2008-11-01

    Full Text Available This paper describes the development of a computer application for production planning and scheduling in a Colombian auto parts company. The application integrates sales forecasts and firm orders to calculate a Master Production Schedule, which is validated with a detailed shop floor scheduling plan. A case study illustrates the functionality of the proposed application and compares the computer-calculated production plans with current practices. The results show that the implementation of the application can improve service levels and customer satisfaction, provided some prerequisites described in the paper are met.
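
    The core master-production-schedule logic such an application automates can be sketched as follows. The demand rule (the larger of forecast and firm orders per period) and the fixed-lot-size release policy are common textbook choices used here for illustration, not necessarily the company's actual rules, and the numbers are invented.

```python
# Illustrative Master Production Schedule (MPS) calculation.

def master_production_schedule(forecast, firm_orders, on_hand, lot_size):
    plan, balance = [], on_hand
    for f, o in zip(forecast, firm_orders):
        demand = max(f, o)                 # forecast consumed by firm orders
        lots = 0
        while balance + lots * lot_size < demand:
            lots += 1                      # release enough lots to cover demand
        balance = balance + lots * lot_size - demand
        plan.append({"demand": demand, "mps": lots * lot_size, "available": balance})
    return plan

for week, row in enumerate(master_production_schedule(
        forecast=[120, 120, 130, 140], firm_orders=[150, 100, 90, 60],
        on_hand=80, lot_size=100), start=1):
    print(f"week {week}: {row}")
```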

  4. ANÁLISE DIGITAL DE TERRENO UTILIZANDO A LINGUAGEM COMPUTACIONAL R: EXEMPLO DE APLICAÇÃO

    Directory of Open Access Journals (Sweden)

    Renê Jota Arruda de Macêdo

    2017-05-01

    Full Text Available Programming languages have become popular in several areas of the academic and scientific community. In the Geosciences, they emerge as potential tools for understanding the natural processes of the Earth's surface. This work briefly presents the R computational language and gives a short overview of the parameterization of surface elements of a given region from regularly spaced discrete data. An example of its application is then presented for deriving geometric parameters and digital terrain analysis (DTA) on a digital elevation model with 30 m spatial resolution. The free version of the RStudio software was used, which offers an intuitive graphical development environment with several facilities for implementing routines. With an active and open collaborative community, the R language applied to DTA allows beginning users to understand the basic aspects and to visualize the whole process of code implementation and analysis of the results.

  5. A Hardware Abstraction Layer in Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Korsholm, Stephan; Kalibera, Tomas

    2011-01-01

    Embedded systems use specialized hardware devices to interact with their environment, and since they have to be dependable, it is attractive to use a modern, type-safe programming language like Java to develop programs for them. Standard Java, as a platform-independent language, delegates access to devices, direct memory access, and interrupt handling to some underlying operating system or kernel, but in the embedded systems domain resources are scarce and a Java Virtual Machine (JVM) without an underlying middleware is an attractive architecture. The contribution of this article is a proposal for Java packages with hardware objects and interrupt handlers that interface to such a JVM. We provide implementations of the proposal directly in hardware, as extensions of standard interpreters, and finally with an operating system middleware. The latter solution is mainly seen as a migration path...

  6. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  7. MFTF supervisory control and diagnostics system hardware

    International Nuclear Information System (INIS)

    Butner, D.N.

    1979-01-01

    The Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) is a multiprocessor minicomputer system designed so that for most single-point failures, the hardware may be quickly reconfigured to provide continued operation of the experiment. The system is made up of nine Perkin-Elmer computers - a mixture of 8/32's and 7/32's. Each computer has ports on a shared memory system consisting of two independent shared memory modules. Each processor can signal other processors through hardware external to the shared memory. The system communicates with the Local Control and Instrumentation System, which consists of approximately 65 microprocessors. Each of the six system processors has facilities for communicating with a group of microprocessors; the groups consist of from four to 24 microprocessors. There are hardware switches so that if an SCDS processor communicating with a group of microprocessors fails, another SCDS processor takes over the communication

  8. Representações digitais e interação incorporada: um estudo etnográfico de práticas científicas de modelagem computacional

    Directory of Open Access Journals (Sweden)

    Marko Monteiro

    2009-10-01

    Full Text Available This text discusses how virtual objects participate interactively in the production of knowledge in scientific practice. The article is based on the ethnographic observation of an interdisciplinary team of scientists whose work involves computer modelling of heat transfer in the human prostate. The ethnography found that although scientific imaging may be considered a form of 'simplifying' the apprehension of data, an intense interpretative process is required to achieve shared meanings concerning the images. These meanings are constructed through oral communication and embodied interactions with virtual objects during the interactions between scientists. A better understanding of these interpretative practices is needed given the growing importance of the use of 3D digital imaging and computational models in contemporary science. These techniques are increasingly used not only to describe truths about nature, but as powerful tools for intervening in the world.

  9. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
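
    For reference, the computation being accelerated is classical global (Needleman-Wunsch) alignment with traceback. The naive software version below keeps the full score matrix in memory; the article's contribution is precisely a space-efficient hardware architecture that avoids this cost, so the sketch only shows what is computed, not how the FPGA does it. Scoring values are illustrative.

```python
# Plain-software reference of global alignment (forward scan + traceback).

def global_align(a, b, match=2, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):                      # forward scan
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    out_a, out_b, i, j = [], [], n, m              # traceback
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j-1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b)), score[n][m]

print(global_align("GATTACA", "GCATGCA"))
```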

  10. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning boosted the field of artificial intelligence towards unprecedented achievements and application in several fields. Such prominent results were made in parallel with the first successful demonstrations of fault tolerant hardware for quantum information processing. To what extent deep learning can take advantage of the existence of hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards implementation of advanced quantum algorithms, including quantum deep learning.

  11. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian, Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate engineering designs among NASA Centers and customers, to include hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  12. Difusão de spins nucleares em meios porosos - uma abordagem computacional da RMN

    OpenAIRE

    Éverton Lucas-Oliveira

    2015-01-01

    Nuclear Magnetic Resonance (NMR) is an important technique employed in major areas of knowledge such as Physics, Chemistry and Medicine. Important NMR work applied to the study of the dynamics of molecules in fluids present in porous media has also given this technique prominence in the oil industry. The present project is based on some of these seminal works, reproducing, through physical-computational models, the main physical effects...

  13. Sistema computacional de gerenciamento para acompanhamento de desempenho de máquinas agrícolas instrumentadas com sensores Computer system management for monitoring performance of agricultural machinery instrumented with sensors

    Directory of Open Access Journals (Sweden)

    Oni Reasilvia de Almeida Oliveira Sichonany

    2011-10-01

    Full Text Available G-SADA is a management computer system that assists the farm manager and the agricultural machinery operator in decision making, reporting values of operations outside the expected standards. The system (a) allows monitoring of the machine's performance while it is operating in the field, with real-time functionality; (b) offers user mobility, since it can be accessed from any kind of computer, including mobile devices such as smartphones; and (c) provides access to the static database. One of the results of this work is the modeling of data and functions of an application that uses data stored dynamically while the machine operates in the field, provided by sensors installed on the agricultural machine, making information available in real time.

  14. Enabling Open Hardware through FOSS tools

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Software developers often take open file formats and tools for granted. When you publish code on github, you do not ask yourself if somebody will be able to open it and modify it. We need the same freedom in the open hardware world, to make it truly accessible for everyone.

  15. Hardware and layout aspects affecting maintainability

    International Nuclear Information System (INIS)

    Jayaraman, V.N.; Surendar, Ch.

    1977-01-01

    It has been found from maintenance experience at the Rajasthan Atomic Power Station that proper hardware and instrumentation layout can reduce maintenance and down-time on the related equipment. The problems faced in this connection and how they were solved are described. (M.G.B.)

  16. CAMAC high energy physics electronics hardware

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1977-01-01

    CAMAC hardware for high energy physics large spectrometers and control systems is reviewed, as is the development of CAMAC modules at the High Energy Laboratory, JINR (Dubna). The total number of crates used at the Laboratory is 179. The number of CAMAC modules of 120 different types exceeds 1700. The principles of organization and the structure of the developed CAMAC systems are described. (author)

  17. Design of hardware accelerators for demanding applications.

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2010-01-01

    This paper focuses on mastering the architecture development of hardware accelerators. It presents the results of our analysis of the main issues that have to be addressed when designing accelerators for modern demanding applications, using as an example the accelerator design for LDPC decoding.

  18. Building Correlators with Many-Core Hardware

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.

    2010-01-01

    Radio telescopes typically consist of multiple receivers whose signals are cross-correlated to filter out noise. A recent trend is to correlate in software instead of custom-built hardware, taking advantage of the flexibility that software solutions offer. Examples include e-VLBI and LOFAR. However,

  19. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  20. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  1. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to the SLS rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs), a fixed-length identifier used to identify information items. (2) CUI structure: composed of nine semantic fields that aid the user in recognizing its purpose.

  2. Digital Hardware Design Teaching: An Alternative Approach

    Science.gov (United States)

    Benkrid, Khaled; Clayton, Thomas

    2012-01-01

    This article presents the design and implementation of a complete review of undergraduate digital hardware design teaching in the School of Engineering at the University of Edinburgh. Four guiding principles have been used in this exercise: learning-outcome driven teaching, deep learning, affordability, and flexibility. This has identified…

  3. The fast Amsterdam multiprocessor (FAMP) system hardware

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Kieft, G.; Kisielewski, B.; Wiggers, L.W.; Engster, C.; Koningsveld, L. van

    1981-01-01

    The architecture of a multiprocessor system is described that will be used for on-line filter and second stage trigger applications. The system is based on the MC 68000 microprocessor from Motorola. Emphasis is placed on hardware aspects, in particular modularity, processor communication and interfacing, whereas the system software and the applications will be described in separate articles. (orig.)

  4. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
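
    The edge-detection architecture used to validate the camera corresponds, in software terms, to a 3x3 window operator swept over the image. The plain-Python Sobel sketch below is only a functional analogue of that low-level step (the FPGA implements it as a streaming pipeline); the synthetic image and the output format are illustrative.

```python
# Software analogue of a 3x3 Sobel edge-detection window operator.

def sobel(image):
    h, w = len(image), len(image[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * image[y+j-1][x+i-1] for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * image[y+j-1][x+i-1] for j in range(3) for i in range(3))
            out[y][x] = min(255, abs(gx) + abs(gy))   # |G| approximated by |Gx| + |Gy|
    return out

# Synthetic 8x8 image with a vertical step edge
img = [[0] * 4 + [255] * 4 for _ in range(8)]
for row in sobel(img):
    print(row)
```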

  5. Modelo computacional para suporte à decisão em áreas irrigadas. Parte II: testes e aplicação Computer model for decision support in irrigated areas. Part II: tests and application

    Directory of Open Access Journals (Sweden)

    Paulo A. Ferreira

    2006-12-01

    Full Text Available Part I of this research presented the development of a decision support model, called MCID, for planning and managing irrigation and/or drainage projects. Part II is aimed at testing and applying MCID. In a comparative test with the DRAINMOD model, drain spacings obtained with MCID were slightly larger or identical. The spacings obtained with MCID and DRAINMOD were considerably larger than those obtained through traditional methodologies for the design of drainage systems. The relative crop yield (YRT) obtained with MCID was, in general, lower than the one obtained with DRAINMOD, due to differences in the estimate of crop response to water deficit. In comparison with CROPWAT, very close results for YRT and for actual evapotranspiration were obtained. The test and application results indicated the potential of MCID as a decision support tool for irrigation and/or drainage projects.
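
    The difference in how the two models estimate crop response to water deficit can be illustrated with one common formulation, the FAO-33 yield response equation; this is given only as a reference point and is not necessarily the exact relation used by MCID or DRAINMOD:

```latex
1 - \frac{Y_a}{Y_m} = k_y \left( 1 - \frac{ET_a}{ET_m} \right)
```

    where Y_a/Y_m is the relative yield, ET_a/ET_m is the relative evapotranspiration, and k_y is a crop-specific yield response factor.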

  6. Modelo computacional para caracterización de células escamosas de citologías cérvico-uterinas

    OpenAIRE

    Martínez Abaunza Víctor Eduardo; Mendoza Castellanos Alfonso; Uribe Pérez Claudia Janeth

    2005-01-01

    The work was carried out between the Biomedical Engineering Research Group (GIIB) and the Structural, Functional and Clinical Pathology Research Group of the Universidad Industrial de Santander (UIS), together with the Faculty of Medicine of the Universidad Autónoma de Bucaramanga (UNAB); the main objective is to build a computational model for characterizing the cells present in a cervical-uterine cytology smear, in order to classify them as normal or dysplastic. ...

  7. LA DINÁMICA DE FLUIDOS COMPUTACIONAL, SU APLICACIÓN AL ESTUDIO DE LAS CARACTERÍSTICAS DE UN INTERCAMBIADOR DE TUBOS TÉRMICOS

    Directory of Open Access Journals (Sweden)

    David Fernández Rivas

    2005-09-01

    Full Text Available For the study and design of a thermosyphon-type heat exchanger, a novel numerical modelling technique, Computational Fluid Dynamics, is used for the first time in Cuba, a use that could save the country considerable resources. This work continues, with new contributions, a long line of research whose central objective has been the installation of thermosyphon heat pipes (ITTT). Computational simulation is a powerful tool used to determine parameters of interest in the operation of this installation. Computational Fluid Dynamics (CFD) programs are used to shorten experimentation time and to save material and human resources during the study of different variants of geometric arrangements of the heat pipes. The particularities that the different tube arrangements introduce into the heat exchange process are determined. This result makes an important contribution to the future construction of an industrial-scale thermosyphon. This research has attracted much interest because of what the new conditions imposed by the burning of Cuban fuel could mean for the efficiency of the boiler. Keywords: Computational Fluid Dynamics, heat pipes, thermosyphon.

  8. Identificação e estimação de ruído em redes DSL: uma abordagem baseada em inteligência computacional

    OpenAIRE

    FARIAS, Fabrício de Souza

    2012-01-01

    This work proposes the use of computational intelligence techniques to identify and estimate noise power in Digital Subscriber Line (DSL) networks in real time. A methodology based on Knowledge Discovery in Databases (KDD) for real-time noise detection and estimation was used. KDD is applied to select, pre-process and transform the data before the stage of applying...

  9. Os conceitos elementares de estatística a partir do homem vitruviano: uma experiência de ensino em ambiente computacional

    OpenAIRE

    Silva, Edgard Dias da

    2008-01-01

    The objective of this work was to investigate the potential of a teaching intervention on elementary statistics concepts with high school students, built around a cultural visit (a Leonardo Da Vinci exhibition) and using the computational environment as a tool. It is a quali-quantitative study that followed a quasi-experimental model, in a pre-test/intervention/post-test format, complemented by a qualitative analysis of the activities...

  10. Sistema computacional de realidad aumentada para la solidificación del aprendizaje en la educación básica

    OpenAIRE

    Ponce Tubay, Manuel Alexander; Párraga Muñoz, Sonia Monserrate; Ochoa Parrales, Jhonny Andrés

    2018-01-01

    The development of a computational tool to improve the different learning processes by introducing new ICTs with augmented reality, in order to awaken greater interest and interaction among students, thus contributing to the teaching and learning process in the school (Unidad Educativa). The research is aimed at improving each of the aspects required for teaching and learning, streamlining and innovating the way of learning with a software...

  11. Penetrômetro de impacto stolf - programa computacional de dados em EXCEL-VBA

    Directory of Open Access Journals (Sweden)

    Rubismar Stolf

    2014-06-01

    Full Text Available There are two main types of penetrometer, static and dynamic. In the first, a rod with a conical tip is pushed in continuously and slowly (quasi-static), while the reaction force, which equals the soil resistance, is recorded. In the second, the same rod is used, but it is driven in by a striking mass dropped in free fall, so the theory can be treated with Newtonian dynamics to obtain the resistance. The purpose of the program is to provide a fast tool that eases the processing of soil resistance data for this latter penetrometer, producing tables and graphs already in scientific format. Developed in the Visual Basic for Applications (VBA) programming language, the Excel application was chosen as the user interface because of its popularity. The program consists of four worksheets, two of them auxiliary and two essential, Plan1 and Plan2: Plan1 receives the input data (number of impacts and depth); as the data are typed in, the resistance table is built together with the corresponding graph, and up to 40 profiles can be tabulated. Plan2 performs the special function of equalizing, that is, standardizing the depth into constant layers, making it possible to unify all profiles in a single table. To do this, a layer thickness is chosen (for example, 5 cm); the program then collects the resistance data (MPa) from Plan1 and interpolates the values at depths every 5 cm. After doing this for all tables in Plan1, the program generates in Plan2 a single table with all profiles, the overall mean and the corresponding graphs. Profiles can be selected, for example only those measured in the planting row or in the inter-row, and profiles can be excluded at the user's discretion. As a complementary objective, the evolution of the "impact penetrometer" project, begun in 1982, whose technique came to be adopted in the scientific...
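
    The equalization step performed in Plan2 can be sketched directly: each profile, recorded at irregular depths, is linearly interpolated onto fixed layers (for example every 5 cm) so that all profiles share one table and can be averaged. The helper names and the example profiles below are invented; only the interpolation-to-fixed-layers idea comes from the abstract.

```python
# Sketch of standardizing impact-penetrometer resistance profiles onto fixed depth layers.

def interp(depth, depths, values):
    """Linear interpolation of one resistance profile at a single depth (cm)."""
    if depth <= depths[0]:
        return values[0]
    for (d0, v0), (d1, v1) in zip(zip(depths, values), zip(depths[1:], values[1:])):
        if depth <= d1:
            return v0 + (v1 - v0) * (depth - d0) / (d1 - d0)
    return values[-1]

def equalize(profiles, layer=5, max_depth=40):
    grid = list(range(0, max_depth + 1, layer))
    table = {d: [interp(d, dp, rs) for dp, rs in profiles] for d in grid}
    return {d: (vals, sum(vals) / len(vals)) for d, vals in table.items()}

# Two illustrative profiles: (depths in cm, resistance in MPa)
profiles = [([0, 7, 15, 26, 38], [0.4, 1.1, 2.3, 2.0, 1.6]),
            ([0, 6, 14, 22, 40], [0.5, 0.9, 1.8, 2.5, 1.9])]
for depth, (values, mean) in equalize(profiles).items():
    print(f"{depth:>2} cm  " + "  ".join(f"{v:.2f}" for v in values) + f"   mean={mean:.2f} MPa")
```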

  12. Fuel cell hardware-in-loop

    Energy Technology Data Exchange (ETDEWEB)

    Moore, R.M.; Randolf, G.; Virji, M. [University of Hawaii, Hawaii Natural Energy Institute (United States); Hauer, K.H. [Xcellvision (Germany)

    2006-11-08

    Hardware-in-loop (HiL) methodology is well established in the automotive industry. One typical application is the development and validation of control algorithms for drive systems by simulating the vehicle plus the vehicle environment in combination with specific control hardware as the HiL component. This paper introduces the use of a fuel cell HiL methodology for fuel cell and fuel cell system design and evaluation-where the fuel cell (or stack) is the unique HiL component that requires evaluation and development within the context of a fuel cell system designed for a specific application (e.g., a fuel cell vehicle) in a typical use pattern (e.g., a standard drive cycle). Initial experimental results are presented for the example of a fuel cell within a fuel cell vehicle simulation under a dynamic drive cycle. (author)

  13. Hardware and software status of QCDOC

    International Nuclear Information System (INIS)

    Boyle, P.A.; Chen, D.; Christ, N.H.; Clark, M.; Cohen, S.D.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Mawhinney, R.D.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2004-01-01

    QCDOC is a massively parallel supercomputer whose processing nodes are based on an application-specific integrated circuit (ASIC). This ASIC was custom-designed so that crucial lattice QCD kernels achieve an overall sustained performance of 50% on machines with several 10,000 nodes. This strong scalability, together with low power consumption and a price/performance ratio of $1 per sustained MFlops, enable QCDOC to attack the most demanding lattice QCD problems. The first ASICs became available in June of 2003, and the testing performed so far has shown all systems functioning according to specification. We review the hardware and software status of QCDOC and present performance figures obtained in real hardware as well as in simulation

  14. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  15. A Scalable Approach for Hardware Semiformal Verification

    OpenAIRE

    Grimm, Tomas; Lettnin, Djones; Hübner, Michael

    2018-01-01

    The current verification flow of complex systems uses different engines synergistically: virtual prototyping, formal verification, simulation, emulation and FPGA prototyping. However, none is able to verify a complete architecture. Furthermore, hybrid approaches aiming at complete verification use techniques that lower the overall complexity by increasing the abstraction level. This work focuses on the verification of complex systems at the RT level to handle the hardware peculiarities. Our r...

  16. Hardware Design of a Smart Meter

    OpenAIRE

    Ganiyu A. Ajenikoko; Anthony A. Olaomi

    2014-01-01

    Smart meters are electronic measurement devices used by utilities to communicate information for billing customers and operating their electric systems. This paper presents the hardware design of a smart meter. Sensing and circuit protection circuits are included in the design of the smart meter, in which resistors are naturally a fundamental part of the electronic design. Smart meters provide a route for energy savings, real-time pricing, automated data collection and elimina...

  17. Optimization Strategies for Hardware-Based Cofactorization

    Science.gov (United States)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  18. Particle Transport Simulation on Heterogeneous Hardware

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    CPUs and GPGPUs. About the speaker Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculat...

  19. High exposure rate hardware ALARA plan

    International Nuclear Information System (INIS)

    Nellesen, A.L.

    1996-10-01

    This as-low-as-reasonably-achievable (ALARA) review describes the engineering and administrative controls used to manage personnel exposure and to control contamination levels and airborne radioactivity concentrations. High exposure rate hardware (HERH) waste consists of hardware found in the N-Fuel Storage Basin with a contact dose rate greater than 1 R/hr, together with used filters. This waste will be collected in the fuel baskets at various locations in the basins.

  20. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  1. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), a simulation model for fault injection is developed in this work to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults by injecting single bit-flip and stuck-at-x faults into the internal registers of the processor and into memory cells. The fault locations cover all registers and memory cells, and the distribution of faults over locations is chosen randomly from a uniform probability distribution. Using this model, we predicted the reliability and the masking effect of an application software in a digital system, the Interposing Logic System (ILS) of a nuclear power plant, considering four software operational profiles. The results show that the software masking effect on hardware faults must be properly considered to predict system dependability accurately in the operational phase, because the masking effect takes different values depending on the operational profile.
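
    A purely illustrative software sketch of the kind of fault injection described above (not the authors' VHDL model; the register width and the toy application are assumptions): single bit-flip and stuck-at faults are injected into a simulated register value and the fraction of injections masked by the application computation is counted.

```python
import random

WIDTH = 16  # register width in bits (assumption for the sketch)

def application(x):
    # Toy "application software": only the low byte influences the output,
    # so faults in the high byte are masked at the software level.
    return (x & 0xFF) * 3 % 251

def inject(value, bit, mode):
    # mode: 'flip', 'stuck0' or 'stuck1'
    mask = 1 << bit
    if mode == "flip":
        return value ^ mask
    if mode == "stuck0":
        return value & ~mask
    return value | mask  # stuck1

random.seed(0)
trials, masked = 20000, 0
for _ in range(trials):
    reg = random.getrandbits(WIDTH)                 # fault-free register content
    bit = random.randrange(WIDTH)                   # uniform fault location
    mode = random.choice(["flip", "stuck0", "stuck1"])
    faulty = inject(reg, bit, mode)
    if application(faulty) == application(reg):     # fault did not reach the output
        masked += 1

print(f"masking probability ~ {masked / trials:.3f}")
```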

  2. A Hardware Lab Anywhere At Any Time

    Directory of Open Access Journals (Sweden)

    Tobias Schubert

    2004-12-01

    Full Text Available Scientific technical courses are an important component in any student's education. These courses are usually characterised by the fact that the students carry out experiments in special laboratories. This leads to extremely high costs and a reduction in the maximum number of possible participants. From this traditional point of view, it doesn't seem possible to realise the concepts of a Virtual University in the context of sophisticated technical courses, since the students must be "on the spot". In this paper we introduce the so-called Mobile Hardware Lab, which makes student participation possible at any time and from any place, while nevertheless conveying a feeling of being present in a laboratory. This is accomplished with a special Learning Management System in combination with hardware components, corresponding to a fully equipped laboratory workstation, which are lent out to the students for the duration of the lab. The experiments are performed and solved at home, then handed in electronically. Judging and marking are also both performed electronically. Since 2003 the Mobile Hardware Lab has been offered in a completely web-based form.

  3. Instrument hardware and software upgrades at IPNS

    International Nuclear Information System (INIS)

    Worlton, Thomas; Hammonds, John; Mikkelson, D.; Mikkelson, Ruth; Porter, Rodney; Tao, Julian; Chatterjee, Alok

    2006-01-01

    IPNS is in the process of upgrading their time-of-flight neutron scattering instruments with improved hardware and software. The hardware upgrades include replacing old VAX Qbus and Multibus-based data acquisition systems with new systems based on VXI and VME. Hardware upgrades also include expanded detector banks and new detector electronics. Old VAX Fortran-based data acquisition and analysis software is being replaced with new software as part of the ISAW project. ISAW is written in Java for ease of development and portability, and is now used routinely for data visualization, reduction, and analysis on all upgraded instruments. ISAW provides the ability to process and visualize the data from thousands of detector pixels, each having thousands of time channels. These operations can be done interactively through a familiar graphical user interface or automatically through simple scripts. Scripts and operators provided by end users are automatically included in the ISAW menu structure, along with those distributed with ISAW, when the application is started

  4. Sistema Computacional para Ajuste de Funções Densidade de Probabilidade

    Directory of Open Access Journals (Sweden)

    Daniel Henrique Breda Binoti

    Full Text Available ABSTRACT This work aimed to start, implement and validate a project to build a computerized system for fitting probability density functions. FitFD was developed in the Java programming language, using the NetBeans 7.1 IDE (Integrated Development Environment) and the JDK 7.3 (Java Development Kit) as the development environment. The system was tested under Windows. The following probability density functions were implemented in the system: Weibull (2P, 3P, 2P with minimum DBH as location, truncated 3P), hyperbolic (2P, 3P, 2P with minimum DBH as location, truncated 3P), log-logistic (2P, 3P, 2P with minimum DBH as location), generalized logistic, Fatigue life (2P and 3P) and Fréchet (2P and 3P). The developed system helps users define and choose the pdf that best meets their needs, although improvements are still necessary. The project proved efficient for fitting probability density functions.
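
    As a rough illustration of the kind of fit such a system performs (this sketch uses SciPy rather than the Java system described, and the diameter data are synthetic), two- and three-parameter Weibull distributions can be fitted by maximum likelihood:

```python
import numpy as np
from scipy import stats

# Hypothetical diameter-at-breast-height (DBH) sample, in cm.
dbh = stats.weibull_min.rvs(c=2.3, loc=5.0, scale=12.0, size=500, random_state=42)

# Two-parameter Weibull: location fixed at zero.
c2, _, scale2 = stats.weibull_min.fit(dbh, floc=0.0)
# Three-parameter Weibull: shape, location and scale all estimated.
c3, loc3, scale3 = stats.weibull_min.fit(dbh)

print(f"2P Weibull: shape={c2:.2f}, scale={scale2:.2f}")
print(f"3P Weibull: shape={c3:.2f}, loc={loc3:.2f}, scale={scale3:.2f}")

# Kolmogorov-Smirnov statistic as a simple goodness-of-fit check.
print(stats.kstest(dbh, "weibull_min", args=(c3, loc3, scale3)))
```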

  5. 2018 NA62 Status Report to the CERN SPSC

    CERN Document Server

    NA62, Collaboration

    2018-01-01

    The status of the NA62 experiment is reported. The ongoing activities on detectors and hardware are summarised and the status of the data processing is reviewed. The result from the $K^{+}\rightarrow\pi^{+}\nu\bar{\nu}$ ...

  6. Efectos de la simulación computacional en la comprensión de la distribución binomial y la distribución de proporciones

    OpenAIRE

    Martínez, Johanna; Yáñez, Gabriel

    2014-01-01

    This work presents a research project, based on the instrumental approach, that describes the effect of computer simulation on the understanding of the binomial distribution and the distribution of sample proportions.

  7. Hardware implementation of a GFSR pseudo-random number generator

    Science.gov (United States)

    Aiello, G. R.; Budinich, M.; Milotti, E.

    1989-12-01

    We describe the hardware implementation of a pseudo-random number generator of the "Generalized Feedback Shift Register" (GFSR) type. After brief theoretical considerations we describe two versions of the hardware, the tests done and the performances achieved.
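
    For readers unfamiliar with the generator family, a small software sketch of a GFSR sequence follows (the lag pair below is an illustrative choice, and the naive seeding shown would not be acceptable in the hardware described):

```python
# Generalized Feedback Shift Register (GFSR): w[n] = w[n-p] XOR w[n-q].
P, Q = 98, 27          # lag pair from the trinomial x^98 + x^27 + 1 (illustrative choice)
WORD_BITS = 32

def gfsr(seed_words):
    """Yield an endless stream of GFSR words from p seed words."""
    assert len(seed_words) == P
    buf = list(seed_words)
    i = 0
    while True:
        w = buf[(i - P) % P] ^ buf[(i - Q) % P]
        buf[i % P] = w
        i += 1
        yield w

# Naive seeding for demonstration only; real GFSR implementations need
# carefully initialized, linearly independent seed words.
import random
random.seed(1)
gen = gfsr([random.getrandbits(WORD_BITS) for _ in range(P)])
print([next(gen) for _ in range(5)])
```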

  8. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, the data is received by a mySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served though REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  9. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.

  10. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future
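
    As a worked example of the capacity relationship mentioned in the article (the drive geometry below is hypothetical), the capacity is simply the product of the four factors:

```python
# Hypothetical drive geometry used only to illustrate the capacity formula.
disk_sides = 8            # number of recording surfaces
tracks_per_side = 60_000
sectors_per_track = 1_200
bytes_per_sector = 512

capacity_bytes = disk_sides * tracks_per_side * sectors_per_track * bytes_per_sector
print(f"{capacity_bytes / 1e9:.1f} GB")   # ~294.9 GB for these numbers
```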

  11. The Impact of Flight Hardware Scavenging on Space Logistics

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS hardware and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have had only a matter of hours of operation yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications including expanding the power infrastructure, and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars' worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.

  12. Management of cladding hulls and fuel hardware

    International Nuclear Information System (INIS)

    1985-01-01

    The reprocessing of spent fuel from power reactors based on chop-leach technology produces a solid waste product of cladding hulls and other metallic residues. This report describes the current situation in the management of fuel cladding hulls and hardware. Information is presented on the material composition of such waste together with the heating effects due to neutron-induced activation products and fuel contamination. As no country has established a final disposal route and the corresponding repository, this report also discusses possible disposal routes and various disposal options under consideration at present

  13. Open Hardware for CERN's accelerator control systems

    International Nuclear Information System (INIS)

    Bij, E van der; Serrano, J; Wlostowski, T; Cattin, M; Gousiou, E; Sanchez, P Alvarez; Boccardi, A; Voumard, N; Penacoba, G

    2012-01-01

    The accelerator control systems at CERN will be upgraded and many electronics modules such as analog and digital I/O, level converters and repeaters, serial links and timing modules are being redesigned. The new developments are based on the FPGA Mezzanine Card, PCI Express and VME64x standards while the Wishbone specification is used as a system on a chip bus. To attract partners, the projects are developed in an 'Open' fashion. Within this Open Hardware project new ways of working with industry are being evaluated and it has been proven that industry can be involved at all stages, from design to production and support.

  14. Hardware for computing the integral image

    OpenAIRE

    Fernández-Berni, J.; Rodríguez-Vázquez, Ángel; Río, Rocío del; Carmona-Galán, R.

    2015-01-01

    The present invention, as expressed in the title of this specification, consists of mixed-signal hardware for computing the integral image on the focal plane by means of an array of basic sensing-processing cells whose interconnection can be reconfigured through peripheral circuitry, making possible a very efficient implementation of a processing task that is very useful in computer vision, namely the computation of the integral image, in scenarios such as monit...

  15. Development of Hardware Dual Modality Tomography System

    Directory of Open Access Journals (Sweden)

    R. M. Zain

    2009-06-01

    Full Text Available The paper describes the hardware development and performance of the Dual Modality Tomography (DMT) system. DMT consists of optical and capacitance sensors. The optical sensors consist of 16 LEDs and 16 photodiodes. The Electrical Capacitance Tomography (ECT) electrode design uses eight electrode plates as the detecting sensor. The digital timing and control unit has been developed to control the light projection of the optical emitters, to switch the capacitance electrodes and to synchronize the operation of data acquisition. As a result, the developed system is able to deliver a maximum of 529 data sets per second from the signal conditioning circuit to the computer.

  16. Fast Gridding on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Schaeffter, Tobias; Noe, Karsten Østergaard

    2007-01-01

    is by far the most time-consuming of the three steps (Table 1). Modern graphics cards (GPUs) can be utilised as fast parallel processors provided that algorithms are reformulated as parallel solutions. The purpose of this work is to test the hypothesis that a non-cartesian reconstruction can be efficiently... implemented on graphics hardware, giving a significant speedup compared to CPU-based alternatives. We present a novel GPU implementation of the convolution step that overcomes the problems of memory bandwidth that have limited the speed of previous GPU gridding algorithms [2].

  17. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry- standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of offchip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  18. List search hardware for interpretive software

    CERN Document Server

    Altaber, Jacques; Mears, B; Rausch, R

    1979-01-01

    Interpreted languages, e.g. BASIC, are simple to learn, easy to use, quick to modify and in general 'user-friendly'. However, a critically time-consuming process during interpretation is that of list searching. A special microprogrammed device for fast list searching has therefore been developed at the SPS Division of CERN. It uses bit-sliced hardware. Fast algorithms perform search, insert and delete of a six-character name and its value in a list of up to 1000 pairs. The prototype shows retrieval times of the order of 10-30 microseconds. (11 refs).

  19. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon spectrometer. The processor will fit candidate Muon tracks in the drift tubes in real time, improving significantly the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.

  20. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

    Full Text Available Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints as well as their optimal allocations on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using these three main stages, dynamic partial reconfiguration and mixed integer programming, pipelined scheduling and efficient placement are achieved and enable parallel computing of the task graph on the reconfigurable devices by optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement of resource utilization of 12.45% of the available reconfigurable resources corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph spanning is minimized by 4% compared to sequential execution of the graph.

  1. Is Hardware Removal Recommended after Ankle Fracture Repair?

    Directory of Open Access Journals (Sweden)

    Hong-Geun Jung

    2016-01-01

    Full Text Available The indications and clinical necessity for routine hardware removal after treating ankle or distal tibia fracture with open reduction and internal fixation are disputed even when hardware-related pain is insignificant. Thus, we determined the clinical effects of routine hardware removal irrespective of the degree of hardware-related pain, especially from the perspective of patients' daily activities. This study was conducted on 80 consecutive cases (78 patients) treated by surgery and hardware removal after bony union. There were 56 ankle and 24 distal tibia fractures. The hardware-related pain, ankle joint stiffness, discomfort on ambulation, and patient satisfaction were evaluated before and at least 6 months after hardware removal. Pain score before hardware removal was 3.4 (range 0 to 6) and decreased to 1.3 (range 0 to 6) after removal. 58 (72.5%) patients experienced improved ankle stiffness, 65 (81.3%) reported less discomfort while walking on uneven ground, and 63 (80.8%) patients were satisfied with hardware removal. These results suggest that routine hardware removal after ankle or distal tibia fracture could ameliorate hardware-related pain and improve daily activities and patient satisfaction even when the hardware-related pain is minimal.

  2. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

    I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their sub-contractors and the Boeing Prime contract out of Johnson Space Center, provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its' contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as, the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010; and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.

  3. CASIS Fact Sheet: Hardware and Facilities

    Science.gov (United States)

    Solomon, Michael R.; Romero, Vergel

    2016-01-01

    Vencore is a proven information solutions, engineering, and analytics company that helps our customers solve their most complex challenges. For more than 40 years, we have designed, developed and delivered mission-critical solutions as our customers' trusted partner. The Engineering Services Contract, or ESC, provides engineering and design services to the NASA organizations engaged in development of new technologies at the Kennedy Space Center. Vencore is the ESC prime contractor, with teammates that include Stinger Ghaffarian Technologies, Sierra Lobo, Nelson Engineering, EASi, and Craig Technologies. The Vencore team designs and develops systems and equipment to be used for the processing of space launch vehicles, spacecraft, and payloads. We perform flight systems engineering for spaceflight hardware and software, and develop technologies that serve NASA's mission requirements and operations needs for the future. Our Flight Payload Support (FPS) team at Kennedy Space Center (KSC) provides engineering, development, and certification services as well as payload integration and management services to NASA and commercial customers. Our main objective is to assist principal investigators (PIs) in integrating their science experiments into payload hardware for research aboard the International Space Station (ISS), commercial spacecraft, suborbital vehicles, parabolic flight aircraft, and ground-based studies. Vencore's FPS team is AS9100 certified and a recognized implementation partner for the Center for Advancement of Science in Space (CASIS).

  4. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and hardware experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM microcontroller. · Introduces number systems and signal transmission methods · Reviews logic gates, registers, multiplexers, decoders and memory · Provides an overview and examples of the ARM instruction set · Uses Keil development tools for writing and debugging ARM assembly language programs · Hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real-time clock configuration, binary input to 7-segment display, creating ...

  5. Introduction to Hardware Security and Trust

    CERN Document Server

    Wang, Cliff

    2012-01-01

    The emergence of a globalized, horizontal semiconductor business model raises a set of concerns involving the security and trust of the information systems on which modern society is increasingly reliant for mission-critical functionality. Hardware-oriented security and trust issues span a broad range including threats related to the malicious insertion of Trojan circuits designed, e.g., to act as a 'kill switch' to disable a chip, to integrated circuit (IC) piracy, and to attacks designed to extract encryption keys and IP from a chip. This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society...

  6. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or real-time subtraction in DSA. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such algorithms. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and how to find the appropriate algorithms. Finally, some results on computation time and the usefulness of median filtering in radiographic imaging are given.
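
    A compact software analogue of the window-sort view of median filtering described above (pure NumPy, not the parallel pipeline implementation discussed in the paper; the demo image is made up):

```python
import numpy as np

def median_filter_2d(img, k=3):
    """Naive 2D median filter: sort the k*k window values and take the middle one."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = padded[r:r + k, c:c + k].ravel()
            out[r, c] = np.sort(window)[window.size // 2]  # rank-order operator, rank k*k//2
    return out

# Small demo image with an impulse ("salt") artefact that the filter removes.
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255
print(median_filter_2d(img))
```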

  7. Handbook of hardware/software codesign

    CERN Document Server

    Teich, Jürgen

    2017-01-01

    This handbook presents fundamental knowledge on the hardware/software (HW/SW) codesign methodology. Contributing expert authors look at key techniques in the design flow as well as selected codesign tools and design environments, building on basic knowledge to consider the latest techniques. The book enables readers to gain real benefits from the HW/SW codesign methodology through explanations and case studies which demonstrate its usefulness. Readers are invited to follow the progress of design techniques through this work, which assists readers in following current research directions and learning about state-of-the-art techniques. Students and researchers will appreciate the wide spectrum of subjects that belong to the design methodology from this handbook.

  8. Battery Management System Hardware Concepts: An Overview

    Directory of Open Access Journals (Sweden)

    Markus Lelie

    2018-03-01

    Full Text Available This paper focuses on the hardware aspects of battery management systems (BMS) for electric vehicle and stationary applications. The purpose is to give an overview of existing concepts in state-of-the-art systems and to enable the reader to estimate what has to be considered when designing a BMS for a given application. After a short analysis of general requirements, several possible topologies for battery packs and their consequences for the BMS' complexity are examined. Four battery packs that were taken from commercially available electric vehicles are shown as examples. Later, implementation aspects regarding measurement of needed physical variables (voltage, current, temperature, etc.) are discussed, as well as balancing issues and strategies. Finally, safety considerations and reliability aspects are investigated.

  9. EPICS: Allen-Bradley hardware reference manual

    International Nuclear Information System (INIS)

    Nawrocki, G.

    1993-01-01

    This manual covers the following hardware: Allen-Bradley 6008 -- SV VMEbus I/O scanner; Allen-Bradley universal I/O chassis 1771-A1B, -A2B, -A3B, and -A4B; Allen-Bradley power supply module 1771-P4S; Allen-Bradley 1771-ASB remote I/O adapter module; Allen-Bradley 1771-IFE analog input module; Allen-Bradley 1771-OFE analog output module; Allen-Bradley 1771-IG(D) TTL input module; Allen-Bradley 1771-OG(d) TTL output; Allen-Bradley 1771-IQ DC selectable input module; Allen-Bradley 1771-OW contact output module; Allen-Bradley 1771-IBD DC (10--30V) input module; Allen-Bradley 1771-OBD DC (10--60V) output module; Allen-Bradley 1771-IXE thermocouple/millivolt input module; and the Allen-Bradley 2705 RediPANEL push button module

  10. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
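
    A rough, illustrative sketch of the partitioning described above, assuming a complete binary tree of compute nodes (this is not the patented implementation): adjacent tiers are grouped pairwise into non-overlapping test levels, two offset sets of levels together cover every parent-child link, and each test cell is the subtree rooted at an upper-tier node restricted to its level.

```python
from collections import defaultdict

DEPTH = 6  # tiers 0..5 of a complete binary tree of compute nodes (illustrative size)

def tier_nodes(tier):
    """Node ids on a given tier of a complete binary tree stored implicitly."""
    return range(2 ** tier - 1, 2 ** (tier + 1) - 1)

def level_set(first_tier):
    """Non-overlapping test levels of two adjacent tiers, starting at first_tier."""
    return [(t, t + 1) for t in range(first_tier, DEPTH - 1, 2)]

# Two sets of non-overlapping test levels; together they cover every parent-child link.
set_a, set_b = level_set(0), level_set(1)

def test_cells(levels):
    """Each test cell: a subtree root on the upper tier plus its children on the lower tier."""
    cells = defaultdict(list)
    for i, (upper, lower) in enumerate(levels):
        lower_nodes = set(tier_nodes(lower))
        for root in tier_nodes(upper):
            kids = [c for c in (2 * root + 1, 2 * root + 2) if c in lower_nodes]
            cells[i].append([root] + kids)
    return cells

for name, levels in (("set A", set_a), ("set B", set_b)):
    cells = test_cells(levels)
    total = sum(len(v) for v in cells.values())
    print(f"{name}: levels {levels}, {total} test cells")
```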

  11. Theorem Proving in Intel Hardware Design

    Science.gov (United States)

    O'Leary, John

    2009-01-01

    For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium® 4 and Core™ i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999: today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, and outline some future directions.

  12. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
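
    A behavioural sketch of one stochastic spiking neuron of the general kind described (software only, with made-up constants; the published design is a digital FPGA circuit, not this model):

```python
import numpy as np

rng = np.random.default_rng(7)

def stochastic_lif(inputs, weights, leak=0.9, threshold=1.0, steps=200):
    """Leaky integrate-and-fire neuron whose firing decision is probabilistic:
    the closer the membrane potential is to threshold, the likelier a spike."""
    v = 0.0
    spikes = []
    for _ in range(steps):
        v = leak * v + float(np.dot(weights, inputs()))          # integrate weighted input
        p_fire = 1.0 / (1.0 + np.exp(-8.0 * (v - threshold)))    # stochastic threshold
        if rng.random() < p_fire:
            spikes.append(1)
            v = 0.0                                              # reset after a spike
        else:
            spikes.append(0)
    return spikes

# Two Poisson-like binary input channels with different firing rates.
inputs = lambda: rng.random(2) < np.array([0.30, 0.10])
weights = np.array([0.6, 0.4])
out = stochastic_lif(inputs, weights)
print(f"output firing rate ~ {np.mean(out):.2f}")
```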

  13. Communication Estimation for Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a general high level estimation model of communication throughput for the implementation of a given communication protocol. The model, which is part of a larger model that includes component price, software driver object code size and hardware driver area, is intended...... to be general enough to be able to capture the characteristics of a wide range of communication protocols and yet to be sufficiently detailed as to allow the designer or design tool to efficiently explore tradeoffs between throughput, bus widths, burst/non-burst transfers and data packing strategies. Thus...... it provides a basis for decision making with respect to communication protocols/components and communication driver design in the initial design space exploration phase of a co-synthesis process where a large number of possibilities must be examined and where fast estimators are therefore necessary. The fill...

  14. The double Chooz hardware trigger system

    Energy Technology Data Exchange (ETDEWEB)

    Cucoanes, Andi; Beissel, Franz; Reinhold, Bernd; Roth, Stefan; Stahl, Achim; Wiebusch, Christopher [RWTH Aachen (Germany)

    2008-07-01

    The double Chooz neutrino experiment aims to improve the present knowledge of the θ13 mixing angle using two similar detectors placed at ~280 m and 1 km, respectively, from the Chooz power plant reactor cores. The detectors measure the disappearance of reactor antineutrinos. The hardware trigger has to be very efficient for antineutrinos as well as for various types of background events. The triggering condition is based on discriminated PMT sum signals and the multiplicity of groups of PMTs. The talk gives an overview of the double Chooz experiment and explains the requirements of the trigger system. The resulting concept and its performance are shown, as well as first results from a prototype system.

  15. Caracterização computacional de padrões estruturais em seqüências de DNA relacionadas a processos em redes metabólicas

    OpenAIRE

    Laurita dos Santos

    2009-01-01

    In recent decades, an enormous amount of information about the functioning of biological systems has been made available in public-access databases. Computing applied to biology, or bioinformatics, has contributed to the computational analysis of increasingly information-rich biological data. In this context, this work aims to analyze and characterize the structure of the nucleic acid DNA through mathematical and computational techniques. The characterization techniques ...

  16. Utilização de código aberto de dinâmica de fluidos computacional para estudo de placas de orifício

    OpenAIRE

    Thiago Teixeira Kunz

    2014-01-01

    This work presents numerical simulations of fluid flow through orifice plates, primary flow-measurement elements, compared against the results expected by international standards. The discharge coefficient used to determine the flow rate in a pipe was obtained numerically by applying the low-Reynolds-number turbulence model proposed by Launder-Sharma, solved with an open-source Computational Fluid Dynamics code. ...

  17. Sistematização do dimensionamento técnico e econômico de sistemas fotovoltaicos isolados por meio de programa computacional Systematization of the technical and economic sizing of isolated photovoltaic systems through specific software

    Directory of Open Access Journals (Sweden)

    José A. Marini

    2005-04-01

    Full Text Available One of the main questions concerning solar energy is how to compare it, technically and economically, with other energy sources, both alternative and conventional (such as the electric grid). The purpose of this work was to develop a computer program that gathers the main technical and economic data to identify, through microeconomic analysis methods, the commercial viability of sized photovoltaic systems, besides considering the benefits arising from the energy generation itself. In the microeconomic analysis of solar energy, the costs of energy from the conventional grid over the useful life of the components of the photovoltaic electricity generation system were identified by studying the initial investment and maintenance costs of the system. For the comparison with the conventional sources, the electric grid and a diesel generator set, three cost scenarios for the photovoltaic panels and two for the availability factor of the diesel set were used. The results show that the lower the panel cost and the farther the site is from the electric grid, the more advantageous the photovoltaic system becomes.
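
    A toy sketch of the kind of comparison such a program performs (all figures below are invented placeholders, not values from the paper): the present value of grid extension plus tariffs is compared with the present value of a photovoltaic system over the component lifetime.

```python
def present_value(annual_cost, years, rate):
    """Present value of a constant annual cost over `years` at discount `rate`."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical inputs (placeholders, not data from the study).
years, rate = 20, 0.08
annual_kwh = 1500.0

# Option 1: extend the grid to the site and pay the tariff.
distance_km, line_cost_per_km, tariff = 3.0, 9000.0, 0.15
grid_cost = distance_km * line_cost_per_km + present_value(annual_kwh * tariff, years, rate)

# Option 2: stand-alone photovoltaic system.
pv_capex, pv_annual_om = 12000.0, 250.0
pv_cost = pv_capex + present_value(pv_annual_om, years, rate)

print(f"grid option : {grid_cost:9.0f}")
print(f"PV option   : {pv_cost:9.0f}")
print("cheaper:", "PV" if pv_cost < grid_cost else "grid")
```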

  18. Improvement in hippocampal kindling analysis through computational processing data Aprimorando a análise do modelo de kindling hipocampal com o auxílio de processamento computacional

    Directory of Open Access Journals (Sweden)

    Joacir Graciolli Cordeiro

    2009-09-01

    Full Text Available The kindling phenomenon is classically investigated in epileptology research. The present study aims to provide further information about hippocampal kindling through computational data processing. Adult Wistar rats were implanted with dorsal hippocampal and frontal neocortical electrodes to perform the experiment. The data processing was carried out using the Spike2 and Matlab software. An inverse relationship between the number of "wet dog shakes" and the development of Racine's motor stages was found. Moreover, a significant increase in the afterdischarge (AD) duration and its frequency content was observed. The highest frequencies were, however, only reached at the beginning of behavioral seizures. During the primary AD, fast transients (ripples) were registered in both hippocampi, superimposed on slower waves. This experiment highlights the usefulness of computational processing applied to animal models of temporal lobe epilepsy and supports a relevant role of the high-frequency discharges in temporal epileptogenesis.

  19. SIMULACIÓN COMPUTACIONAL DE UN SISTEMA FRIGORÍFICO Y ANÁLISIS DE SUSTITUCIÓN DE REFRIGERANTES NOCIVOS A LA CAPA DE OZONO

    Directory of Open Access Journals (Sweden)

    Boris Henry Rocha Mercado

    2005-01-01

    Full Text Available This work develops a mathematical model of a refrigeration system in steady state and a computer simulation of its thermal performance. The system studied was designed to work with R-12 as the refrigerant and includes among its components a hermetic compressor, finned-tube condenser and evaporator, a capillary tube, a liquid separator and a filter-drier. The mathematical models of the system components were developed from the manufacturers' technical specifications and correlations available in the literature. A polytropic compression process is assumed in the fixed-displacement compressor; in the condenser and evaporator the single-phase and two-phase regions traversed by the refrigerant are considered, and in the capillary tube the variation of density and pressure along its length. The system of equations resulting from the various mathematical models was solved by the method of successive substitutions. This computer simulation model was used to analyze the thermal performance of the retrofit, where a 6% decrease in COP is found when R-12 is replaced by R-134a.
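
    A generic illustration of the successive-substitution (fixed-point) scheme mentioned above, applied to a small made-up pair of coupled equations rather than to the refrigeration model itself:

```python
import math

def successive_substitution(x0, y0, tol=1e-10, max_iter=200):
    """Solve x = cos(y), y = 0.5*sin(x) by repeatedly substituting the latest values."""
    x, y = x0, y0
    for it in range(max_iter):
        x_new = math.cos(y)
        y_new = 0.5 * math.sin(x_new)
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new, it + 1
        x, y = x_new, y_new
    raise RuntimeError("did not converge")

x, y, iters = successive_substitution(1.0, 0.0)
print(f"x = {x:.8f}, y = {y:.8f} after {iters} iterations")
```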

  20. Framework to development of applications to monitoring and analysis petroleum processes; Ambiente computacional para desenvolvimento de aplicacoes de monitoramento e analise de processos na industria do petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Luiz Affonso; Bezerra, Clauber; Feijo, Rafael; Eidelwein, Maria Emilia; Cunha, Dannilo Martins; Costa, Bruno [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil). Dept. de Engenharia de Computacao e Automacao; Souza, Alessandro [Centro Federal de Educacao Tecnologica do Rio Grande do Norte (CEFET/RN), Natal, RN (Brazil)

    2008-07-01

    This paper presents a system for the monitoring and visualization of plant processes. The system is able to collect data from several data servers, such as OPC servers, SCADA systems and OLE DB databases. The collected data are displayed to users through a fast human-machine interface. The system has a library of software components which can be easily configured and used, allowing users to develop their own applications on demand. The system has other features that allow users to easily share their applications with other users. In addition, users can access them from any computer connected to the company intranet. (author)

  1. Modelagem computacional e análise da salinização dos aqüíferos na área central de Recife

    OpenAIRE

    Luiz Ribeiro de Paiva, Anderson

    2004-01-01

    Groundwater is an important source of water for exploitation, either effectively or as a regional and local strategic reserve, and it needs to be conserved. However, groundwater quality is susceptible to being affected by socio-economic activities, notably by land use and occupation, causing groundwater contamination that is becoming increasingly common. Currently, one of the contamination problems occurring most often worldwide is ...

  2. Implementação de um sistema computacional, com técnicas de realidade virtual, para auxiliar na educação ambiental

    Directory of Open Access Journals (Sweden)

    C. N. Macedo

    2010-12-01

    Full Text Available The university constantly seeks advances and innovations, leading to an innovative process of fast and/or instantaneous exchange of information. Thus, information and technology are dealt with at every moment of everyday life. This also happens in the school environment; consequently, the school cannot ignore all the knowledge accessible and available through computing, which offers educational software, the internet and other resources. The article proposes the use of virtual reality techniques to provide basic knowledge about the components of a photovoltaic electricity generation system in a home. In the virtual environment built in this work with the VRML programming language, the user can "navigate" through the rooms of a virtual house, identifying the components of the photovoltaic system and their respective loads.

  3. Hardware descriptions of the I and C systems for NPP

    International Nuclear Information System (INIS)

    Lee, Cheol Kwon; Oh, In Suk; Park, Joo Hyun; Kim, Dong Hoon; Han, Jae Bok; Shin, Jae Whal; Kim, Young Bak

    2003-09-01

    The hardware specifications for I and C systems of the SNPP (Standard Nuclear Power Plant) are reviewed in order to acquire the hardware requirements and specifications of KNICS (Korea Nuclear Instrumentation and Control System). In the study, we investigated hardware requirements, hardware configuration, hardware specifications, man-machine hardware requirements, interface requirements with other systems, and data communication requirements that are applicable to the SNPP. We reviewed these aspects for control systems, protection systems, monitoring systems, information systems, and process instrumentation systems. Through the study, we described the requirements and specifications of digital systems, focusing on the microprocessor and the communication interface, and repeated this for analog systems, focusing on the manufacturing companies. It is expected that the experience acquired from this research will provide vital input for the development of the KNICS.

  4. Expert System analysis of non-fuel assembly hardware and spent fuel disassembly hardware: Its generation and recommended disposal

    International Nuclear Information System (INIS)

    Williamson, D.A.

    1991-01-01

    Almost all of the effort being expended on radioactive waste disposal in the United States is being focused on the disposal of spent Nuclear Fuel, with little consideration for other areas that will have to be disposed of in the same facilities. One area of radioactive waste that has not been addressed adequately because it is considered a secondary part of the waste issue is the disposal of the various Non-Fuel Bearing Components of the reactor core. These hardware components fall somewhat arbitrarily into two categories: Non-Fuel Assembly (NFA) hardware and Spent Fuel Disassembly (SFD) hardware. This work provides a detailed examination of the generation and disposal of NFA hardware and SFD hardware by the nuclear utilities of the United States as it relates to the Civilian Radioactive Waste Management Program. All available sources of data on NFA and SFD hardware are analyzed with particular emphasis given to the Characteristics Data Base developed by Oak Ridge National Laboratory and the characterization work performed by Pacific Northwest Laboratories and Rochester Gas & Electric. An Expert System developed as a portion of this work is used to assist in the prediction of quantities of NFA hardware and SFD hardware that will be generated by the United States' utilities. Finally, the hardware waste management practices of the United Kingdom, France, Germany, Sweden, and Japan are studied for possible application to the disposal of domestic hardware wastes. As a result of this work, a general classification scheme for NFA and SFD hardware was developed. Only NFA and SFD hardware constructed of zircaloy and experiencing a burnup of less than 70,000 MWD/MTIHM and PWR control rods constructed of stainless steel are considered Low-Level Waste. All other hardware is classified as Greater-Than-Class-C waste.

  5. Why Open Source Hardware matters and why you should care

    OpenAIRE

    Gürkaynak, Frank K.

    2017-01-01

    Open source hardware is currently where open source software was about 30 years ago. The idea is well received by enthusiasts, there is interest and the open source hardware has gained visible momentum recently, with several well-known universities including UC Berkeley, Cambridge and ETH Zürich actively working on large projects involving open source hardware, attracting the attention of companies big and small. But it is still not quite there yet. In this talk, based on my experience on the...

  6. Support for NUMA hardware in HelenOS

    OpenAIRE

    Horký, Vojtěch

    2011-01-01

    The goal of this master thesis is to extend HelenOS operating system with the support for ccNUMA hardware. The text of the thesis contains a brief introduction to ccNUMA hardware, an overview of NUMA features and relevant features of HelenOS (memory management, scheduling, etc.). The thesis analyses various design decisions of the implementation of NUMA support -- introducing the hardware topology into the kernel data structures, propagating this information to user space, thread affinity to ...

  7. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors' approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft-error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the potential for mitigating errors (stemming from soft errors, aging, and process variations) at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  8. Environmental Friendly Coatings and Corrosion Prevention For Flight Hardware Project

    Science.gov (United States)

    Calle, Luz

    2014-01-01

    Identify, test and develop qualification criteria for environmentally friendly corrosion protective coatings and corrosion preventative compounds (CPCs) for flight hardware and ground support equipment.

  9. Open Hardware For CERN's Accelerator Control Systems

    CERN Document Server

    van der Bij, E; Ayass, M; Boccardi, A; Cattin, M; Gil Soriano, C; Gousiou, E; Iglesias Gonsálvez, S; Penacoba Fernandez, G; Serrano, J; Voumard, N; Wlostowski, T

    2011-01-01

    The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned as the modules they will replace cannot be bought anymore or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall around 120 modules are supported that are used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available, while most of them had been specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As system-on-chip interconnect, the public domain Wishbone specification is used. For the renovation, it is considered imperative to have for each board access to the full hardware design and its firmware so that problems could quickly be resolved by CERN engineers or its ...

  10. Magnetic qubits as hardware for quantum computers

    International Nuclear Information System (INIS)

    Tejada, J.; Chudnovsky, E.; Barco, E. del

    2000-01-01

    We propose two potential realisations for quantum bits based on nanometre-scale magnetic particles of large spin S and high-anisotropy molecular clusters. In case (1) the bit-value basis states |0⟩ and |1⟩ are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0⟩, and antisymmetric, |1⟩, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0⟩ and |1⟩. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)
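
    For readers who want the standard algebra behind the case (2) qubit, the following is a textbook-style sketch (not taken from the paper) of a giant spin with easy-axis anisotropy D and transverse field B_x, and the resulting effective two-level system whose gap is the tunnel splitting Δ:

    \mathcal{H} = -D S_z^{2} - g \mu_B B_x S_x , \qquad D > 0,

    \mathcal{H}_{\mathrm{eff}} = \frac{\varepsilon}{2}\,\sigma_z - \frac{\Delta(B_x)}{2}\,\sigma_x , \qquad
    E_1 - E_0 = \sqrt{\varepsilon^{2} + \Delta^{2}(B_x)} ,

    so that for an unbiased cluster (\varepsilon = 0) the qubit states are the symmetric and antisymmetric combinations (\lvert S_z{=}S\rangle \pm \lvert S_z{=}{-S}\rangle)/\sqrt{2}, separated by the tunnel splitting \Delta(B_x), which grows with the transverse field.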

  11. Magnetic qubits as hardware for quantum computers

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, J.; Chudnovsky, E.; Barco, E. del [and others]

    2000-07-01

    We propose two potential realisations for quantum bits based on nanometre-scale magnetic particles of large spin S and high-anisotropy molecular clusters. In case (1) the bit-value basis states |0⟩ and |1⟩ are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0⟩, and antisymmetric, |1⟩, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0⟩ and |1⟩. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  12. Nanorobot Hardware Architecture for Medical Defense

    Directory of Open Access Journals (Sweden)

    Luiz C. Kretly

    2008-05-01

    Full Text Available This work presents a new approach with details on the integrated platform and hardware architecture for nanorobot applications in epidemic control, which should enable real-time in vivo prognosis of biohazard infection. The recent developments in the field of nanoelectronics, with transducers progressively shrinking down to smaller sizes through nanotechnology and carbon nanotubes, are expected to result in innovative biomedical instrumentation possibilities, with new therapies and efficient diagnosis methodologies. The use of integrated systems, smart biosensors, and programmable nanodevices is advancing nanoelectronics, enabling the progressive research and development of molecular machines. It should provide high-precision pervasive biomedical monitoring with real-time data transmission. The use of nanobioelectronics as embedded systems is the natural pathway towards a manufacturing methodology that brings nanorobot applications out of the laboratory as soon as possible. To demonstrate the practical application of medical nanorobotics, a 3D simulation based on clinical data addresses how to integrate communication with nanorobots using RFID, mobile phones, and satellites, applied to long-distance ubiquitous surveillance and health monitoring for troops in conflict zones. Therefore, the current model can also be used to help protect a population against a targeted epidemic disease.

  13. Hardware upgrade for A2 data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Ostrick, Michael; Gradl, Wolfgang; Otte, Peter-Bernd; Neiser, Andreas; Steffen, Oliver; Wolfes, Martin; Koerner, Tito [Institut fuer Kernphysik, Mainz (Germany); Collaboration: A2-Collaboration

    2014-07-01

    The A2 Collaboration uses an energy tagged photon beam which is produced via bremsstrahlung off the MAMI electron beam. The detector system consists of Crystal Ball and TAPS and covers almost the whole solid angle. A frozen-spin polarized target allows to perform high precision measurements of polarization observables in meson photo-production. During the last summer, a major upgrade of the data acquisition system was performed, both on the hardware and the software side. The goal of this upgrade was increased reliability of the system and an improvement in the data rate to disk. By doubling the number of readout CPUs and employing special VME crates with a split backplane, the number of bus accesses per readout cycle and crate was cut by a factor of two, giving almost a factor of two gain in the readout rate. In the course of the upgrade, we also switched most of the detector control system to using the distributed control system EPICS. For the upgraded control system, some new tools were developed to make full use of the capabilities of this decentralised slow control and monitoring system. The poster presents some of the major contributions to this project.

  14. Avaliação do comportamento hidrodinâmico de reator anaeróbio de manta de lodo e fluxo ascendente com diferentes configurações do sistema de distribuição do afluente utilizando fluidodinâmica computacional

    Directory of Open Access Journals (Sweden)

    Diego Bongiorno Cruz

    Full Text Available ABSTRACT Understanding the hydrodynamic behavior of biological reactors can help detect problems associated with operational and design failures, situations that impair treatment efficiency. In this article, computational fluid dynamics (CFD) simulations of two-phase solid-liquid flow were carried out for a pilot-scale (160 L) upflow anaerobic sludge blanket (UASB) reactor, with a hydraulic retention time (HRT) of 10 h and a flow rate of 16 L.h-1. A simplified Euler-Euler model was formulated to simulate the hydrodynamic behavior of the reaction zone, which is influenced by the configuration of the influent distribution system. Four configurations of the influent distribution system were evaluated: one inlet in the central region (1) and two central inlets (2), both upflow; two lateral inlets with radial flow (3); and three downflow inlets (4), using two- and three-dimensional geometries to check for the formation of dead zones, hydraulic short-circuits, and preferential paths. The best hydrodynamic characteristics and the best influent distribution were observed in configuration 4, with a better mixing profile of the sludge with the liquid phase compared with the other configurations. In this configuration, vortex formation was noted in the lower part of the reactor, where the anaerobic sludge concentration is highest, while preferential paths along the reactor walls were observed in configuration 3, indicating inefficient mixing of the influent with the anaerobic sludge. The model showed that the configuration of the influent distribution system significantly influences the hydrodynamic behavior of the UASB reactor.

  15. FPGA BASED HARDWARE KEY FOR TEMPORAL ENCRYPTION

    Directory of Open Access Journals (Sweden)

    B. Lakshmi

    2010-09-01

    Full Text Available In this paper, a novel encryption scheme with a time-based key technique on an FPGA is presented. The time-based key technique ensures that the right key is entered at the right time, and hence the vulnerability of the encryption to brute force attack is eliminated. Presently available encryption systems suffer from brute force attack, and in such a case the time taken for breaking a code depends on the system used for cryptanalysis. The proposed scheme provides an effective method in which time is taken as the second dimension of the key, so that the same system can defend against brute force attack more vigorously. In the proposed scheme, the key is rotated continuously and four bits are drawn from the key, with their concatenated value representing the delay the system has to wait. This forms the time-based key concept. Also, the key-based selection of a function from a pool of functions enhances confusion and diffusion to defend against linear and differential attacks, while the inclusion of the time factor makes brute force attack nearly impossible. In the proposed scheme, the key scheduler is implemented on the FPGA and generates the right key at the right time intervals; it is connected to a NIOS-II processor (a virtual microcontroller implemented in the Altera FPGA) that communicates the keys to the personal computer through JTAG (Joint Test Action Group) communication, and the computer is used to perform encryption (or decryption). In this case the FPGA serves as a hardware key (dongle) for data encryption (or decryption).
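
    A minimal software model of the time-based key idea described above (the key rotates continuously and a four-bit field drawn from it sets the delay before the next sub-key becomes valid) might look as follows in Python; the key width, bit positions, and round count are illustrative choices, not the FPGA design:

    # Software model of a rotating key whose low four bits set a wait interval.
    def rotate_left(key: int, width: int, n: int = 1) -> int:
        return ((key << n) | (key >> (width - n))) & ((1 << width) - 1)

    def key_schedule(key: int, width: int = 128, rounds: int = 8):
        """Yield (sub_key, delay) pairs; delay is the 4-bit field drawn from the key."""
        for _ in range(rounds):
            delay = key & 0xF            # four bits drawn from the rotating key
            yield key, delay
            key = rotate_left(key, width, 4)

    if __name__ == "__main__":
        for sub_key, delay in key_schedule(0x0123456789ABCDEF0123456789ABCDEF):
            print(f"sub-key={sub_key:032x}  wait {delay} time units")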

  16. Bayesian Estimation and Inference using Stochastic Hardware

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2016-03-01

    Full Text Available In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes in the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
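
    The recursive Bayesian update that BEAST evaluates in stochastic hardware can be sketched in floating point as a predict/update loop on a one-dimensional grid; the grid size, transition probabilities, and noise model below are invented for illustration and do not reproduce the stochastic-circuit implementation:

    # Recursive Bayes filter for a 1-D HMM: predict with the transition model,
    # then reweight by the observation likelihood and normalise.
    import numpy as np

    N = 64                                   # positions on a 1-D grid
    positions = np.arange(N)

    # Transition model: target mostly stays put, sometimes steps left/right.
    T = np.zeros((N, N))
    for i in range(N):
        for j, p in ((i - 1, 0.15), (i, 0.70), (i + 1, 0.15)):
            T[i, j % N] += p

    def likelihood(z, sigma=2.0):
        """Observation model: sensor reports z with Gaussian-like noise."""
        d = np.minimum(np.abs(positions - z), N - np.abs(positions - z))
        return np.exp(-0.5 * (d / sigma) ** 2)

    belief = np.full(N, 1.0 / N)             # uniform prior
    for z in [10, 11, 13, 14, 16]:           # noisy observations over time
        belief = belief @ T                  # predict
        belief *= likelihood(z)              # update
        belief /= belief.sum()               # normalise
        print(int(np.argmax(belief)))        # current MAP position estimate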

  17. Sharing open hardware through ROP, the robotic open platform

    NARCIS (Netherlands)

    Lunenburg, J.; Soetens, R.P.T.; Schoenmakers, F.; Metsemakers, P.M.G.; van de Molengraft, M.J.G.; Steinbuch, M.; Behnke, S.; Veloso, M.; Visser, A.; Xiong, R.

    2014-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  18. Sharing open hardware through ROP, the Robotic Open Platform

    NARCIS (Netherlands)

    Lunenburg, J.J.M.; Soetens, R.P.T.; Schoenmakers, Ferry; Metsemakers, P.M.G.; Molengraft, van de M.J.G.; Steinbuch, M.

    2013-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  19. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  20. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.

  1. Hardware/software virtualization for the reconfigurable multicore platform.

    NARCIS (Netherlands)

    Ferger, M.; Al Kadi, M.; Hübner, M.; Koedam, M.L.P.J.; Sinha, S.S.; Goossens, K.G.W.; Marchesan Almeida, Gabriel; Rodrigo Azambuja, J.; Becker, Juergen

    2012-01-01

    This paper presents the Flex Tiles approach for the virtualization of hardware and software for a reconfigurable multicore architecture. The approach enables the virtualization of a dynamic tile-based hardware architecture consisting of processing tiles connected via a network-on-chip and a

  2. Flexible hardware design for RSA and Elliptic Curve Cryptosystems

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Örs, S.B.; Okamoto, T.

    2004-01-01

    This paper presents a scalable hardware implementation of both commonly used public key cryptosystems, RSA and Elliptic Curve Cryptosystem (ECC) on the same platform. The introduced hardware accelerator features a design which can be varied from very small (less than 20 Kgates) targeting wireless

  3. Hardware and software for image acquisition in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, E.L.; Vilar, G.; Silva, H.S.

    1992-01-01

    A system for image acquisition and processing in nuclear medicine is presented, including the hardware and software for acquisition. The hardware consists of an analog-digital conversion card, developed in wire-wrap. Its function is to digitize the analog signals provided by the gamma camera. The acquisitions are made in list or frame mode. (C.G.C.)

  4. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware speci...

  5. A Practical Introduction to Hardware/Software Codesign

    CERN Document Server

    Schaumont, Patrick R

    2013-01-01

    This textbook provides an introduction to embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions. Improvements in this second edition include labs and examples using modern FPGA environments from Xilinx and Altera, which make the material applicable to a greater number of courses where these tools are already in use.  Mo...

  6. Koala: sistema para integração de métodos de predição e análise de estruturas de proteína

    OpenAIRE

    Alexandre Defelicibus

    2016-01-01

    Computational Biology has been developing algorithms applied to relevant problems in Biology. One of these problems is Protein Structure Prediction (PSP). Several methods have been developed in the literature to deal with this problem. However, reproducing and comparing their results has not been an easy task. In this context, the Critical Assessment of protein Structure Prediction (CASP) seeks, among its goals, to carry out such comparisons. In addition, the systems developed...

  7. Hardware Development Process for Human Research Facility Applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The source of hardware requirements is the science community and the HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data is compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating the hardware development. However, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as

  8. Monitoring Particulate Matter with Commodity Hardware

    Science.gov (United States)

    Holstius, David

    Health effects attributed to outdoor fine particulate matter (PM 2.5) rank it among the risk factors with the highest health burdens in the world, annually accounting for over 3.2 million premature deaths and over 76 million lost disability-adjusted life years. Existing PM2.5 monitoring infrastructure cannot, however, be used to resolve variations in ambient PM2.5 concentrations with adequate spatial and temporal density, or with adequate coverage of human time-activity patterns, such that the needs of modern exposure science and control can be met. Small, inexpensive, and portable devices, relying on newly available off-the-shelf sensors, may facilitate the creation of PM2.5 datasets with improved resolution and coverage, especially if many such devices can be deployed concurrently with low system cost. Datasets generated with such technology could be used to overcome many important problems associated with exposure misclassification in air pollution epidemiology. Chapter 2 presents an epidemiological study of PM2.5 that used data from ambient monitoring stations in the Los Angeles basin to observe a decrease of 6.1 g (95% CI: 3.5, 8.7) in population mean birthweight following in utero exposure to the Southern California wildfires of 2003, but was otherwise limited by the sparsity of the empirical basis for exposure assessment. Chapter 3 demonstrates technical potential for remedying PM2.5 monitoring deficiencies, beginning with the generation of low-cost yet useful estimates of hourly and daily PM2.5 concentrations at a regulatory monitoring site. The context (an urban neighborhood proximate to a major goods-movement corridor) and the method (an off-the-shelf sensor costing approximately USD $10, combined with other low-cost, open-source, readily available hardware) were selected to have special significance among researchers and practitioners affiliated with contemporary communities of practice in public health and citizen science. As operationalized by

  9. Sistema computacional para índices de cárie dentária: banco de dados e análise estatística A computer software system for dental caries rates: data bases and statistical analysis

    Directory of Open Access Journals (Sweden)

    Maria Lucia M.M. Sundefeld

    1996-10-01

    Full Text Available Apresenta-se um sistema computacional, denominado ICADPLUS, desenvolvido para elaboração de banco de dados, tabulação de dados, cálculo do índice CPO e análise estatística para estimação de intervalos de confiança e comparação de resultados de duas populações. Tem como objetivo apresentar método simplificado para atender necessidades de serviços de saúde na área de odontologia processando fichas utilizadas por cirurgiões dentistas em levantamentos epidemiológicos de cárie dentária. A característica principal do sistema é a dispensa de profissional especializado na área de odontologia e computação, exigindo o conhecimento mínimo de digitação por parte do usuário, pois apresenta "menus" simples e claros como também relatórios padronizados, sem possibilidade de erro. Possui opções para fichas de CPO segundo Klein e Palmer, CPO proposto pela OMS, CPOS segundo Klein, Palmer e Knutson, e ceo. A validação do sistema foi feita por comparação com outros métodos, permitindo recomendar sua adoção. A computer software system, named ICADPLUS, is presented; it was developed to create and tabulate data bases, calculate the DMF rate, perform statistical comparison of two populations, and calculate confidence intervals. The system offers a simplified method for health services in the area of dentistry, using dental records to carry out epidemiological surveys of tooth decay. The system's main feature is that it does not require specialists either in the area of dentistry or computing, demanding of the user only basic data-entry typing skills, since it presents simple menus and standardized reports, with no possibility of error. The system comprises four steps: Data-entry, Processing, Reports and Utilities. In Data-entry the regions, towns and institutions supplying the data are initially registered, once only. Each record receives a code number, and it is this code which is available to the user through a Function Key, by
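
    The two statistical tasks the abstract mentions (a mean DMF index with a confidence interval, and the comparison of two populations) can be sketched with standard textbook formulas as follows; this is not the ICADPLUS code, and the example data are invented:

    # Mean DMFT (CPO) index, normal-approximation CI, and two-group comparison.
    import math

    def dmft_summary(dmft_counts, z=1.96):
        """Mean DMFT per person and its ~95% confidence interval."""
        n = len(dmft_counts)
        mean = sum(dmft_counts) / n
        var = sum((x - mean) ** 2 for x in dmft_counts) / (n - 1)
        half = z * math.sqrt(var / n)
        return mean, (mean - half, mean + half)

    def compare_populations(a, b, z=1.96):
        """Normal-approximation CI for the difference of mean DMFT between groups."""
        ma, (la, ua) = dmft_summary(a, z)
        mb, (lb, ub) = dmft_summary(b, z)
        se = math.sqrt(((ua - ma) / z) ** 2 + ((ub - mb) / z) ** 2)
        diff = ma - mb
        return diff, (diff - z * se, diff + z * se)

    if __name__ == "__main__":
        group_a = [2, 3, 5, 1, 0, 4, 6, 2, 3, 3]
        group_b = [1, 0, 2, 1, 3, 0, 1, 2, 1, 0]
        print(dmft_summary(group_a))
        print(compare_populations(group_a, group_b))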

  10. Evaluation of a computer model to simulate water table response to subirrigation Avaliação de um modelo computacional para simular a resposta do lençol freático à subirrigação

    Directory of Open Access Journals (Sweden)

    Jadir Aparecido Rosa

    2002-12-01

    Full Text Available The objective of this work was to evaluate the water flow computer model, WATABLE, using experimental field observations on water table management plots from a site located near Hastings, FL, USA. The experimental field had scale drainage systems with provisions for subirrigation with buried microirrigation and conventional seepage irrigation systems. Potato (Solanum tuberosum L. growing seasons from years 1996 and 1997 were used to simulate the hydrology of the area. Water table levels, precipitation, irrigation and runoff volumes were continuously monitored. The model simulated the water movement from a buried microirrigation line source and the response of the water table to irrigation, precipitation, evapotranspiration, and deep percolation. The model was calibrated and verified by comparing simulated results with experimental field observations. The model performed very well in simulating seasonal runoff, irrigation volumes, and water table levels during crop growth. The two-dimensional model can be used to investigate different irrigation strategies involving water table management control. Applications of the model include optimization of the water table depth for each growth stage, and duration, frequency, and rate of irrigation.O objetivo deste trabalho foi avaliar o modelo computacional WATABLE usando-se dados de campo obtidos em uma área experimental em manejo de lençol freático, localizada em Hastings, FL, EUA. Na área experimental, estavam instalados um sistema de drenagem e sistemas de irrigação por subsuperfície com irrigação localizada e por canais. Ciclos de cultivo de batata (Solanum tuberosum L., nos anos de 1996 e 1997, foram usados para a simulação da hidrologia da área. Profundidades do lençol freático, chuvas, irrigação e escorrimento superficial foram monitorados constantemente. O modelo simulou o movimento da água a partir de uma linha de irrigação localizada enterrada, e a resposta do nível do len

  11. Targeting multiple heterogeneous hardware platforms with OpenCL

    Science.gov (United States)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware
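
    One of the practices mentioned above, specializing a kernel at JIT-compile time through the OpenCL C preprocessor, can be sketched with pyopencl roughly as follows; the kernel, the device test, and the build option are illustrative assumptions, not taken from the paper:

    # Specialise an OpenCL kernel per device via preprocessor defines at JIT time.
    import numpy as np
    import pyopencl as cl

    SRC = """
    __kernel void scale(__global const float *x, __global float *y, const float a)
    {
        int i = get_global_id(0);
        #if USE_FMA
        y[i] = fma(a, x[i], 0.0f);
        #else
        y[i] = a * x[i];
        #endif
    }
    """

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    dev = ctx.devices[0]

    # Device-dependent choice (here simply: prefer fma on GPUs), baked in by the JIT.
    use_fma = 1 if (dev.type & cl.device_type.GPU) else 0
    prg = cl.Program(ctx, SRC).build(options=[f"-DUSE_FMA={use_fma}"])

    x = np.arange(16, dtype=np.float32)
    mf = cl.mem_flags
    x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
    y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

    prg.scale(queue, x.shape, None, x_buf, y_buf, np.float32(2.0))
    y = np.empty_like(x)
    cl.enqueue_copy(queue, y, y_buf)
    print(y)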

  12. Hardware Implementation of a Bilateral Subtraction Filter

    Science.gov (United States)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) An image pixel pipeline with a 9×9-pixel window generator; b) An array of processing elements; c) An adder tree; d) A smoothing-and-delaying unit; and e) A subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for
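
    A software sketch of the operation described above (bilateral smoothing over a 9×9 window followed by subtraction of the smoothed value from the input pixel) might look as follows; the Gaussian weight definitions are the usual textbook ones, and the FPGA design's exact weights may differ:

    # Bilateral smoothing over a 9x9 window, then subtract the smoothed image
    # from the input to suppress low-frequency background while preserving edges.
    import numpy as np

    def bilateral_subtract(img, radius=4, sigma_s=3.0, sigma_r=0.1):
        h, w = img.shape
        pad = np.pad(img, radius, mode="edge")
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # fixed 9x9 kernel
        out = np.empty_like(img)
        for y in range(h):
            for x in range(w):
                win = pad[y:y + 2*radius + 1, x:x + 2*radius + 1]
                rng = np.exp(-((win - img[y, x])**2) / (2 * sigma_r**2))
                wgt = spatial * rng
                out[y, x] = img[y, x] - (wgt * win).sum() / wgt.sum()
        return out

    if __name__ == "__main__":
        test = np.random.rand(32, 32).astype(np.float32)
        print(bilateral_subtract(test).shape)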

  13. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents a novel work on the hardware realization of symmetric image encryption utilizing chaos-based continuous systems as pseudo-random number generators. Digital implementation of chaotic systems results in serious degradations in the dynamics of the system. Such defects are eliminated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed and the performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with hardware and throughput performance surpassing known encryption systems.
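
    As a generic illustration of chaos-based symmetric encryption (not the thesis's continuous-time systems or its post-processing technique), a logistic-map keystream XORed with the image bytes can be sketched as follows:

    # Logistic-map keystream XOR cipher: the same call encrypts and decrypts.
    import numpy as np

    def logistic_keystream(n_bytes, x0, r=3.99, burn_in=1000):
        x = x0
        for _ in range(burn_in):          # discard the transient
            x = r * x * (1.0 - x)
        out = np.empty(n_bytes, dtype=np.uint8)
        for i in range(n_bytes):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) & 0xFF
        return out

    def encrypt(image, x0):
        ks = logistic_keystream(image.size, x0)
        return (image.reshape(-1) ^ ks).reshape(image.shape)

    if __name__ == "__main__":
        img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
        enc = encrypt(img, 0.4321)
        dec = encrypt(enc, 0.4321)        # XOR stream cipher: decrypt with same key
        assert np.array_equal(img, dec)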

  14. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA-based accelerator. The proposed architecture provides excellent flexibility with respect to the different audio applications implemented, high quality audio, and an energy efficient solution.

  15. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.

  16. Hardware support for collecting performance counters directly to memory

    Science.gov (United States)

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.

  17. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented and the consequences of partitioning according to each of these are analyzed. The analysis shows the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation...

  18. Inteligência computacional aplicada à previsão de vencedores em partidas de tênis

    Directory of Open Access Journals (Sweden)

    Mateus de Araujo Fernandes

    2016-09-01

    Full Text Available Predicting the winners of tennis matches has several practical uses, since the results of one round of a tournament determine which matches will take place in the following round; this is valuable to tournament organizers and the media, helping to allocate matches to the most suitable courts and time slots, allowing attendance and audience forecasts, and even supporting merchandising decisions. In this work, some of the main factors influencing match predictability are studied and, based on this analysis, two different approaches are proposed for computing, before a match starts, a confidence value for the victory of each competitor. The first is based on a fuzzy inference system, exploiting its ability to reproduce expert knowledge from a mix of information. The second employs a neural network, with its characteristic of extracting attributes from examples. Both predictors take as inputs data on the players' previous performance, which in this case attempt to capture their short-, medium-, and long-term form, as well as their affinity with the different court surfaces. The results obtained are encouraging, showing significant gains over a simple comparison based on the Association of Tennis Professionals entry ranking.
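
    A minimal sketch of the neural-network approach, mapping form and surface-affinity feature differences to a pre-match win probability, could look as follows; the features, synthetic data, and network size are invented for illustration:

    # Small MLP mapping per-match feature differences to P(player A wins).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Feature vector per match: differences (player A minus player B) of
    # short-, medium-, long-term win rates and win rate on the match surface.
    X = rng.uniform(-1, 1, size=(500, 4))
    # Synthetic labels: stronger recent form and surface affinity tend to win.
    logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] + 1.2 * X[:, 3]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

    upcoming = np.array([[0.2, 0.1, -0.05, 0.3]])      # one upcoming match
    print("P(player A wins) =", clf.predict_proba(upcoming)[0, 1])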

  19. Lingüística computacional y esteganografía lingüística. Distribuyendo información oculta con recursos mínimos

    Directory of Open Access Journals (Sweden)

    Muñoz Muñoz, Alfonso

    2013-04-01

    Full Text Available Computational linguistics and linguistic steganography could allow the design of useful systems for the protection/privacy of digital communications and for digital language watermarking. However, building these systems is not always possible, since it requires a series of conditions that are not always met. This article investigates whether it is possible to design procedures to hide information in natural language using minimal linguistic and computational resources. An algorithm is proposed and implemented, arguing for the usefulness and security of such proposals. La lingüística computacional puede ser aprovechada junto a la ciencia de la esteganografía lingüística para diseñar sistemas útiles en la protección/privacidad de las comunicaciones digitales y en el marcado digital de textos. No obstante, para poder llevar a cabo tal tarea se requiere de una serie de condiciones que no siempre se dan. En este artículo se investiga si es posible diseñar procedimientos que permitan ocultar información en lenguaje natural utilizando la mínima cantidad de recursos tanto lingüísticos como computacionales. Se propone un algoritmo y se implementa, razonando posteriormente a favor de la utilidad y la seguridad de propuestas de este tipo.
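
    A toy example of hiding bits in lexical choices with minimal linguistic resources (one bit per word that has a two-element synonym set) is sketched below; this is not the algorithm proposed in the article, and the synonym table is invented:

    # Synonym-substitution steganography: the chosen variant encodes one bit.
    SYNONYMS = {
        "big": ["big", "large"],
        "quick": ["quick", "fast"],
        "smart": ["smart", "clever"],
    }
    CANONICAL = {v: k for k, vs in SYNONYMS.items() for v in vs}

    def embed(cover_words, bits):
        bits = iter(bits)
        out = []
        for w in cover_words:
            key = CANONICAL.get(w.lower())
            if key is not None:
                b = next(bits, None)
                out.append(SYNONYMS[key][b] if b is not None else w)
            else:
                out.append(w)
        return out

    def extract(stego_words):
        return [SYNONYMS[CANONICAL[w.lower()]].index(w.lower())
                for w in stego_words if w.lower() in CANONICAL]

    cover = "the big dog made a quick and smart move".split()
    stego = embed(cover, [1, 0, 1])
    print(" ".join(stego))      # ... large ... quick ... clever ...
    print(extract(stego))       # [1, 0, 1]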

  20. Otimização de entropia: implementação computacional dos princípios MaxEnt e MinxEnt

    Directory of Open Access Journals (Sweden)

    Mattos Rogério Silva de

    2002-01-01

    Full Text Available The entropy optimization principles MaxEnt of Jaynes (1957a,b) and MinxEnt of Kullback (1959) find applications in several areas of scientific research. Both involve the constrained optimization of entropy measures that are intrinsically nonlinear functions of probabilities. Since they constitute nonlinear programming problems, their solutions require iterative search algorithms and, in addition, the non-negativity and sum-to-one conditions on the probabilities restrict the solution space in a particular way. The article presents in detail (with the help of two flowcharts) an efficient computational implementation of these two principles for the case of linear constraints, with a prior check for the existence of a solution to the optimization problems. The authors also provide easy-to-use routines developed in the MatLab language.
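
    The MaxEnt problem with linear constraints that the article discusses can be sketched in a few lines with SciPy (maximize the Shannon entropy of a discrete distribution subject to normalization and a prescribed mean); this uses a generic solver rather than the authors' MatLab routines:

    # MaxEnt with linear constraints: the classic "die with mean 4.5" example.
    import numpy as np
    from scipy.optimize import minimize

    values = np.arange(1, 7)          # support of a die
    target_mean = 4.5                 # linear moment constraint

    def neg_entropy(p):
        p = np.clip(p, 1e-12, None)
        return np.sum(p * np.log(p))  # minimising this maximises the entropy

    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ values - target_mean},
    ]
    p0 = np.full(6, 1 / 6)
    res = minimize(neg_entropy, p0, bounds=[(0, 1)] * 6, constraints=constraints)

    print(np.round(res.x, 4))         # exponential-family shape with mean 4.5
    print("mean =", res.x @ values)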

  1. Sistema computacional de realidad aumentada para la solidificación del aprendizaje en la educación básica

    Directory of Open Access Journals (Sweden)

    Manuel Alexander Ponce Tubay

    2018-02-01

    Full Text Available This work develops a computational tool to improve the various learning processes by introducing new ICTs with augmented reality, in order to awaken greater interest and interaction among students, thus contributing to the teaching-learning process at the school. The research is aimed at improving each of the aspects required for teaching and learning, streamlining and innovating the way of learning with software such as augmented nature. The Unidad Educativa Cesar Lucas has twenty teachers, four of whom focus on the natural sciences area, and each teacher uses learning strategies or methods according to the content or the context in which classes take place. This made it possible to show that teaching-learning processes help teachers guide their different topics for the students; with the creation of a personalized, advanced augmented reality tool, these processes would innovate the way of learning, applying this technology with better features for greater learning in real time.

  2. Rotina computacional para a determinação da velocidade de sedimentação das partículas do solo em suspensão no escoamento superficial Computational routine for the determination of the sedimentation velocity of the soil particles in the drain

    Directory of Open Access Journals (Sweden)

    Luiz F. C. de Oliveira

    2005-04-01

    Full Text Available O presente trabalho teve como objetivo desenvolver uma rotina computacional para a determinação da velocidade de deposição de partículas em suspensão no escoamento superficial, verificar sua aplicação por intermédio de modelo de transporte de sedimentos e comparar os resultados obtidos com dados experimentais. Empregou-se na rotina o processo iterativo de Newton-Raphson para a solução das equações empregadas na determinação da velocidade de deposição de partículas em suspensão no escoamento superficial, e na solução da equação do transporte de sedimentos empregou-se a técnica das diferenças finitas. Essas rotinas foram empregadas na implementação do modelo MTSES (Modelo para Transporte de Solutos no Solo e no Escoamento Superficial). As velocidades de queda das partículas obtidas pela rotina desenvolvida, em média, foram superestimadas, com erro relativo médio de 0,63%, o que possibilitou a utilização da rotina desenvolvida no MTSES. O modelo MTSES superestimou o total de sedimentos transportados pelo escoamento superficial para todas as intensidades de precipitação empregadas neste estudo, com variações porcentuais de 15,6 a 58,3%. The present work had as its objective to develop a computational routine for the determination of the sedimentation velocity of particles suspended in overland flow, to verify its application through a sediment transport model, and to compare the results obtained with experimental data. The Newton-Raphson iterative process was used in the routine to solve the equations employed in the determination of the sedimentation velocity, and the finite-difference technique was applied to the solution of the sediment transport equation. These routines were used in the implementation of the model MTSES (Model for solute transport in the soil and in overland flow). The sedimentation velocities obtained by the developed routine were, on average, overestimated, with a mean relative error of 0
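
    A generic Newton-Raphson iteration for a particle settling velocity, in the spirit of the routine described above, is sketched below; the drag correlation (Schiller-Naumann) and the numerical derivative are choices made here for illustration, and the paper's own equation set is not reproduced:

    # Newton-Raphson solution of the drag-versus-submerged-weight force balance.
    def settling_velocity(d, rho_s, rho_f=1000.0, mu=1.0e-3, g=9.81,
                          v0=1e-4, tol=1e-10, max_iter=100):
        def residual(v):
            re = rho_f * v * d / mu
            cd = 24.0 / re * (1.0 + 0.15 * re**0.687)       # Schiller-Naumann drag
            return (4.0 / 3.0) * g * d * (rho_s - rho_f) / rho_f - cd * v * v

        v = v0
        for _ in range(max_iter):
            f = residual(v)
            h = 1e-8 * max(v, 1e-8)
            df = (residual(v + h) - f) / h                   # numerical derivative
            step = f / df
            v -= step
            if abs(step) < tol:
                break
        return v

    if __name__ == "__main__":
        # 50-micron quartz-like particle (density 2650 kg/m3) settling in water
        print(settling_velocity(d=50e-6, rho_s=2650.0))      # ~2.2e-3 m/s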

  3. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  4. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  5. Hardware device to physical structure binding and authentication

    Science.gov (United States)

    Hamlet, Jason R.; Stein, David J.; Bauer, Todd M.

    2013-08-20

    Detection and deterrence of device tampering and subversion may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a binding of the hardware device and a physical structure. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates an internal PUF value. Binding logic is coupled to receive the internal PUF value, as well as an external PUF value associated with the physical structure, and generates a binding PUF value, which represents the binding of the hardware device and the physical structure. The cryptographic fingerprint unit also includes a cryptographic unit that uses the binding PUF value to allow a challenger to authenticate the binding.
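
    A software analogy of the binding idea (combining an internal device PUF response with an external structure PUF response into a binding value used for authentication) is sketched below; the actual invention is a circuit, and the hash/HMAC construction here is only an analogy:

    # Derive a binding value from two PUF responses and answer a challenge with it.
    import hashlib
    import hmac
    import os

    def binding_value(internal_puf: bytes, external_puf: bytes) -> bytes:
        """Combine the device and structure responses into a binding value."""
        return hashlib.sha256(internal_puf + external_puf).digest()

    def respond_to_challenge(binding: bytes, challenge: bytes) -> bytes:
        """Prove knowledge of the binding value to a challenger."""
        return hmac.new(binding, challenge, hashlib.sha256).digest()

    if __name__ == "__main__":
        internal = os.urandom(32)           # stands in for the on-die PUF output
        external = os.urandom(32)           # stands in for the structure's PUF
        binding = binding_value(internal, external)

        challenge = os.urandom(16)
        tag = respond_to_challenge(binding, challenge)
        # A challenger holding the enrolled binding value verifies the response.
        assert hmac.compare_digest(tag, respond_to_challenge(binding, challenge))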

  6. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-01-01

    This thesis presents a novel work on hardware realization of symmetric image encryption utilizing chaos based continuous systems as pseudo random number generators. Digital implementation of chaotic systems results in serious degradations

  7. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

    Full Text Available Computer graphics system performance is increasing faster than that of any other computing application. Algorithms for clipping lines against convex polygons and other lines have been studied for a long time and many research papers have been published so far. In spite of the latest graphics hardware developments and the significant increase in performance, clipping is still a bottleneck of any graphics system, so its implementation in hardware is essential for real-time applications. In this paper the clipping operation is discussed, and a hardware implementation of the line clipping algorithm is presented, formulated and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is the positional code generator unit and the second is the clipping unit. Finally, it is worth mentioning that the designed unit is capable of clipping 232524 line segments per second.
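
    The "positional code generator plus clipping unit" split described above matches outcode-based (Cohen-Sutherland style) clipping, which can be sketched in software as follows; the window bounds are arbitrary and the FPGA design itself is not reproduced:

    # Outcode generation and line clipping against an axis-aligned window.
    INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

    def outcode(x, y, xmin, ymin, xmax, ymax):
        code = INSIDE
        if x < xmin: code |= LEFT
        elif x > xmax: code |= RIGHT
        if y < ymin: code |= BOTTOM
        elif y > ymax: code |= TOP
        return code

    def clip(x0, y0, x1, y1, xmin=0.0, ymin=0.0, xmax=10.0, ymax=10.0):
        c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
        c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
        while True:
            if not (c0 | c1):           # both endpoints inside: trivially accept
                return (x0, y0, x1, y1)
            if c0 & c1:                 # both outside on the same side: reject
                return None
            c = c0 or c1                # pick an endpoint that lies outside
            if c & TOP:
                x, y = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0), ymax
            elif c & BOTTOM:
                x, y = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0), ymin
            elif c & RIGHT:
                x, y = xmax, y0 + (y1 - y0) * (xmax - x0) / (x1 - x0)
            else:                       # LEFT
                x, y = xmin, y0 + (y1 - y0) * (xmin - x0) / (x1 - x0)
            if c == c0:
                x0, y0, c0 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
            else:
                x1, y1, c1 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)

    if __name__ == "__main__":
        print(clip(-5.0, 5.0, 15.0, 5.0))    # clipped to (0, 5, 10, 5)
        print(clip(-5.0, -5.0, -1.0, 20.0))  # None: wholly left of the window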

  8. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations on some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. It focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts : iSCSI and other technologies and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers using a gigabit ethernet network. It covers block access technologies (iSCSI, hyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using linux software RAID and IDE cards and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  9. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-01-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally

  10. Improvement of hardware basic testing : Identification and development of a scripted automation tool that will support hardware basic testing

    OpenAIRE

    Rask, Ulf; Mannestig, Pontus

    2002-01-01

    In the ever-increasing pace of development, circuits and hardware are no exception. Hardware designs grow and circuits get more complex at the same time as market pressure lowers the expected time-to-market. In this rush, verification methods often lag behind. Hardware manufacturers must be aware of the importance of total verification if they want to avoid quality flaws and broken deadlines, which in the long run will lead to delayed time-to-market, bad publicity and a decreasing market sha...

  11. Basics of spectroscopic instruments. Hardware of NMR spectrometer

    International Nuclear Information System (INIS)

    Sato, Hajime

    2009-01-01

    NMR is a powerful tool for structure analysis of small molecules, natural products, biological macromolecules, synthesized polymers, samples from material science and so on. Magnetic Resonance Imaging (MRI) is applicable to plants and animals. Because most NMR experiments can be done in an automation mode, one can forget the hardware of NMR spectrometers. It would be good to understand the features and performance of NMR spectrometers. Here I present the hardware of a modern NMR spectrometer which is fully equipped with digital technology. (author)

  12. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  13. Utilizing IXP1200 hardware and software for packet filtering

    OpenAIRE

    Lindholm, Jeffery L.

    2004-01-01

    As network processors have advanced in speed and efficiency they have become more and more complex in both hardware and software configurations. Intel's IXP1200 is one of these new network processors that has been given to different universities worldwide to conduct research on. The goal of this thesis is to take the first step in starting that research by providing a stable system that can provide a reliable platform for further research. This thesis introduces the fundamental hardware of In...

  14. Security challenges and opportunities in adaptive and reconfigurable hardware

    OpenAIRE

    Costan, Victor Marius; Devadas, Srinivas

    2011-01-01

    We present a novel approach to building hardware support for providing strong security guarantees for computations running in the cloud (shared hardware in massive data centers), while maintaining the high performance and low cost that make cloud computing attractive in the first place. We propose augmenting regular cloud servers with a Trusted Computation Base (TCB) that can securely perform high-performance computations. Our TCB achieves cost savings by spreading functionality across two pa...

  15. Review of Maxillofacial Hardware Complications and Indications for Salvage

    OpenAIRE

    Hernandez Rosa, Jonatan; Villanueva, Nathaniel L.; Sanati-Mehrizy, Paymon; Factor, Stephanie H.; Taub, Peter J.

    2015-01-01

    From 2002 to 2006, more than 117,000 facial fractures were recorded in the U.S. National Trauma Database. These fractures are commonly treated with open reduction and internal fixation. While in place, the hardware facilitates successful bony union. However, when postoperative complications occur, the plates may require removal before bony union. Indications for salvage versus removal of the maxillofacial hardware are not well defined. A literature review was performed to identify instances w...

  16. Testing Microgravity Flight Hardware Concepts on the NASA KC-135

    Science.gov (United States)

    Motil, Susan M.; Harrivel, Angela R.; Zimmerli, Gregory A.

    2001-01-01

    This paper provides an overview of utilizing the NASA KC-135 Reduced Gravity Aircraft for the Foam Optics and Mechanics (FOAM) microgravity flight project. The FOAM science requirements are summarized, and the KC-135 test-rig used to test hardware concepts designed to meet the requirements are described. Preliminary results regarding foam dispensing, foam/surface slip tests, and dynamic light scattering data are discussed in support of the flight hardware development for the FOAM experiment.

  17. Accelerator Technology: Injection and Extraction Related Hardware: Kickers and Septa

    CERN Document Server

    Barnes, M J; Mertens, V

    2013-01-01

    This document is part of Subvolume C 'Accelerators and Colliders' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '8.7 Injection and Extraction Related Hardware: Kickers and Septa' of the Chapter '8 Accelerator Technology' with the content: 8.7 Injection and Extraction Related Hardware: Kickers and Septa 8.7.1 Fast Pulsed Systems (Kickers) 8.7.2 Electrostatic and Magnetic Septa

  18. Learning Machines Implemented on Non-Deterministic Hardware

    OpenAIRE

    Gupta, Suyog; Sindhwani, Vikas; Gopalakrishnan, Kailash

    2014-01-01

    This paper highlights new opportunities for designing large-scale machine learning systems as a consequence of blurring traditional boundaries that have allowed algorithm designers and application-level practitioners to stay -- for the most part -- oblivious to the details of the underlying hardware-level implementations. The hardware/software co-design methodology advocated here hinges on the deployment of compute-intensive machine learning kernels onto compute platforms that trade-off deter...

  19. Hardware control system using modular software under RSX-11D

    International Nuclear Information System (INIS)

    Kittell, R.S.; Helland, J.A.

    1978-01-01

    A modular software system used to control extensive hardware is described. The development, operation, and experience with this software are discussed. Included are the methods employed to implement this system while taking advantage of the Real-Time features of RSX-11D. Comparisons are made between this system and an earlier nonmodular system. The controlled hardware includes magnet power supplies, stepping motors, DVM's, and multiplexors, and is interfaced through CAMAC. 4 figures

  20. MRI monitoring of focused ultrasound sonications near metallic hardware.

    Science.gov (United States)

    Weber, Hans; Ghanouni, Pejman; Pascal-Tenorio, Aurea; Pauly, Kim Butts; Hargreaves, Brian A

    2018-07-01

    To explore the temperature-induced signal change in two-dimensional multi-spectral imaging (2DMSI) for fast thermometry near metallic hardware to enable MR-guided focused ultrasound surgery (MRgFUS) in patients with implanted metallic hardware. 2DMSI was optimized for temperature sensitivity and applied to monitor focused ultrasound surgery (FUS) sonications near metallic hardware in phantoms and ex vivo porcine muscle tissue. Further, we evaluated its temperature sensitivity for in vivo muscle in patients without metallic hardware. In addition, we performed a comparison of temperature sensitivity between 2DMSI and conventional proton-resonance-frequency-shift (PRFS) thermometry at different distances from metal devices and different signal-to-noise ratios (SNR). 2DMSI thermometry enabled visualization of short ultrasound sonications near metallic hardware. Calibration using in vivo muscle yielded a constant temperature sensitivity for temperatures below 43 °C. For an off-resonance coverage of ± 6 kHz, we achieved a temperature sensitivity of 1.45%/K, resulting in a minimum detectable temperature change of ∼2.5 K for an SNR of 100 with a temporal resolution of 6 s per frame. The proposed 2DMSI thermometry has the potential to allow MR-guided FUS treatments of patients with metallic hardware and therefore expand its reach to a larger patient population. Magn Reson Med 80:259-271, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
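
    To see how the quoted numbers fit together, the sketch below relates the reported sensitivity (1.45%/K) and SNR (100) to the stated ~2.5 K minimum detectable change. The detection threshold of a few noise standard deviations is an assumption made for illustration, not a figure taken from the abstract.

```python
# Back-of-the-envelope check of the 2DMSI thermometry numbers quoted above.
# Assumption (not from the abstract): a temperature change counts as
# "detectable" when the induced signal change exceeds k_sigma noise
# standard deviations.
sensitivity = 0.0145   # fractional signal change per kelvin (1.45 %/K)
snr = 100.0            # signal-to-noise ratio
k_sigma = 3.6          # assumed detection threshold, in noise std deviations

noise_floor = 1.0 / snr                        # fractional noise level
dT_min = k_sigma * noise_floor / sensitivity   # minimum detectable change [K]
print(f"minimum detectable temperature change ~ {dT_min:.1f} K")  # ~2.5 K
```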

  1. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in domains such as surveillance or smart environments. This paper presents the development of a multiple-camera setup with a joint view that observes moving persons in a site. It focuses on a geometry-based approach to establish correspondence among different views. The expensive computational parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time traversing the TCP/IP stack, in both the software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces the network latency by up to a factor of 100 compared to the software ORB.

  2. Compiling quantum circuits to realistic hardware architectures using temporal planners

    Science.gov (United States)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem and generate a test suite of compilation problems for QAOA circuits of various sizes targeting a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.

  3. Development and improvement of a computational system - Ikapp - to support motor rehabilitation

    Directory of Open Access Journals (Sweden)

    Déborah Marques de Oliveira

    2013-06-01

    Full Text Available The applicability of Interactive Technologies (ITs) in healthcare, particularly in motor rehabilitation, has been a clinical alternative used to encourage greater patient engagement in a recovery process that is at times exhausting. The present study describes a technological tool, Ikapp, to support motor rehabilitation, a tool that seeks to expand the possibilities of the commercial devices already available in the clinical context. Sixty (60) volunteers were invited to interact with the Ikapp setup and game interfaces in order to examine its functionality, degree of acceptance, demands and limitations for improvement. The results of the present study show high satisfaction rates among the participants. Furthermore, the results showed that, from the participants' perspective, Ikapp is a tool that adds therapeutic value through playfulness and motivation.

  4. Importance of using properties evaluated as a function of temperature for the computational simulation of refractory ceramics

    Directory of Open Access Journals (Sweden)

    Akiyoshi M. M.

    2002-01-01

    Full Text Available This work presents a systematic study of the influence of using temperature-dependent properties as input to a finite element analysis (FEA) computational simulation program aimed at determining the temperature and stress profiles in a refractory anchor. To this end, the thermal conductivity (k), specific heat (c), coefficient of linear thermal expansion (alphaL) and elastic modulus (E) were evaluated as functions of temperature. A two-level factorial design and analysis of variance (ANOVA) were used to evaluate the influence of the interactions among the temperature-dependent properties on the temperature and normal stress profiles resulting from the computational simulation. This study reinforces the need to evaluate the properties as functions of temperature when supplying a computational simulation program, with thermal conductivity and specific heat standing out for better determination of the temperature profile, and the coefficient of linear thermal expansion (alphaL) and elastic modulus (E) for the evaluation of the stress profile.

  5. Flight Hardware Virtualization for On-Board Science Data Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  6. Computational modeling for the prediction of liquid-liquid equilibrium of fatty systems

    OpenAIRE

    Gláucia de Freitas Hirata

    2011-01-01

    Abstract: In the oil industry, the removal of free fatty acids is the most important step of the purification process. It is generally carried out by chemical or physical refining. In some cases, however, conventional refining leads to undesirable results. Deacidification by liquid-liquid extraction has proven to be a technically viable alternative. In the studies carried out, equilibrium data are determined and modeled for each type of oil separately, resulting in models that...

  7. GOSH! A roadmap for open-source science hardware

    CERN Multimedia

    Stefania Pandolfi

    2016-01-01

    The goal of the Gathering for Open Science Hardware (GOSH! 2016), held from 2 to 5 March 2016 at IdeaSquare, was to lay the foundations of the open-source hardware for science movement.   The participants in the GOSH! 2016 meeting gathered in IdeaSquare. (Image: GOSH Community) “Despite advances in technology, many scientific innovations are held back because of a lack of affordable and customisable hardware,” says François Grey, a professor at the University of Geneva and coordinator of Citizen Cyberlab – a partnership between CERN, the UN Institute for Training and Research and the University of Geneva – which co-organised the GOSH! 2016 workshop. “This scarcity of accessible science hardware is particularly obstructive for citizen science groups and humanitarian organisations that don’t have the same economic means as a well-funded institution.” Instead, open sourcing science hardware co...

  8. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2x10^6 voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine

  9. A computational analysis of prehistoric lines: geometric designs and language

    Directory of Open Access Journals (Sweden)

    Víctor Manuel LONGA

    2013-06-01

    Full Text Available The usual approach in palaeoanthropology and archaeology has been to analyse prehistoric remains from the perspective of the behaviour with which they could have been associated - symbolic, technological, social, etc. As regards language, the presence of symbolic objects in the archaeological record has been taken as an automatic indicator of the existence of complex language in prehistory. This paper presents a very different approach: analysing prehistoric remains from the perspective of the mental computational processes and capabilities required to produce those objects. This perspective sets aside the 'semantics' of the pieces - their possible symbolic or representational character - in order to focus on the analysis of purely formal features that reveal a computational complexity similar to that of language. From this perspective, the paper analyses (1) the geometric designs produced in the Middle and Lower Palaeolithic of Eurasia by species such as Homo neanderthalensis and perhaps Homo heidelbergensis, and (2) the geometric designs produced during the African Middle Stone Age by Anatomically Modern Humans. The comparison of both types of designs in computational terms makes it possible to infer the type of language associated with these species.

  10. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-05-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally implementing chaotic systems. Subsequently, some techniques to eliminate such defects, including the ultimately adopted scheme, are listed and explained in detail. Moreover, the thesis describes original work on the design of an encryption system to encrypt MPEG-2 video streams. Information about the MPEG-2 standard that fits this design context is presented. Then, the security of the proposed system is exhaustively analyzed and the performance is compared with other reported systems, showing superiority in performance and security. The thesis focuses more on the hardware and the circuit aspect of the system's design. The system is realized on a Xilinx Virtex-4 FPGA with hardware parameters and throughput performance surpassing conventional encryption systems.
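
    As a minimal illustration of the general idea of driving a symmetric stream cipher from a chaotic pseudo-random number generator, the sketch below uses a discrete logistic map in Python with a crude quantization step. It is illustrative only: the thesis realizes continuous chaotic systems in FPGA hardware and encrypts MPEG-2 streams, and the map, parameters and quantization here are assumptions, not the adopted scheme.

```python
# Toy chaos-based stream cipher: a logistic map drives a keystream that is
# XORed with the data. The map, its parameters and the byte quantization are
# illustrative assumptions, not the scheme implemented in the thesis.

def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x)."""
    x = x0
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)  # crude quantization to one byte
    return bytes(out)

def xor_crypt(data, x0=0.3141592, r=3.99):
    ks = logistic_keystream(x0, r, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"MPEG-2 payload bytes ..."
ct = xor_crypt(msg)          # encrypt
assert xor_crypt(ct) == msg  # the same keystream decrypts the data
```

    The crude floating-point quantization above is exactly the kind of digital-implementation artefact whose degradations the thesis analyses and mitigates.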

  11. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  12. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2017-01-01

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of an improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed, yielding closed form expressions for both PGS and IGS based transmission schemes. HWD systems that employ IGS are proven to efficiently combat the self-interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Finally, the degradation due to non-ideal hardware transceivers and the compensation achieved by the IGS scheme are quantified through suitable numerical results.

  13. Hardware controls for the STAR experiment at RHIC

    International Nuclear Information System (INIS)

    Reichhold, D.; Bieser, F.; Bordua, M.; Cherney, M.; Chrin, J.; Dunlop, J.C.; Ferguson, M.I.; Ghazikhanian, V.; Gross, J.; Harper, G.; Howe, M.; Jacobson, S.; Klein, S.R.; Kravtsov, P.; Lewis, S.; Lin, J.; Lionberger, C.; LoCurto, G.; McParland, C.; McShane, T.; Meier, J.; Sakrejda, I.; Sandler, Z.; Schambach, J.; Shi, Y.; Willson, R.; Yamamoto, E.; Zhang, W.

    2003-01-01

    The STAR detector sits in a high radiation area when operating normally; therefore it was necessary to develop a robust system to remotely control all hardware. The STAR hardware controls system monitors and controls approximately 14,000 parameters in the STAR detector. Voltages, currents, temperatures, and other parameters are monitored. Effort has been minimized by the adoption of experiment-wide standards and the use of pre-packaged software tools. The system is based on the Experimental Physics and Industrial Control System (EPICS). VME processors communicate with subsystem-based sensors over a variety of field busses, with High-level Data Link Control (HDLC) being the most prevalent. Other features of the system include interfaces to accelerator and magnet control systems, a web-based archiver, and C++-based communication between STAR online, run control and hardware controls and their associated databases. The system has been designed for easy expansion as new detector elements are installed in STAR

  14. Plutonium Protection System (PPS). Volume 2. Hardware description. Final report

    International Nuclear Information System (INIS)

    Miyoshi, D.S.

    1979-05-01

    The Plutonium Protection System (PPS) is an integrated safeguards system developed by Sandia Laboratories for the Department of Energy, Office of Safeguards and Security. The system is designed to demonstrate and test concepts for the improved safeguarding of plutonium. Volume 2 of the PPS final report describes the hardware elements of the system. The major areas containing hardware elements are the vault, where plutonium is stored, the packaging room, where plutonium is packaged into Container Modules, the Security Operations Center, which controls movement of personnel, the Material Accountability Center, which maintains the system data base, and the Material Operations Center, which monitors the operating procedures in the system. References are made to documents in which details of the hardware items can be found

  15. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah

    2017-02-22

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of an improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed, yielding closed form expressions for both PGS and IGS based transmission schemes. HWD systems that employ IGS are proven to efficiently combat the self-interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Finally, the degradation due to non-ideal hardware transceivers and the compensation achieved by the IGS scheme are quantified through suitable numerical results.

  16. Optimized hardware design for the divertor remote handling control system

    Energy Technology Data Exchange (ETDEWEB)

    Saarinen, Hannu [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland)], E-mail: hannu.saarinen@tut.fi; Tiitinen, Juha; Aha, Liisa; Muhammad, Ali; Mattila, Jouni; Siuko, Mikko; Vilenius, Matti [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland); Jaervenpaeae, Jorma [VTT Systems Engineering, Tekniikankatu 1, 33720 Tampere (Finland); Irving, Mike; Damiani, Carlo; Semeraro, Luigi [Fusion for Energy, Josep Pla 2, Torres Diagonal Litoral B3, 08019 Barcelona (Spain)

    2009-06-15

    A key ITER maintenance activity is the exchange of the divertor cassettes. One of the major focuses of the EU Remote Handling (RH) programme has been the study and development of the remote handling equipment necessary for divertor exchange. The current major step in this programme involves the construction of a full scale physical test facility, namely DTP2 (Divertor Test Platform 2), in which to demonstrate and refine the RH equipment designs for ITER using prototypes. The major objective of the DTP2 project is proof-of-concept studies of various RH devices, but it is also important to define principles for standardizing control hardware and methods around the ITER maintenance equipment. This paper focuses on describing the control system hardware design optimization that is taking place at DTP2. Here there will be two RH movers, namely the Cassette Multifunctional Mover (CMM) and the Cassette Toroidal Mover (CTM), with assisting water hydraulic force feedback manipulators (WHMAN) located aboard each mover. The idea here is to use common Real Time Operating Systems (RTOS), measurement and control IO-cards etc. for all maintenance devices and to standardize sensors and control components as much as possible. In this paper, the new optimized DTP2 control system hardware design and some initial experimentation with the new DTP2 RH control system platform are presented. The proposed new approach is able to fulfil the functional requirements for both Mover and Manipulator control systems. Since the new control system hardware design has a reduced architecture, there are a number of benefits compared to the old approach. The simplified hardware solution enables the use of a single software development environment and a single communication protocol. This will result in easier maintainability of the software and hardware, less dependence on trained personnel, easier training of operators and hence reduced development costs for ITER RH.

  17. Electrical, electronics, and digital hardware essentials for scientists and engineers

    CERN Document Server

    Lipiansky, Ed

    2012-01-01

    A practical guide for solving real-world circuit board problems Electrical, Electronics, and Digital Hardware Essentials for Scientists and Engineers arms engineers with the tools they need to test, evaluate, and solve circuit board problems. It explores a wide range of circuit analysis topics, supplementing the material with detailed circuit examples and extensive illustrations. The pros and cons of various methods of analysis, fundamental applications of electronic hardware, and issues in logic design are also thoroughly examined. The author draws on more than tw

  18. Automating an EXAFS facility: hardware and software considerations

    International Nuclear Information System (INIS)

    Georgopoulos, P.; Sayers, D.E.; Bunker, B.; Elam, T.; Grote, W.A.

    1981-01-01

    The basic design considerations for computer hardware and software, applicable not only to laboratory EXAFS facilities, but also to synchrotron installations, are reviewed. Uniformity and standardization of both hardware configurations and program packages for data collection and analysis are heavily emphasized. Specific recommendations are made with respect to choice of computers, peripherals, and interfaces, and guidelines for the development of software packages are set forth. A description of two working computer-interfaced EXAFS facilities is presented which can serve as prototypes for future developments. 3 figures

  19. Surface moisture measurement system hardware acceptance test report

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-05-28

    This document summarizes the results of the hardware acceptance test for the Surface Moisture Measurement System (SMMS). This test verified that the mechanical and electrical features of the SMMS functioned as designed and that the unit is ready for field service. The bulk of hardware testing was performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. The SMMS was developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks.

  20. Hardware Evaluation of the Horizontal Exercise Fixture with Weight Stack

    Science.gov (United States)

    Newby, Nate; Leach, Mark; Fincke, Renita; Sharp, Carwyn

    2009-01-01

    HEF with weight stack seems to be a very sturdy and reliable exercise device that should function well in a bed rest training setting. A few improvements should be made to both the hardware and software to improve usage efficiency, but largely, this evaluation has demonstrated HEF's robustness. The hardware offers loading to muscles, bones, and joints, potentially sufficient to mitigate the loss of muscle mass and bone mineral density during long-duration bed rest campaigns. With some minor modifications, the HEF with weight stack equipment provides the best currently available means of performing squat, heel raise, prone row, bench press, and hip flexion/extension exercise in a supine orientation.

  1. Computer organization and design the hardware/software interface

    CERN Document Server

    Hennessy, John L

    1994-01-01

    Computer Organization and Design: The Hardware/Software Interface presents the interaction between hardware and software at a variety of levels, which offers a framework for understanding the fundamentals of computing. This book focuses on the concepts that are the basis for computers.Organized into nine chapters, this book begins with an overview of the computer revolution. This text then explains the concepts and algorithms used in modern computer arithmetic. Other chapters consider the abstractions and concepts in memory hierarchies by starting with the simplest possible cache. This book di

  2. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-01-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  3. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-05-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  4. Integrated circuit authentication hardware Trojans and counterfeit detection

    CERN Document Server

    Tehranipoor, Mohammad; Zhang, Xuehui

    2013-01-01

    This book describes techniques to verify the authenticity of integrated circuits (ICs). It focuses on hardware Trojan detection and prevention and counterfeit detection and prevention. The authors discuss a variety of detection schemes and design methodologies for improving Trojan detection techniques, as well as various attempts at developing hardware Trojans in IP cores and ICs. While describing existing Trojan detection methods, the authors also analyze their effectiveness in disclosing various types of Trojans, and demonstrate several architecture-level solutions. 

  5. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
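
    For orientation only, the snippet below shows the basic round-trip offset estimation that most software clock synchronization schemes build on (a generic Cristian-style sketch assuming symmetric message delays; it is not the hardware-assisted algorithm proposed in this paper).

```python
# Generic round-trip clock offset estimation (not the paper's algorithm).
# A node timestamps a request (t0), the reference node replies with its own
# clock reading (t_ref), and the requester timestamps the reply (t1).

def estimate_offset(t0, t_ref, t1):
    """Estimate remote-minus-local clock offset, assuming symmetric delays."""
    rtt = t1 - t0
    return (t_ref + rtt / 2.0) - t1   # remote clock at t1 minus local clock at t1

# Example: request sent at local time 10.000 s, reference answered "12.010 s",
# reply received at local time 10.020 s -> the remote clock is ~2.0 s ahead.
print(estimate_offset(10.000, 12.010, 10.020))
```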

  6. HyperCAL3D, a computational tool to support the teaching-learning process of descriptive geometry

    Directory of Open Access Journals (Sweden)

    Fábio Gonçalves Teixeira

    2013-12-01

    Full Text Available This work presents HyperCAL3D, an application to support the teaching of Descriptive Geometry through the study of solid objects. The methodology used for its implementation and the main functionalities of the application are described. A selection of concepts was made in order to determine the functional structure that the software should satisfy. From this, the main functionalities were modelled through vector geometry processes equivalent to those used in Descriptive Geometry. Among the main functionalities presented, the following stand out: the projection process, the representation of hidden lines in the three-dimensional model and in the projections, successive auxiliary views in real time and in 3D, representation in épura (orthographic double projection), and the intersection process. All these tools are implemented in an application that supports the students' learning process and the teachers' didactic procedures.

  7. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    Full Text Available This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the basis dictionary image at each node to determine the branch taken, and the path the feature region image takes is saved as the descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
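
    To make the descriptor construction concrete, here is a toy software sketch of the idea: descend a binary vocabulary tree by comparing Hamming distances to the dictionary patterns at each node and record the sequence of branch decisions as the descriptor. The tree contents and bit layout below are invented for illustration and are not the authors' learned BASIS dictionary.

```python
# Toy sketch of a TreeBASIS-style descriptor: walk a binary vocabulary tree by
# Hamming distance and use the path as the descriptor. The tree below is a
# hand-made example; real basis dictionary images are learned from data.

def hamming(a, b):
    return bin(a ^ b).count("1")

# Each node holds two quantized "basis dictionary" patterns and two children
# (None marks a leaf).
toy_tree = {
    "patterns": (0b11110000, 0b00001111),
    "children": (
        {"patterns": (0b11000000, 0b00110000), "children": (None, None)},
        {"patterns": (0b00001100, 0b00000011), "children": (None, None)},
    ),
}

def tree_descriptor(region_bits, node):
    """Return the branch decisions (0/1) from root to leaf for one feature."""
    path = []
    while node is not None:
        left, right = node["patterns"]
        branch = 0 if hamming(region_bits, left) <= hamming(region_bits, right) else 1
        path.append(branch)
        node = node["children"][branch]
    return path

print(tree_descriptor(0b11100000, toy_tree))  # e.g. [0, 0]
```

    Matching two features then reduces to comparing their paths, which is why no floating point arithmetic is needed.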

  8. Hardware Algorithms For Tile-Based Real-Time Rendering

    NARCIS (Netherlands)

    Crisu, D.

    2012-01-01

    In this dissertation, we present the GRAphics AcceLerator (GRAAL) framework for developing embedded tile-based rasterization hardware for mobile devices, meant to accelerate real-time 3-D graphics (OpenGL compliant) applications. The goal of the framework is a low-cost, low-power, high-performance

  9. Hardware and software techniques for boiler operation and management

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Hiroshi (Hirakawa Iron Works, Ltd., Osaka (Japan))

    1989-04-01

    A study was conducted on the requirements for an easily operable boiler from the viewpoints of hardware and software technology. The relations among efficiency, energy saving, and economics, and the control of total emissions for low-NOx operation, were explained, together with suggested directions for developing the necessary hardware and software. 8 figs.

  10. Chip-Multiprocessor Hardware Locks for Safety-Critical Java

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Puffitsch, Wolfgang; Schoeberl, Martin

    2013-01-01

    and may void a task set's schedulability. In this paper we present a hardware locking mechanism to reduce the synchronization overhead. The solution is implemented for the chip-multiprocessor version of the Java Optimized Processor in the context of safety-critical Java. The implementation is compared...

  11. PACE: A dynamic programming algorithm for hardware/software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper presents the PACE partitioning algorithm which is used in the LYCOS co-synthesis system for partitioning control/dataflow graphs into hardware and software parts. The algorithm is a dynamic programming algorithm which solves both the problem of minimizing system execution time...
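
    Purely to illustrate the flavour of dynamic-programming partitioning, the sketch below solves a generic knapsack-style formulation: each task runs either in software or in hardware, hardware use is limited by an area budget, and total execution time is minimized. This is not the PACE algorithm itself or the LYCOS cost model; all numbers are invented.

```python
# Generic DP sketch for hardware/software partitioning (not PACE itself):
# each task runs in software (sw_time) or in hardware (hw_time, hw_area),
# and we minimize total execution time under a hardware area budget.

def partition(tasks, area_budget):
    """tasks: list of (sw_time, hw_time, hw_area). Returns (best_time, mapping)."""
    INF = float("inf")
    # best[a] = minimal total time with exactly a units of hardware area used
    best = [0.0] + [INF] * area_budget
    choices = [[] if a == 0 else None for a in range(area_budget + 1)]
    for sw_t, hw_t, area in tasks:
        new_best = [INF] * (area_budget + 1)
        new_choices = [None] * (area_budget + 1)
        for a in range(area_budget + 1):
            if best[a] == INF:
                continue
            # option 1: keep the task in software (no extra area)
            if best[a] + sw_t < new_best[a]:
                new_best[a] = best[a] + sw_t
                new_choices[a] = choices[a] + ["SW"]
            # option 2: move the task to hardware if the area budget allows
            if a + area <= area_budget and best[a] + hw_t < new_best[a + area]:
                new_best[a + area] = best[a] + hw_t
                new_choices[a + area] = choices[a] + ["HW"]
        best, choices = new_best, new_choices
    a_opt = min(range(area_budget + 1), key=lambda a: best[a])
    return best[a_opt], choices[a_opt]

# Three tasks as (software time, hardware time, hardware area); budget of 5.
print(partition([(10, 2, 3), (6, 1, 4), (4, 3, 2)], area_budget=5))  # (11, ['HW', 'SW', 'HW'])
```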

  12. A selective logging mechanism for hardware transactional memory systems

    OpenAIRE

    Lupon Navazo, Marc; Magklis, Grigorios; González Colás, Antonio María

    2011-01-01

    Log-based Hardware Transactional Memory (HTM) systems offer an elegant solution to handle speculative data that overflow transactional L1 caches. By keeping the pre-transactional values on a software-resident log, speculative values can be safely moved across the memory hierarchy, without requiring expensive searches on L1 misses or commits.

  13. Hardware, Languages, and Architectures for Defense Against Hostile Operating Systems

    Science.gov (United States)

    2015-05-14

    complex instruction sets. The scale of this problem is multiplied by the diversity of hardware platforms in deployment today. We developed a novel approach...www.seclab.cs.sunysb.edu/seclab/lbc/. Professor King has been invited to and has given lectures at the NSA, Sandia, DARPA, Intel, Microsoft, Samsung

  14. Hardware prototype with component specification and usage description

    NARCIS (Netherlands)

    Azam, Tre; Aswat, Soyeb; Klemke, Roland; Sharma, Puneet; Wild, Fridolin

    2017-01-01

    Following on from D3.1 and the final selection of sensors, in this D3.2 report we present the first version of the experience capturing hardware prototype design and API architecture taking into account the current limitations of the Hololens not being available until early next month in time for

  15. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    Full Text Available This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining high classification correct rate and high speed computation.

  16. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining high classification correct rate and high speed computation. PMID:24189331
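
    For context, the generalized Hebbian algorithm mentioned above can be written in a few lines of NumPy. This is the textbook update rule (Sanger's rule) on synthetic data, not the fixed-point FPGA circuit described in the paper.

```python
# Textbook generalized Hebbian algorithm (Sanger's rule) for extracting the
# leading principal components of spike waveforms; a software sketch, not the
# paper's hardware implementation.
import numpy as np

def gha(data, n_components=2, lr=1e-3, epochs=50, seed=0):
    """data: (n_samples, dim) array. Returns a (n_components, dim) weight matrix."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            y = W @ x  # component outputs for one sample
            # Sanger's rule: dW = lr * (y x^T - lower_triangular(y y^T) W)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Synthetic "waveforms": 500 snippets of 8 samples with a correlated first half.
rng = np.random.default_rng(1)
waves = rng.normal(size=(500, 8))
waves[:, :4] += 2.0 * rng.normal(size=(500, 1))
W = gha(waves, n_components=2)
features = waves @ W.T   # per-spike feature vectors fed to the clustering step
print(features.shape)    # (500, 2)
```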

  17. Another way of doing RSA cryptography in hardware

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Honary, B.

    2001-01-01

    In this paper we describe an efficient and secure hardware implementation of the RSA cryptosystem. Modular exponentiation is based on Montgomery's method without any modular reduction, achieving the optimal bound. The presented systolic array architecture is scalable in several parameters, which makes

  18. Foundations of digital signal processing theory, algorithms and hardware design

    CERN Document Server

    Gaydecki, Patrick

    2005-01-01

    An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.

  19. Hardware Descriptive Languages: An Efficient Approach to Device ...

    African Journals Online (AJOL)

    Contemporarily, owing to astronomical advancements in the very large scale integration (VLSI) market segments, hardware engineers are now focusing on how to develop their new digital system designs in programmable languages like the very high speed integrated circuit hardware description language (VHDL) and Verilog ...

  20. Detecting System of Nested Hardware Virtual Machine Monitor

    Directory of Open Access Journals (Sweden)

    Artem Vladimirovich Iuzbashev

    2015-03-01

    Full Text Available A method for detecting a nested hardware virtual machine monitor is proposed in this work. The method is based on an HVM timing attack: if an HVM is present in the system, the number of distinct execution-time values observed for instruction sequences increases. We used this property as the indicator in our detection.

  1. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

    Full text: The timing factor is very important for medical imaging systems, which can nowadays be synchronized by vital human signals, like heartbeats or breathing. The use of hardware-implemented devices in such a system has advantages given the high speed of information processing combined with arbitrarily low cost on the market. This article describes a hardware system based on electronic programmable logic (an FPGA, model Cyclone II from the ALTERA Corporation). The hardware was implemented on the UP3 ALTERA Kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points in each, of the Shepp and Logan phantom created by MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0-65535. Also, the normalization factor must be observed so as not to saturate the image during the reconstruction and filtering process. The test shows that it is possible in principle to build CT image reconstruction systems for any reasonable amount of input data by arranging the parallel operation of hardware units as tested here. However, further studies are necessary for a better understanding of the error propagation from tomographic projections to the reconstructed image within the implemented method. (author)

  2. Lab at Home: Hardware Kits for a Digital Design Lab

    Science.gov (United States)

    Oliver, J. P.; Haim, F.

    2009-01-01

    An innovative laboratory methodology for an introductory digital design course is presented. Instead of having traditional lab experiences, where students have to come to school classrooms, a "lab at home" concept is proposed. Students perform real experiments in their own homes, using hardware kits specially developed for this purpose. They…

  3. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a

  4. Enabling Self-Organization in Embedded Systems with Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Christophe Bobda

    2009-01-01

    Full Text Available We present a methodology based on self-organization to manage resources in networked embedded systems based on reconfigurable hardware. Two points are detailed in this paper: the monitoring system used to analyse the system, and the Local Marketplaces Global Symbiosis (LMGS) concept defined for self-organization of dynamically reconfigurable nodes.

  5. Generalized Distance Transforms and Skeletons in Graphics Hardware

    NARCIS (Netherlands)

    Strzodka, R.; Telea, A.

    2004-01-01

    We present a framework for computing generalized distance transforms and skeletons of two-dimensional objects using graphics hardware. Our method is based on the concept of footprint splatting. Combining different splats produces weighted distance transforms for different metrics, as well as the
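
    As a software point of reference for the splatting idea, the brute-force sketch below lets every object pixel "splat" its distance footprint over the image and keeps the per-pixel minimum, which yields the distance transform. A Euclidean metric is assumed; this is a CPU analogue, not the graphics-hardware implementation of the paper.

```python
# Brute-force analogue of distance-transform-by-splatting: every object pixel
# contributes a distance footprint and the per-pixel minimum over all
# footprints is the distance transform. Euclidean metric assumed.
import numpy as np

def distance_transform_splat(mask):
    """mask: boolean 2-D array, True at object pixels. Returns the distance field."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.full((h, w), np.inf)
    for py, px in zip(*np.nonzero(mask)):
        footprint = np.hypot(ys - py, xs - px)  # splat of one object point
        dist = np.minimum(dist, footprint)      # accumulate with a min-blend
    return dist

mask = np.zeros((5, 7), dtype=bool)
mask[2, 3] = True
mask[0, 0] = True
print(np.round(distance_transform_splat(mask), 2))
```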

  6. 3D IBFV : hardware-accelerated 3D flow visualization

    NARCIS (Netherlands)

    Telea, A.C.; Wijk, van J.J.

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique presented by van Wijk (2001) for 2D flow visualization in two main directions. First, we decompose the 3D

  7. Smart Home Hardware-in-the-Loop Testing

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Annabelle

    2017-07-12

    This presentation provides a high-level overview of NREL's smart home hardware-in-the-loop testing. It was presented at the Fourth International Workshop on Grid Simulator Testing of Energy Systems and Wind Turbine Powertrains, held April 25-26, 2017, hosted by NREL and Clemson University at the Energy Systems Integration Facility in Golden, Colorado.

  8. Motion compensation in digital subtraction angiography using graphics hardware.

    Science.gov (United States)

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of this technique. Automated, fast and accurate motion compensation is therefore required. To meet this requirement, we first examine a method explicitly designed to detect local motion in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that computation with integer precision may already be sufficient.
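
    To make the block-matching step concrete, the sketch below does exhaustive block matching on the CPU with a sum-of-absolute-differences criterion. The paper itself evaluates a histogram-based similarity measure with an optimized search on the GPU, so both the criterion and the search strategy here are simplifying assumptions.

```python
# Simplified block matching for motion estimation between a mask image and a
# contrast image: exhaustive search with a sum-of-absolute-differences (SAD)
# criterion (the paper uses a histogram-based measure on graphics hardware).
import numpy as np

def match_block(ref, cur, y, x, block=16, search=8):
    """Find the displacement of the block at (y, x) in ref within cur."""
    template = ref[y:y + block, x:x + block].astype(np.float32)
    best, best_dxy = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > cur.shape[0] or xx + block > cur.shape[1]:
                continue
            cand = cur[yy:yy + block, xx:xx + block].astype(np.float32)
            sad = np.abs(template - cand).sum()
            if sad < best:
                best, best_dxy = sad, (dy, dx)
    return best_dxy  # displacement to apply before subtraction

ref = np.random.default_rng(0).integers(0, 255, (64, 64))
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))  # simulate patient motion
print(match_block(ref, cur, 24, 24))            # -> (2, -3)
```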

  9. Hardware availability calculations and results of the IFMIF accelerator facility

    International Nuclear Information System (INIS)

    Bargalló, Enric; Arroyo, Jose Manuel; Abal, Javier; Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne; Weber, Moisés; Podadera, Ivan; Grespan, Francesco; Fagotti, Enrico; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design

  10. Hardware availability calculations and results of the IFMIF accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne [Commissariat à l’Energie Atomique, Saclay (France); Weber, Moisés; Podadera, Ivan [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Grespan, Francesco; Fagotti, Enrico [Istituto Nazionale di Fisica Nucleare, Legnaro (Italy); De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.
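
    As a minimal numerical sketch of how hardware availability figures of this kind are combined in a RAMI analysis, the snippet below uses the standard steady-state formulas; all MTBF/MTTR values are invented for illustration and are not IFMIF or LIPAc data.

```python
# Generic steady-state availability arithmetic as used in RAMI analyses:
# A = MTBF / (MTBF + MTTR); series chains multiply availabilities, while
# redundant (parallel) branches multiply unavailabilities. All numbers are
# invented for illustration.

def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

def series(*avs):
    out = 1.0
    for a in avs:
        out *= a
    return out

def parallel(*avs):
    unavail = 1.0
    for a in avs:
        unavail *= (1.0 - a)
    return 1.0 - unavail

rf_module = availability(mtbf_h=2000, mttr_h=24)    # hypothetical subsystem
power_supply = availability(mtbf_h=5000, mttr_h=8)  # hypothetical subsystem
print(series(rf_module, power_supply))                        # simple chain
print(series(parallel(rf_module, rf_module), power_supply))   # redundant RF stage
```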

  11. Combining hardware and simulation for datacenter scaling studies

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Pilimon, Artur; Thrane, Jakob

    2017-01-01

    and simulation to illustrate the scalability and performance of datacenter networks. We simulate a Datacenter network and interconnect it with real world traffic generation hardware. Analysis of the introduced packet conversion and virtual queueing delays shows that the conversion efficiency is at the order...

  12. Hiding State in CλaSH Hardware Descriptions

    NARCIS (Netherlands)

    Gerards, Marco Egbertus Theodorus; Baaij, C.P.R.; Kuper, Jan; Kooijman, Matthijs

    Synchronous hardware can be modelled as a mapping from input and state to output and a new state; such mappings are referred to as transition functions. It is natural to use a functional language to implement transition functions. The CλaSH compiler is capable of translating transition functions to

  13. Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    In this paper we propose a roadmap for hardware reconfiguration of industrial collaborative robots. As a flexible resource, the collaborative robot will often need transitioning to a new task. Our goal is that this transitioning should be done by the shop floor operators, not highly specialized...

  14. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  15. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.
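
    As a reference point for the image-warping formulation of FBP described above, a tiny NumPy backprojection loop is shown below (parallel-beam, nearest-neighbour sampling, and with the sinogram filtering step omitted). The GPU version effectively replaces the inner interpolation-and-sum with drawing textured polygons into an accumulation buffer.

```python
# Minimal parallel-beam backprojection: each projection is "smeared" back
# across the image along its view angle and the contributions are summed.
# The ramp filtering of the sinogram (the F in FBP) is omitted for brevity.
import numpy as np

def backproject(sinogram, angles_deg, size):
    """sinogram: (n_angles, n_detectors) array; returns a size x size image."""
    n_det = sinogram.shape[1]
    recon = np.zeros((size, size))
    # image coordinates centred on the rotation axis
    ys, xs = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # detector coordinate of every pixel for this view angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + (n_det - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]   # nearest-neighbour "warp and add"
    return recon / len(angles_deg)

# Toy example: 60 views of a random synthetic sinogram
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = np.random.default_rng(0).random((60, 100))
print(backproject(sino, angles, size=64).shape)  # (64, 64)
```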

  16. Hardware realization of an SVM algorithm implemented in FPGAs

    Science.gov (United States)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for the hardware realization of space vector modulation (SVM) of state function switching in a matrix converter (MC), oriented towards implementation in a single field programmable gate array (FPGA). In MC the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. The traditional computation algorithms usually involve digital signal processors (DSPs), which must control a large number of power transistors (18 transistors and 18 independent PWM outputs) and "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular since the computed operations may be executed much faster and more efficiently due to the nature of digital devices (especially their concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, adequate arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters or proper sector selectors (for output voltages and input current), are presented in detail. The proposed technique has been implemented as a design described with the use of the Verilog hardware description language. The preliminary results of logic implementation oriented on the Xilinx FPGA (particularly, a low-cost device from the Artix-7 family from Xilinx was used) are also presented.
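
    As a rough illustration of the CORDIC step mentioned above, the sketch below computes sine and cosine using only the rotation-mode shift-and-add recurrence; the floating-point arithmetic and iteration count are simplifications of what would be fixed-point FPGA logic, and none of it is taken from the authors' Verilog.

```python
# Illustrative CORDIC in rotation mode: sin/cos from shift-and-add iterations.
import math

def cordic_sin_cos(angle_rad, iterations=16):
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]   # elementary rotation angles
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))          # pre-scale by the CORDIC gain
    x, y, z = gain, 0.0, angle_rad                              # start on the x-axis
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                             # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return y, x                                                 # (sin, cos)
```

    The recurrence converges only for angles up to roughly ±1.74 rad, so a full SVM implementation would first reduce the sector angle into that range.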

  17. Towards automated construction of dependable software/hardware systems

    Energy Technology Data Exchange (ETDEWEB)

    Yakhnis, A.; Yakhnis, V. [Pioneer Technologies & Rockwell Science Center, Albuquerque, NM (United States)

    1997-11-01

    This report contains viewgraphs on the automated construction of dependable computer architecture systems. The outline of this report is: examples of software/hardware systems; dependable systems; partial delivery of dependability; proposed approach; removing obstacles; advantages of the approach; criteria for success; current progress of the approach; and references.

  18. Improving Reliability, Security, and Efficiency of Reconfigurable Hardware Systems (Habilitation)

    NARCIS (Netherlands)

    Ziener, Daniel

    2017-01-01

    In this treatise, my research on methods to improve efficiency, reliability, and security of reconfigurable hardware systems, i.e., FPGAs, through partial dynamic reconfiguration is outlined. The efficiency of reconfigurable systems can be improved by loading optimized data paths on-the-fly on an

  19. Evaluation of In-House versus Contract Computer Hardware Maintenance

    International Nuclear Information System (INIS)

    Wright, H.P.

    1981-09-01

    The issue of In-House versus Contract Computer Hardware Maintenance is one which every organization who uses computers must resolve. This report discusses the advantages and disadvantages of both approaches to computer maintenance, the costs involved (based on the current AGNS computer inventory), and the AGNS maintenance experience to date. A recommendation on an appropriate approach for AGNS is made

  20. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.

  1. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
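
    As a hedged sketch of the edge-directed idea, the following CPU reference evaluates an SAD disparity search only at pixels flagged by an edge detector, which is exactly the search-space reduction described above; the function and parameter names are illustrative and not taken from the paper.

```python
# Edge-restricted SAD stereo sketch: disparities are computed only where
# edge_mask is set, so non-edge pixels are skipped entirely.
import numpy as np

def sad_disparity_at_edges(left, right, edge_mask, max_disp=64, win=3):
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            if not edge_mask[y, x]:
                continue                               # restriction to edge pixels
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(int)
                cost = np.abs(patch - cand).sum()      # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```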

  2. Detection of hardware backdoor through microcontroller read time ...

    African Journals Online (AJOL)

    The objective of this work, christened “HABA” (Hardware Backdoor Aware), is to collect data samples of a series of read times of microcontrollers embedded on military-grade equipment and correlate them with previously stored expected-behavior read-time samples so as to detect abnormality or otherwise. I was motivated by the ...

  3. Hardware Transactional Memory Optimization Guidelines, Applied to Ordered Maps

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal; Probst, Christian W.; Karlsson, Sven

    2015-01-01

    efficiently requires reasoning about those differences. In this paper we present 5 guidelines for applying hardware transactional memory efficiently, and apply the guidelines to BT-trees, a concurrent ordered map. Evaluating BT-trees on standard benchmarks shows that they are up to 5.3 times faster than...

  4. Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements

    Science.gov (United States)

    Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.

    2013-01-01

    This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to a hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface area where the connector is mounted by a few millimeters using a pedestal, and then wrapping with conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach to meet EMC requirements would be to design an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch. Implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for a new project. With acceptance of heritage, the design would already be established (circuit board layout and components have already been pre-determined), and hence any radiated emissions mitigation techniques would only be applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage, or modify its chassis design so that it would have a better chance of meeting the new project's radiated emissions requirements.

  5. Fast and Reliable Mouse Picking Using Graphics Hardware

    Directory of Open Access Journals (Sweden)

    Hanli Zhao

    2009-01-01

    Full Text Available Mouse picking is the most commonly used intuitive operation to interact with 3D scenes in a variety of 3D graphics applications. High performance for such an operation is necessary in order to provide users with fast responses. This paper proposes a fast and reliable mouse picking algorithm using graphics hardware for 3D triangular scenes. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space based ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying the hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which accelerates the picking efficiency. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.
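
    For reference, a standard object-space ray/triangle test of the kind run per triangle in the geometry shader is the Moller-Trumbore algorithm; the CPU sketch below is only an analogue of that test, not the paper's shader code.

```python
# Moller-Trumbore ray/triangle intersection: returns the hit distance t or None.
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray is parallel to the triangle plane
    inv_det = 1.0 / det
    t_vec = origin - v0
    u = np.dot(t_vec, p) * inv_det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv_det   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det          # distance along the ray
    return t if t > eps else None
```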

  6. Hardware emulation of Memristor based Ternary Content Addressable Memory

    KAUST Repository

    Bahloul, Mohamed A.

    2017-12-13

    MTCAM (Memristor Ternary Content Addressable Memory) is a special purpose storage medium in which data could be retrieved based on the stored content. Using Memristors as the main storage element provides the potential of achieving higher density and more efficient solutions than conventional methods. A key missing item in the validation of such approaches is the widespread availability of hardware emulation platforms that can provide reliable and repeatable performance statistics. In this paper, we present a hardware MTCAM emulation based on a 2-Transistors-2-Memristors (2T2M) bit-cell. It builds on a bipolar memristor model with storing and fetching capabilities based on the actual current-voltage behaviour. The proposed design offers a flexible verification environment with quick design revisions, high execution speeds and powerful debugging techniques. The proposed design is modeled using VHDL and prototyped on a Xilinx Virtex® FPGA.

  7. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Science.gov (United States)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  8. The LISA Pathfinder interferometry-hardware and system testing

    Energy Technology Data Exchange (ETDEWEB)

    Audley, H; Danzmann, K; MarIn, A Garcia; Heinzel, G; Monsky, A; Nofrarias, M; Steier, F; Bogenstahl, J [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik und Universitaet Hannover, 30167 Hannover (Germany); Gerardi, D; Gerndt, R; Hechenblaikner, G; Johann, U; Luetzow-Wentzky, P; Wand, V [EADS Astrium GmbH, Friedrichshafen (Germany); Antonucci, F [Dipartimento di Fisica, Universita di Trento and INFN, Gruppo Collegato di Trento, 38050 Povo, Trento (Italy); Armano, M [European Space Astronomy Centre, European Space Agency, Villanueva de la Canada, 28692 Madrid (Spain); Auger, G; Binetruy, P [APC UMR7164, Universite Paris Diderot, Paris (France); Benedetti, M [Dipartimento di Ingegneria dei Materiali e Tecnologie Industriali, Universita di Trento and INFN, Gruppo Collegato di Trento, Mesiano, Trento (Italy); Boatella, C, E-mail: antonio.garcia@aei.mpg.de [CNES, DCT/AQ/EC, 18 Avenue Edouard Belin, 31401 Toulouse, Cedex 9 (France)

    2011-05-07

    Preparations for the LISA Pathfinder mission have reached an exciting stage. Tests of the engineering model (EM) of the optical metrology system have recently been completed at the Albert Einstein Institute, Hannover, and flight model tests are now underway. Significantly, they represent the first complete integration and testing of the space-qualified hardware and are the first tests on an optical system level. The results and test procedures of these campaigns will be utilized directly in the ground-based flight hardware tests, and subsequently during in-flight operations. In addition, they allow valuable testing of the data analysis methods using the MATLAB-based LTP data analysis toolbox. This paper presents an overview of the results from the EM test campaign that was successfully completed in December 2009.

  9. Verification of OpenSSL version via hardware performance counters

    Science.gov (United States)

    Bruska, James; Blasingame, Zander; Liu, Chen

    2017-05-01

    Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of any unknown downgrade attacks in real time. Our experimental results indicated this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy diminished greatly, falling to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.
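
    The detection pipeline described above amounts to supervised classification over vectors of hardware event counts. The sketch below shows the shape of that problem with scikit-learn; the file names, event set, and classifier choice are assumptions for illustration, not the paper's exact setup.

```python
# Hedged sketch: label which TLS/SSL protocol produced each vector of
# hardware performance-counter readings collected during encryption runs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: one row per run, e.g. [instructions, branch_misses, cache_misses, ...]
# y: protocol label observed for each run, e.g. "TLS1.2" or "SSL3".
X = np.load("hw_event_counts.npy")                     # hypothetical captured data
y = np.load("protocol_labels.npy", allow_pickle=True)  # hypothetical labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```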

  10. Parallel random number generator for inexpensive configurable hardware cells

    Science.gov (United States)

    Ackermann, J.; Tangen, U.; Bödekker, B.; Breyer, J.; Stoll, E.; McCaskill, J. S.

    2001-11-01

    A new random number generator (RNG) adapted to parallel processors has been created. This RNG can be implemented with inexpensive hardware cells. The correlation between neighboring cells is suppressed with smart connections. With such connection structures, sequences of pseudo-random numbers are produced. Numerical tests including a self-avoiding random walk test and the simulation of the order parameter and energy of the 2D Ising model give no evidence for correlation in the pseudo-random sequences. Because the new random number generator has suppressed the correlation between neighboring cells which is usually observed in cellular automaton implementations, it is applicable for extended time simulations. It gives an immense speed-up factor if implemented directly in configurable hardware, and has recently been used for long time simulations of spatially resolved molecular evolution.
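
    The flavour of such a generator can be sketched in software as a one-dimensional cellular automaton whose cells all update in parallel, with an added long-range tap standing in for the "smart connections" that decorrelate neighbouring cells; the specific update rule and offsets below are assumptions for illustration only, not the published design.

```python
# Illustrative cellular-automaton PRNG: a rule-30-like local update plus a
# long-range tap to suppress neighbour correlation.
import numpy as np

def ca_rng(n_cells=64, steps=1000, seed=1):
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n_cells, dtype=np.uint8)
    out = []
    for _ in range(steps):
        left = np.roll(state, 1)
        right = np.roll(state, -1)
        far = np.roll(state, n_cells // 3)            # long-range connection (assumed)
        state = (left ^ (state | right) ^ far) & 1    # parallel update of all cells
        out.append(state.copy())
    return np.array(out)          # column i is the pseudo-random bitstream of cell i

bits = ca_rng()
print("per-cell mean bit value (0.5 expected):", bits.mean(axis=0)[:8])
```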

  11. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2013-01-01

    The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Optimization techniques featured throughout the text. It covers parallelism in depth with...

  12. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
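
    Since the bottleneck discussed above is the interpolation needed for sub-pixel block matching, the sketch below gives a plain CPU reference of bilinear sampling and its use inside an SAD cost at a fractional offset; in the paper this sampling is what the graphics hardware accelerates, and the function names here are illustrative.

```python
# Bilinear interpolation and an SAD block cost at a sub-pixel offset (CPU reference).
import numpy as np

def bilinear_sample(img, x, y):
    """Sample image intensity at a non-integer (x, y) position."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def sad_subpixel(ref_block, target, ox, oy):
    """SAD of a reference block against the target image at offset (ox, oy)."""
    h, w = ref_block.shape
    samples = np.array([[bilinear_sample(target, ox + j, oy + i)
                         for j in range(w)] for i in range(h)])
    return np.abs(samples - ref_block).sum()
```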

  13. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  14. Web tools to monitor and debug DAQ hardware

    International Nuclear Information System (INIS)

    Desavouret, Eugene; Nogiec, Jerzy M.

    2003-01-01

    A web-based toolkit to monitor and diagnose data acquisition hardware has been developed. It allows for remote testing, monitoring, and control of VxWorks data acquisition computers and associated instrumentation using the HTTP protocol and a web browser. This solution provides concurrent and platform independent access, supplementary to the standard single-user rlogin mechanism. The toolkit is based on a specialized web server, and allows remote access and execution of select system commands and tasks, execution of test procedures, and provides remote monitoring of computer system resources and connected hardware. Various DAQ components such as multiplexers, digital I/O boards, analog to digital converters, or current sources can be accessed and diagnosed remotely in a uniform and well-organized manner. Additionally, the toolkit application supports user authentication and is able to enforce specified access restrictions

  15. Development of Hardware and Software for Automated Ultrasonic Testing

    International Nuclear Information System (INIS)

    Choi, Sung Nam; Lee, Hee Jong; Yang, Seung Ok

    2012-01-01

    Nondestructive testing (NDT) for the construction and operation of NPPs plays an important role in confirming the integrity of the NPPs. In particular, automated ultrasonic testing (AUT) is one of the primary nondestructive examination methods for in-service inspection of the welding parts in major components in NPPs. AUT is a reliable nondestructive testing method because the AUT data are saved and can be reviewed by other examiners. Korea Hydro and Nuclear Power-Central Research Institute (KHNP-CRI) has developed an automated ultrasonic testing (AUT) system based on a high speed pulser-receiver. In combination with the designed software and hardware architecture, this new system permits user configurations for a wide range of user-specific applications through fully automated inspections using compact portable systems with up to eight channels. This paper gives an overview of the hardware (H/W) and software (S/W) for the AUT system to inspect welds in NPPs

  16. Hardware emulation of Memristor based Ternary Content Addressable Memory

    KAUST Repository

    Bahloul, Mohamed A.; Naous, Rawan; Masmoudi, M.

    2017-01-01

    MTCAM (Memristor Ternary Content Addressable Memory) is a special purpose storage medium in which data could be retrieved based on the stored content. Using Memristors as the main storage element provides the potential of achieving higher density and more efficient solutions than conventional methods. A key missing item in the validation of such approaches is the widespread availability of hardware emulation platforms that can provide reliable and repeatable performance statistics. In this paper, we present a hardware MTCAM emulation based on a 2-Transistors-2-Memristors (2T2M) bit-cell. It builds on a bipolar memristor model with storing and fetching capabilities based on the actual current-voltage behaviour. The proposed design offers a flexible verification environment with quick design revisions, high execution speeds and powerful debugging techniques. The proposed design is modeled using VHDL and prototyped on a Xilinx Virtex® FPGA.

  17. Hardware support for CSP on a Java chip multiprocessor

    DEFF Research Database (Denmark)

    Gruian, Flavius; Schoeberl, Martin

    2013-01-01

    Due to memory bandwidth limitations, chip multiprocessors (CMPs) adopting the convenient shared memory model for their main memory architecture scale poorly. On-chip core-to-core communication is a solution to this problem that can lead to further performance increases for a number of multithreaded applications. Programmatically, the Communicating Sequential Processes (CSP) paradigm provides a sound computational model for such an architecture with message based communication. In this paper we explore hardware support for CSP in the context of an embedded Java CMP. The hardware support for CSP consists of on-chip communication channels, implemented by a ring-based network-on-chip (NoC), to reduce the memory bandwidth pressure on the shared memory. The presented solution is scalable and also specific for our limited resources and real-time predictability requirements. CMP architectures of three to eight processors were...

  18. Advances in neuromorphic hardware exploiting emerging nanoscale devices

    CERN Document Server

    2017-01-01

    This book covers all major aspects of cutting-edge research in the field of neuromorphic hardware engineering involving emerging nanoscale devices. Special emphasis is given to leading works in hybrid low-power CMOS-Nanodevice design. The book offers readers a bidirectional (top-down and bottom-up) perspective on designing efficient bio-inspired hardware. At the nanodevice level, it focuses on various flavors of emerging resistive memory (RRAM) technology. At the algorithm level, it addresses optimized implementations of supervised and stochastic learning paradigms such as: spike-time-dependent plasticity (STDP), long-term potentiation (LTP), long-term depression (LTD), extreme learning machines (ELM) and early adoptions of restricted Boltzmann machines (RBM) to name a few. The contributions discuss system-level power/energy/parasitic trade-offs, and complex real-world applications. The book is suited for both advanced researchers and students interested in the field.

  19. A Hardware Framework for on-Chip FPGA Acceleration

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2016-01-01

    In this work, we present a new framework to dynamically load hardware accelerators on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA-based accelerator. Results show that significant speed-up can be obtained by the proposed acceleration framework on system-on-chips where reconfigurable fabric is placed next to the CPUs. The speed-up is due to both the intrinsic acceleration in the application-specific processors, and to the increased parallelism.

  20. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    Directory of Open Access Journals (Sweden)

    Andreas Stöckel

    2017-08-01

    Full Text Available Large-scale neuromorphic hardware platforms, specialized computer systems for energy efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP. Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hard- and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black-boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method allows to test the quality of the neuron model implementation, and to explain significant deviations from the expected reference output.
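
    Since the benchmark is built around the binary neural associative memory, a minimal software model of that memory (Willshaw-style clipped Hebbian learning with threshold recall) is sketched below; the sizes and class name are illustrative, and the spiking implementation run on the neuromorphic platforms is only approximated by this rate-free abstraction.

```python
# Minimal binary (Willshaw-style) associative memory: OR-learning of binary
# pattern pairs, threshold recall of the output pattern from a cue.
import numpy as np

class BinaryAssociativeMemory:
    def __init__(self, n_in, n_out):
        self.W = np.zeros((n_out, n_in), dtype=np.uint8)

    def store(self, x, y):
        # Clipped Hebbian learning: a synapse is set once pre and post are both active.
        self.W |= np.outer(y, x).astype(np.uint8)

    def recall(self, x):
        activation = self.W.astype(int) @ x.astype(int)
        # Willshaw threshold: fire the units that received input from every active cue bit.
        return (activation >= int(x.sum())).astype(np.uint8)

mem = BinaryAssociativeMemory(n_in=16, n_out=16)
x = np.zeros(16, dtype=np.uint8); x[[1, 4, 9]] = 1
y = np.zeros(16, dtype=np.uint8); y[[0, 7]] = 1
mem.store(x, y)
assert (mem.recall(x) == y).all()
```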

  1. Hardware accuracy counters for application precision and quality feedback

    Science.gov (United States)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani; Huang, Wei; Arora, Manish; Greathouse, Joseph L.

    2018-06-05

    Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.

  2. Design Tools for Reconfigurable Hardware in Orbit (RHinO)

    Science.gov (United States)

    French, Mathew; Graham, Paul; Wirthlin, Michael; Larchev, Gregory; Bellows, Peter; Schott, Brian

    2004-01-01

    The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable-gate-array (FPGA) technology. These tools leverage an established FPGA design environment and focus primarily on space effects mitigation and power optimization. The project is creating software to automatically test and evaluate the single-event-upset (SEU) sensitivities of an FPGA design and insert mitigation techniques. Extensions into the tool suite will also allow evolvable algorithm techniques to reconfigure around single-event-latchup (SEL) events. In the power domain, tools are being created for dynamic power visualization and optimization. Thus, this technology seeks to enable the use of Reconfigurable Hardware in Orbit, via an integrated design tool-suite aiming to reduce risk, cost, and design time of multimission reconfigurable space processors using SRAM-based FPGAs.

  3. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  4. BCI meeting 2005--workshop on technology: hardware and software.

    Science.gov (United States)

    Cincotti, Febo; Bianchi, Luigi; Birch, Gary; Guger, Christoph; Mellinger, Jürgen; Scherer, Reinhold; Schmidt, Robert N; Yáñez Suárez, Oscar; Schalk, Gerwin

    2006-06-01

    This paper describes the outcome of discussions held during the Third International BCI Meeting at a workshop to review and evaluate the current state of BCI-related hardware and software. Technical requirements and current technologies, standardization procedures and future trends are covered. The main conclusion was recognition of the need to focus technical requirements on the users' needs and the need for consistent standards in BCI research.

  5. Optimizing main-memory join on modern hardware

    OpenAIRE

    Boncz, Peter; Manegold, Stefan; Kersten, Martin

    2002-01-01

    In the past decade, the exponential growth in commodity CPU speed has far outpaced advances in memory latency. A second trend is that CPU performance advances are not only brought by increased clock rate, but also by increasing parallelism inside the CPU. Current database systems have not yet adapted to these trends, and show poor utilization of both CPU and memory resources on current hardware. In this article, we show how these resources can be optimized for large joins and tra...

  6. Parallel-Architecture Simulator Development Using Hardware Transactional Memory

    OpenAIRE

    Armejach Sanosa, Adrià

    2009-01-01

    To address the need for a simpler parallel programming model, Transactional Memory (TM) has been developed and promises good parallel performance with easy-to-write parallel code. Unlike lock-based approaches, with TM, programmers do not need to explicitly specify and manage the synchronization among threads. However, programmers simply mark code segments as transactions, and the TM system manages the concurrency control for them. TM can be implemented either in software (STM) or hardware (HT...

  7. S-1 project. Volume II. Hardware. 1979 annual report

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    This volume includes highlights of the design of the Mark IIA uniprocessor (SMI-2), and the SCALD II user's manual. SCALD (structured computer-aided logic design system) cuts the cost and time required to design logic by letting the logic designer express ideas as naturally as possible, and by eliminating as many errors as possible - through consistency checking, simulation, and timing verification - before the hardware is built. (GHT)

  8. Generation of embedded Hardware/Software from SystemC

    OpenAIRE

    Houzet , Dominique; Ouadjaout , Salim

    2006-01-01

    Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propo...

  9. Hardware realization of chaos based block cipher for image encryption

    KAUST Repository

    Barakat, Mohamed L.; Radwan, Ahmed G.; Salama, Khaled N.

    2011-01-01

    Unlike stream ciphers, block ciphers are very essential for parallel processing applications. In this paper, the first hardware realization of chaotic-based block cipher is proposed for image encryption applications. The proposed system is tested for known cryptanalysis attacks and for different block sizes. When implemented on Virtex-IV, system performance showed high throughput and utilized small area. Passing successfully in all tests, our system proved to be secure with all block sizes. © 2011 IEEE.

  10. Automatic Optimization of Hardware Accelerators for Image Processing

    OpenAIRE

    Reiche, Oliver; Häublein, Konrad; Reichenbach, Marc; Hannig, Frank; Teich, Jürgen; Fey, Dietmar

    2015-01-01

    In the domain of image processing, often real-time constraints are required. In particular, in safety-critical applications, such as X-ray computed tomography in medical imaging or advanced driver assistance systems in the automotive domain, timing is of utmost importance. A common approach to maintain real-time capabilities of compute-intensive applications is to offload those computations to dedicated accelerator hardware, such as Field Programmable Gate Arrays (FPGAs). Programming such arc...

  11. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    Energy Technology Data Exchange (ETDEWEB)

    Church, Jennifer A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kashgarian, Michaele [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wooddy, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Haslett, Bob [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Torretto, Phil [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-15

    Hardware expansion and detector calibrations were the focus of FY 16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems, including two Low Energy Photon Spectrometers (LEPS); 2) re-calibration and validation of 3 previously installed gamma-ray detectors; 3) integration of the new systems into the NCF IT infrastructure; and 4) QA/QC and maintenance of current detector systems.

  12. Introduction to hardware for nuclear medicine data systems

    International Nuclear Information System (INIS)

    Erickson, J.J.

    1976-01-01

    Hardware included in a computer-based data system for nuclear medicine imaging studies is discussed. The report is written for the newcomer to computer-based data collection and analysis. Emphasis is placed on the effect of the various portions of the system on the final application in the nuclear medicine clinic. While an attempt is made to familiarize the user with some of the terms he will encounter, no attempt is made to make him a computer expert. 1 figure, 2 tables

  13. IDEAS and App Development Internship in Hardware and Software Design

    Science.gov (United States)

    Alrayes, Rabab D.

    2016-01-01

    In this report, I will discuss the tasks and projects I have completed while working as an electrical engineering intern during the spring semester of 2016 at NASA Kennedy Space Center. In the field of software development, I completed tasks for the G-O Caching Mobile App and the Asbestos Management Information System (AMIS) Web App. The G-O Caching Mobile App was written in HTML, CSS, and JavaScript on the Cordova framework, while the AMIS Web App is written in HTML, CSS, JavaScript, and C# on the AngularJS framework. My goals and objectives on these two projects were to produce an app with an eye-catching and intuitive User Interface (UI), which will attract more employees to participate; to produce a fully-tested, fully functional app which supports workforce engagement and exploration; to produce a fully-tested, fully functional web app that assists technicians working in asbestos management. I also worked in hardware development on the Integrated Display and Environmental Awareness System (IDEAS) wearable technology project. My tasks on this project were focused in PCB design and camera integration. My goals and objectives for this project were to successfully integrate fully functioning custom hardware extenders on the wearable technology headset to minimize the size of hardware on the smart glasses headset for maximum user comfort; to successfully integrate fully functioning camera onto the headset. By the end of this semester, I was able to successfully develop four extender boards to minimize hardware on the headset, and assisted in integrating a fully-functioning camera into the system.

  14. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    International Nuclear Information System (INIS)

    Church, Jennifer A.; Kashgarian, Michaele; Wooddy, Todd; Haslett, Bob; Torretto, Phil

    2016-01-01

    Hardware expansion and detector calibrations were the focus of FY 16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems, including two Low Energy Photon Spectrometers (LEPS); 2) re-calibration and validation of 3 previously installed gamma-ray detectors; 3) integration of the new systems into the NCF IT infrastructure; and 4) QA/QC and maintenance of current detector systems.

  15. Treatment alternatives for non-fuel-bearing hardware

    International Nuclear Information System (INIS)

    Ross, W.A.; Clark, L.L.; Oma, K.H.

    1987-01-01

    This evaluation compared four alternatives for the treatment or processing of non-fuel bearing hardware (NFBH) to reduce its volume and prepare it for disposal. These treatment alternatives are: shredding; shredding and low pressure compaction; shredding and supercompaction; and melting. These alternatives are compared on the basis of system costs, waste form characteristics, and process considerations. The study recommends that melting and supercompaction alternatives be further considered and that additional testing be conducted for these two alternatives

  16. Peculiarities of hardware implementation of generalized cellular tetra automaton

    OpenAIRE

    Аноприенко, Александр Яковлевич; Федоров, Евгений Евгениевич; Иваница, Сергей Васильевич; Альрабаба, Хамза

    2015-01-01

    Cellular automata are widely used in many fields of knowledge for the study of a variety of complex real processes: computer engineering and computer science, cryptography, mathematics, physics, chemistry, ecology, biology, medicine, epidemiology, geology, architecture, sociology, theory of neural networks. Thus, cellular automata (CA) and tetra automata are gaining relevance taking into account the hardware and software solutions. There is also a marked trend towards an increase in the number of p...

  17. Hardware realization of chaos based block cipher for image encryption

    KAUST Repository

    Barakat, Mohamed L.

    2011-12-01

    Unlike stream ciphers, block ciphers are very essential for parallel processing applications. In this paper, the first hardware realization of chaotic-based block cipher is proposed for image encryption applications. The proposed system is tested for known cryptanalysis attacks and for different block sizes. When implemented on Virtex-IV, system performance showed high throughput and utilized small area. Passing successfully in all tests, our system proved to be secure with all block sizes. © 2011 IEEE.

  18. Automation Hardware & Software for the STELLA Robotic Telescope

    Science.gov (United States)

    Weber, M.; Granzer, Th.; Strassmeier, K. G.

    The STELLA telescope (a joint project of the AIP, Hamburger Sternwarte and the IAC) is to operate in fully robotic mode, with no human interaction necessary for regular operation. Thus, the hardware must be kept as simple as possible to avoid unnecessary failures, and the environmental conditions must be monitored accurately to protect the telescope in case of bad weather. All computers are standard PCs running Linux, and communication with specialized hardware is done via a RS232/RS485 bus system. The high level (java based) control software consists of independent modules to ease bug-tracking and to allow the system to be extended without changing existing modules. Any command cycle consists of three messages, the actual command sent from the central node to the operating device, an immediate acknowledge, and a final done message, both sent back from the receiving device to the central node. This reply-splitting allows a direct distinction between communication problems (no acknowledge message) and hardware problems (no or a delayed done message). To avoid bug-prone packing of all the sensor-analyzing software into a single package, each sensor-reading and interaction with other sensors is done within a self-contained thread. Weather-decision making is therefore totally decoupled from the core control software to avoid dead-locks in the core module.
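
    The three-message command cycle described above is what lets the control software tell communication failures apart from hardware failures. The sketch below mimics that split with illustrative names and timeouts; the actual STELLA software is Java-based and its interfaces are not shown here, so the device object is purely hypothetical.

```python
# Hedged sketch of a command / acknowledge / done cycle: a missing acknowledge
# indicates a communication problem, a missing or late "done" a hardware problem.
import queue

class CommandCycleError(Exception):
    pass

def run_command(device, command, ack_timeout=1.0, done_timeout=60.0):
    device.send(command)                      # hypothetical device interface
    try:
        reply = device.replies.get(timeout=ack_timeout)   # immediate acknowledge
    except queue.Empty:
        raise CommandCycleError("no acknowledge: communication problem")
    if reply != "ack":
        raise CommandCycleError(f"unexpected reply {reply!r}")
    try:
        reply = device.replies.get(timeout=done_timeout)  # final done message
    except queue.Empty:
        raise CommandCycleError("no done message: hardware problem")
    if reply != "done":
        raise CommandCycleError(f"command failed: {reply!r}")
```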

  19. Optimized design of embedded DSP system hardware supporting complex algorithms

    Science.gov (United States)

    Li, Yanhua; Wang, Xiangjun; Zhou, Xinling

    2003-09-01

    The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition, real-time image processing, etc. It consists of a floating-point DSP, 512 Kbytes data RAM, 1 Mbytes FLASH program memory, a CPLD for achieving flexible logic control of the input channel and a RS-485 transceiver for local network communication. Because of employing a high performance-price ratio DSP TMS320C6712 and a large FLASH in the design, this system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially in the input channel, and allows a convenient interface between different sensors and the DSP system. The transceiver circuit can transfer data between the DSP and a host computer. In the paper, some key technologies are also introduced which make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a perfect platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for the biometric identification system with high identification precision. The result reveals that this hardware is easy to interface with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.

  20. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    Science.gov (United States)

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
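
    The kernels that the FPGA engines parallelize are the RBM's conditional layer updates, in which every unit of one layer is conditionally independent given the other layer. A small NumPy sketch of one alternating Gibbs step is given below; the sizes, seeds, and function names are illustrative and are not taken from the paper's hardware description.

```python
# One alternating Gibbs step of a binary RBM: visible -> hidden -> visible.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_hid, b_vis, rng):
    # Conditional independence within a layer makes both updates fully parallel.
    p_h = sigmoid(v @ W + b_hid)
    h = (rng.random(p_h.shape) < p_h).astype(np.float32)
    p_v = sigmoid(h @ W.T + b_vis)
    v_new = (rng.random(p_v.shape) < p_v).astype(np.float32)
    return v_new, p_h

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, size=(256, 256)).astype(np.float32)   # 256 visible x 256 hidden
v0 = (rng.random((1, 256)) < 0.5).astype(np.float32)
v1, p_h0 = gibbs_step(v0, W, np.zeros(256, np.float32), np.zeros(256, np.float32), rng)
```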

  1. Hardware demonstration of high-speed networks for satellite applications.

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, Jonathon W.; Lee, David S.

    2008-09-01

    This report documents the implementation results of a hardware demonstration utilizing the Serial RapidIO (TM) and SpaceWire protocols that was funded by Sandia National Laboratories' (SNL) Laboratory Directed Research and Development (LDRD) office. This demonstration was one of the activities in the Modeling and Design of High-Speed Networks for Satellite Applications LDRD. This effort has demonstrated the transport of application layer packets across both RapidIO and SpaceWire networks to a common downlink destination using small topologies comprised of commercial-off-the-shelf and custom devices. The RapidFET and NEX-SRIO debug and verification tools were instrumental in the successful implementation of the RapidIO hardware demonstration. The SpaceWire hardware demonstration successfully demonstrated the transfer and routing of application data packets between multiple nodes and was also able to reprogram remote nodes using configuration bitfiles transmitted over the network, a key feature proposed in node-based architectures (NBAs). Although a much larger network (at least 18 to 27 nodes) would be required to fully verify the design for use in a real-world application, this demonstration has shown that both RapidIO and SpaceWire are capable of routing application packets across a network to a common downlink node, illustrating their potential use in real-world NBAs.

  2. Reconfigurable Signal Processing and Hardware Architecture for Broadband Wireless Communications

    Directory of Open Access Journals (Sweden)

    Liang Ying-Chang

    2005-01-01

    Full Text Available This paper proposes a broadband wireless transceiver which can be reconfigured to any type of cyclic-prefix (CP) based communication system, including orthogonal frequency-division multiplexing (OFDM), single-carrier cyclic-prefix (SCCP) systems, multicarrier (MC) code-division multiple access (MC-CDMA), MC direct-sequence CDMA (MC-DS-CDMA), CP-based CDMA (CP-CDMA), and CP-based direct-sequence CDMA (CP-DS-CDMA). A hardware platform is proposed and the reusable common blocks in such a transceiver are identified. The emphasis is on the equalizer design for mobile receivers. It is found that after the block despreading operation, MC-DS-CDMA and CP-DS-CDMA have the same equalization blocks as OFDM and SCCP systems, respectively; therefore, hardware and software sharing is possible for these systems. An attempt has also been made to map the functional reconfigurable transceiver onto the proposed hardware platform. The different functional entities which will be required to perform the reconfiguration and realize the transceiver are explained.

  3. Using Innovative Technologies for Manufacturing and Evaluating Rocket Engine Hardware

    Science.gov (United States)

    Betts, Erin M.; Hardin, Andy

    2011-01-01

    Many of the manufacturing and evaluation techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing and evaluating hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) and white light scanning are being adopted and evaluated for their use on J-2X, with hopes of employing both technologies on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powdered metal manufacturing process in order to produce complex part geometries. The white light technique is a non-invasive method that can be used to inspect for geometric feature alignment. Both the DMLS manufacturing method and the white light scanning technique have proven to be viable options for manufacturing and evaluating rocket engine hardware, and further development and use of these techniques is recommended.

  4. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second based on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.

  5. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise, with at least two parallel operational transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of improving noise through hardware averaging for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
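
    The 1/√N figure quoted above follows from averaging channels whose amplifier noise is uncorrelated. The short simulation below illustrates that scaling with synthetic data; it is a numerical illustration only, not the paper's measurements, and all values are invented.

```python
# Numerical illustration of why averaging N parallel amplifier channels reduces
# uncorrelated amplifier noise by about 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
n_samples, sigma_amp = 100_000, 1.0
signal = 0.2 * np.sin(np.linspace(0, 20 * np.pi, n_samples))   # small common input signal

for N in (1, 2, 4, 8):
    # N amplifiers see the same input but add independent noise; average their outputs.
    channels = signal + rng.normal(0.0, sigma_amp, size=(N, n_samples))
    averaged = channels.mean(axis=0)
    residual = np.std(averaged - signal)
    print(f"N={N}: residual noise {residual:.3f} "
          f"(1/sqrt(N) prediction {sigma_amp / np.sqrt(N):.3f})")
```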

  6. 2D neural hardware versus 3D biological ones

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper will present important limitations of hardware neural nets as opposed to biological neural nets (i.e. the real ones). The author starts by discussing neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural nets. Going further, the focus will be on hardware constraints. The author will present recent results for three different alternatives of implementing neural networks: digital, threshold gate, and analog, while the area and the delay will be related to neurons' fan-in and weights' precision. Based on all of these, it will be shown why hardware implementations cannot cope with their biological inspiration with respect to their power of computation: the mapping onto silicon lacking the third dimension of biological nets. This translates into reduced fan-in, and leads to reduced precision. The main conclusion is that one is faced with the following alternatives: (1) try to cope with the limitations imposed by silicon, by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow one to use the third dimension, e.g. using optical interconnections.

  7. Rupture hardware minimization in pressurized water reactor piping

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Ski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.F.; Quinones, D.F.; Server, W.L.

    1989-01-01

    For much of the high-energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but also improves the overall safety and integrity of the plant since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in. (152-mm) nominal pipe size that have passed a screening criteria designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in. (76-mm) diameter (outside containment) can qualify for pipe rupture hardware elimination

  8. Pipe rupture hardware minimization in pressurized water reactor system

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Szyslowski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.; Quinones, D.; Server, W.

    1987-01-01

    For much of the high energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but the overall safety and integrity of the plant are improved since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in (152 mm) nominal pipe size that have passed a screening criteria designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in (76 mm) diameter (outside containment) can qualify for pipe rupture hardware elimination

  9. Secure Hardware Performance Analysis in Virtualized Cloud Environment

    Directory of Open Access Journals (Sweden)

    Chee-Heng Tan

    2013-01-01

    Full Text Available The main obstacle in mass adoption of cloud computing for database operations is the data security issue. In this paper, it is shown that IT services, particularly hardware performance evaluation in virtual machines, can be accomplished effectively without IT personnel gaining access to real data for diagnostic and remediation purposes. The proposed mechanisms utilize the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are supervised via a control system, which is constructed using a combination of TPC-H queries, linear regression, and machine learning techniques. Second, linear programming techniques are employed to provide input to the algorithms that construct stress-testing scenarios in the virtual machine, using the combination of TPC-H queries. These stress-testing scenarios serve two purposes. First, they provide boundary resource threshold verification to the control system, so that periodic training of the synthetic data sets for performance evaluation is not constrained by hardware inadequacy, particularly when the resources in the virtual machine are scaled up or down, which changes the utilization threshold. Second, they provide a platform for response time verification on critical transactions, so that the expected Quality of Service (QoS) from these transactions is assured.
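
    As a rough illustration of the control system described above, the sketch below fits a linear trend to periodic benchmark response times and raises an alert when the fitted drift exceeds a threshold. The data, query choice and threshold are invented; the paper's actual combination of TPC-H queries, regression and machine learning is not reproduced here.

      # Hedged sketch: detect hardware performance drift from periodic TPC-H-style timings.
      import numpy as np

      rng = np.random.default_rng(1)
      runs = np.arange(20)                                              # 20 periodic runs
      resp_time = 4.0 + 0.03 * runs + rng.normal(0, 0.05, runs.size)    # seconds (synthetic)

      slope, intercept = np.polyfit(runs, resp_time, 1)                 # least-squares trend
      if slope > 0.01 * intercept:                                      # >1% of baseline per run (assumed rule)
          print(f"ALERT: response time drifting by {slope:.3f} s/run (baseline {intercept:.2f} s)")
      else:
          print("hardware performance consistent with baseline")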

  10. Hardware implementation of on-chip learning using reconfigurable FPGAs

    International Nuclear Information System (INIS)

    Kelash, H.M.; Sorour, H.S; Mahmoud, I.I.; Zaki, M; Haggag, S.S.

    2009-01-01

    The multilayer perceptron (MLP) is a neural network model that is widely applied to solving diverse problems. Supervised training is necessary before the neural network can be used. A highly popular learning algorithm called back-propagation is used to train this neural network model. Once trained, the MLP can be used to solve classification problems. An interesting way to increase the performance of the model is to use hardware implementations, since hardware can perform the arithmetic operations much faster than software. In this paper, a design and implementation of the sequential (stochastic) mode of the back-propagation algorithm with on-chip learning using field programmable gate arrays (FPGAs) is presented, and a pipelined adaptation of the on-line back-propagation (BP) algorithm is shown. The hardware implementation of the forward stage, backward stage and weight update of the back-propagation algorithm is also presented. The implementation is based on a SIMD parallel architecture of the forward propagation. The diagnosis of accidents of the multi-purpose research reactor of Egypt is used to test the proposed system.
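
    For readers unfamiliar with the sequential (stochastic) mode referred to above, the sketch below shows its software equivalent for a one-hidden-layer MLP: forward stage, backward stage and weight update performed one training pattern at a time. It is a floating-point illustration only and does not model the paper's pipelining, fixed-point arithmetic or FPGA mapping.

      # Minimal stochastic back-propagation for a 1-hidden-layer MLP (toy data, float only).
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 4))                         # toy inputs (assumed)
      y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy labels (assumed)

      W1, b1 = 0.5 * rng.standard_normal((4, 8)), np.zeros(8)
      W2, b2 = 0.5 * rng.standard_normal((8, 1)), np.zeros(1)
      sig = lambda z: 1.0 / (1.0 + np.exp(-z))
      lr = 0.1

      for epoch in range(20):
          for x, t in zip(X, y):                   # one pattern at a time = sequential mode
              h = sig(x @ W1 + b1)                 # forward stage
              o = sig(h @ W2 + b2)
              d_o = (o - t) * o * (1 - o)          # backward stage
              d_h = (d_o @ W2.T) * h * (1 - h)
              W2 -= lr * np.outer(h, d_o); b2 -= lr * d_o    # weight update stage
              W1 -= lr * np.outer(x, d_h); b1 -= lr * d_h

      pred = (sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
      print("training accuracy:", (pred == y).mean())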

  11. Estudio del comportamiento mecánico de un sistema recubierto mediante simulación computacional del ensayo de rayado//Mechanical Behavior study of a coated system by computer simulation of the scratch test

    Directory of Open Access Journals (Sweden)

    Eduardo A. Pérez Ruiz

    2015-05-01

    Full Text Available One way to evaluate a coated system is through the scratch test. The results obtained depend on variables such as the properties and geometry of the indenter, the loading rate, the displacement rate, and the properties of the materials of the system under evaluation, such as hardness, elastic modulus, microstructure, surface roughness and thickness, among others, as indicated in ASTM C1624-05. This work analyses, through computational simulation of the scratch test, the effect of the indenter geometry (conical and spherical), the scratch load (20 N and 50 N), the coating thickness (2.1 µm and 4.6 µm) and the friction coefficient (0.3 and 0.5) on the stress and plastic deformation behaviour at the surface of a coated system. The results suggest that the friction coefficient, as a test variable, is of high importance to the mechanical behaviour of the coated system. Key words: scratch test, computational simulation, coated system.

  12. ESTUDIO DEL EFECTO DE ISOTÓPO DE HIDRÓGENO EN LOS COMPLEJOS M–H•••H–F (M=Li, Na

    Directory of Open Access Journals (Sweden)

    Andrés Reyes

    2009-06-01

    Full Text Available The hydrogen isotope effect on the geometry, electronic charge distribution, relative stability and formation energy of linear complexes of the type M–X···Y–F and all their hydrogen isotopologues (M = Li, Na; X, Y = H, D, T) was studied theoretically. These studies were carried out with the APMO computational package at the electronic and nuclear Hartree-Fock level of theory. The results obtained agree with results reported by other authors using conventional electronic structure methods.

  13. O uso das novas tecnologias da informação e comunicação no ensino de física : uma abordagem através da modelagem computacional

    OpenAIRE

    Marcelo Esteves de Andrade

    2010-01-01

    In this project, we used information and communication technologies to develop a physics teaching strategy for secondary school addressing the topic of kinematics. The strategy relied on virtual lessons and tests as well as computational modelling activities with the Modellus program. All activities were carried out in the computer laboratory and mediated by the computer. This strategy was applied to two first-year secondary school classes at the Inst...

  14. Načrtovan porod na domu

    OpenAIRE

    Todorović, Tamara; Takač, Iztok

    2017-01-01

    Background: Home birth is as old as humanity itself, yet in the great majority of moderately and highly developed countries the prevailing view is that, because complications are unpredictable, maternity hospitals are the safest environment for giving birth. Nevertheless, there is a handful of countries in which home birth is integrated into the health-care system (e.g. the Netherlands, Great Britain, Canada). Home births are divided into unplanned and planned home births, and the latter can be further divided into births with...

  15. Modelo computacional para análise do desempenho de um processo semicontínuo de distribuição de gás Linz-Donawitz Computational model for the performance analysis of a semi-continuous process of Linz-Donawitz gas distribution

    Directory of Open Access Journals (Sweden)

    Cristina Weber Ambrósio

    2010-01-01

    Full Text Available The distribution of Linz-Donawitz gas (LDG), a co-product of the steelmaking process, to thermoelectric plants enables the recovered thermal energy to be converted into electric energy, providing economic and environmental benefits. This paper presents a computational model based on discrete event simulation to investigate the current performance of the semi-continuous LDG distribution process in a steelworks, as well as its future performance under production expansion. Simulation of different scenarios indicated that an increase in the demand for recovered gas is a possible and economically feasible alternative. A 66% reduction in gas loss is expected with the inclusion of a new consumer that raises gas demand by 30%. This alternative also improves the flexibility of the system in situations where the main consumer fails or needs a maintenance stop.
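
    A heavily simplified, time-stepped sketch of the gas balance investigated above is shown below: LDG batches from converter blows fill a gasholder, consumers draw continuously, and gas above the holder capacity is counted as loss. Every number is invented for illustration; the paper's actual discrete event model is not reproduced.

      # Simplified LDG recovery/consumption balance (all values invented for illustration).
      import random

      random.seed(0)
      holder, capacity = 0.0, 80_000.0          # gasholder level and capacity, Nm3 (assumed)
      demand = 450.0                            # continuous consumer draw, Nm3/min (assumed)
      recovered = lost = 0.0

      for minute in range(24 * 60):             # one simulated day
          if minute % 40 == 0:                  # a converter blow roughly every 40 min (assumed)
              batch = random.uniform(15_000, 25_000)
              recovered += batch
              holder += batch
              if holder > capacity:             # holder full -> excess is flared (lost)
                  lost += holder - capacity
                  holder = capacity
          holder = max(0.0, holder - demand)    # continuous consumption

      print(f"recovered {recovered:,.0f} Nm3, lost {lost:,.0f} Nm3 ({100 * lost / recovered:.1f}%)")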

  16. Computational program to design heat pumps by compression (ciclo 1.0); Programa computacional para diseno de bombas de calor por compresion (ciclo 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    De Alba Rosano, Mauricio [CIE, UNAM, Temixco, Morelos (Mexico)

    2000-07-01

    A new computational program has been developed for the design of single-stage vapour-compression heat pumps. The software, named CICLO 1.0, allows the design of water-water, water-air, air-water and air-air heat pumps for industrial and residential applications. CICLO 1.0 simulates three types of compressors: reciprocating, screw and scroll. It also has a refrigerant database created with the REFPROP software, which includes eleven refrigerants. The condenser and evaporator simulation includes determination of the global conductance (UA), and when one or both are of the shell-and-tube type, the software reports the (even) number of tube passes per shell. The software determines the best compressor and refrigerant combination using the COP as the selection parameter; to obtain this, it is necessary to know the inlet/outlet conditions of the fluid to be heated, the inlet conditions of the fluid that supplies heat, and the efficiency of the electric motor driving the compressor. The results provided by CICLO 1.0 include the operating conditions of the compression cycle, that is, the pressures and temperatures at the inlet/outlet of every heat pump component, as well as the refrigerant mass flow, COP, compressor power, volumetric and isentropic efficiencies, heat exchanger global conductances and other data. CICLO 1.0 has been run with data from heat pumps currently in operation, and the simulation results agree closely with the data reported from those facilities.
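
    As a rough indication of the kind of cycle bookkeeping such a program performs, the sketch below estimates the heating COP and compressor power from the evaporating and condensing temperatures using a Carnot bound scaled by an assumed cycle efficiency. This is only an approximation for illustration; CICLO 1.0 itself works from REFPROP refrigerant property data.

      # Rough heating-COP estimate from cycle temperatures (illustrative approximation only).
      def heating_cop(t_evap_c, t_cond_c, cycle_eff=0.55, motor_eff=0.92):
          t_evap, t_cond = t_evap_c + 273.15, t_cond_c + 273.15
          cop_carnot = t_cond / (t_cond - t_evap)        # ideal (Carnot) heating COP
          return cop_carnot * cycle_eff * motor_eff      # scaled by assumed efficiencies

      q_heating = 50.0                                   # kW delivered to the heated fluid (assumed)
      cop = heating_cop(t_evap_c=5.0, t_cond_c=55.0)
      print(f"estimated COP ~ {cop:.2f}, compressor electrical power ~ {q_heating / cop:.1f} kW")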

  17. Embodiment in cognitive linguistic: from experientialism to computational neuroscience Corporeidade em linguística cognitiva: do experiencialismo à neurociência computacional

    Directory of Open Access Journals (Sweden)

    Heloísa Pedroso de Moraes Feltes

    2010-01-01

    … cognitive. The theory, in these terms, models how human beings construct and process the knowledge structures that regulate their individual and collective lives. The Neural Theory of Language is then discussed, in which embodiment is reconstructed from a five-level paradigm in which structured connectionism carries the weight of computational description and explanation. In view of this, classical problems concerning computational implementations of models of human language functioning, in reductionist-physicalist approaches, are revisited. In conclusion, it is argued that embodiment, as a phenomenon under investigation, should have its formulation in terms of levels called into question and should instead be treated in terms of interfaces, so that: (a) epistemological commitments should be maintained synchronically at the interfaces; (b) given (a), the level of computations should be taken as one of the problems to be handled within structured connectionism; (c) the strategy of a paradigm organized in levels of reduction, and the results obtained from it, imply a kind of modularization of the research programme; and (d) these modules would be interdependent only with respect to the levels created to meet particular objectives. As a result, it is assumed that it is possible to do Cognitive Linguistics without adhering to structured connectionism or to neurocomputational simulation, provided one works with constructions of interfaces between domains of investigation rather than with a levels-based paradigm with reductionist traits.

  18. Computational package for the dynamic analysis of synchronous generators and their controls; Paquete computacional para el analisis de generadores sincronos y sus controles

    Energy Technology Data Exchange (ETDEWEB)

    Perez Guillen, Jesus Artemio

    1997-12-31

    This thesis presents a computational package for the dynamic analysis of synchronous generators and their controls in a machine - infinite bus system. The package is composed of a graphic interface for the Windows environment and several models for the different components of the generation system. The graphic interface was developed with object-oriented programming under Windows, using Borland C++, and generates a group of menus that form an interactive and versatile simulation environment. The package contains mathematical models of third, fourth, fifth and sixth order for synchronous generators with round and salient poles. Several mathematical models for the excitation systems DC1A, AC1A and ST1A, according to the IEEE classification, are included. Models for thermal and hydraulic turbines with speed governors are also included, as well as a mathematical model for the power system stabilizer and for magnetic saturation in synchronous generators. Numerical methods such as Euler, modified Euler and second- and fourth-order Runge-Kutta are used to solve the characteristic differential equations of the system under study. Algorithms for graphic output include the phasor diagram and the capability and saturation curves of the synchronous machine. The computer models are validated and a sensitivity analysis is carried out in order to assess the effect of the type of model for the synchronous machine, the excitation systems, the power system stabilizer, magnetic saturation in the synchronous generator and the different numerical integration methods. The computational package is useful in teaching and research on the dynamic response of synchronous machines and their controls.
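
    The numerical integration step described above can be illustrated with the classical swing equation of a machine against an infinite bus, integrated here with fourth-order Runge-Kutta. The parameters and the simulated disturbance are generic textbook-style values, not data from the package.

      # Classical swing equation, machine vs. infinite bus, integrated with RK4 (generic values).
      import math

      H, f0, D = 3.5, 60.0, 0.0           # inertia constant (s), frequency (Hz), damping (assumed)
      Pm, Pmax = 0.8, 1.8                 # mechanical power and max electrical power, p.u. (assumed)
      M = 2 * H / (2 * math.pi * f0)      # inertia coefficient in p.u.

      def deriv(delta, omega, pmax):
          return omega, (Pm - pmax * math.sin(delta) - D * omega) / M

      delta, omega, dt = math.asin(Pm / Pmax), 0.0, 0.001
      for step in range(int(1.0 / dt)):                      # simulate 1 s
          pmax = 0.6 * Pmax if step * dt < 0.1 else Pmax     # brief dip in transfer capability (assumed)
          k1 = deriv(delta, omega, pmax)
          k2 = deriv(delta + 0.5 * dt * k1[0], omega + 0.5 * dt * k1[1], pmax)
          k3 = deriv(delta + 0.5 * dt * k2[0], omega + 0.5 * dt * k2[1], pmax)
          k4 = deriv(delta + dt * k3[0], omega + dt * k3[1], pmax)
          delta += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
          omega += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6

      print(f"rotor angle after 1 s: {math.degrees(delta):.1f} deg")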

  19. Radioisotope thermoelectric generator licensed hardware package and certification tests

    International Nuclear Information System (INIS)

    Goldmann, L.H.; Averette, H.S.

    1994-01-01

    This paper presents the Licensed Hardware package and the Certification Test portions of the Radioisotope Thermoelectric Generator Transportation System. This package has been designed to meet those portions of the Code of Federal Regulations (10 CFR 71) relating to "Type B" shipments of radioactive materials. The detailed information for the anticipated license is presented in the safety analysis report for packaging, which is now in process and undergoing necessary reviews. As part of the licensing process, a full-size Certification Test Article unit, which has modifications slightly different than the Licensed Hardware or production shipping units, is used for testing. Dimensional checks of the Certification Test Article were made at the manufacturing facility. Leak testing and drop testing were done at the 300 Area of the US Department of Energy's Hanford Site near Richland, Washington. The hardware includes independent double containments to prevent the environmental spread of ²³⁸Pu, impact limiting devices to protect portions of the package from impacts, and thermal insulation to protect the seal areas from excess heat during accident conditions. The package also features electronic feed-throughs to monitor the Radioisotope Thermoelectric Generator's temperature inside the containment during the shipment cycle. This package is designed to safely dissipate the typical 4500 thermal watts produced in the largest Radioisotope Thermoelectric Generators. The package also contains provisions to ensure leak tightness when radioactive materials, such as a Radioisotope Thermoelectric Generator for the Cassini Mission, planned for 1997 by the National Aeronautics and Space Administration, are being prepared for shipment. These provisions include test ports used in conjunction with helium mass spectrometers to determine seal leakage rates of each containment during the assembly process.

  20. Multi-User Hardware Solutions to Combustion Science ISS Research

    Science.gov (United States)

    Otero, Angel M.

    2001-01-01

    In response to the budget environment and to expand on the International Space Station (ISS) Fluids and Combustion Facility (FCF) Combustion Integrated Rack (CIR), common hardware approach, the NASA Combustion Science Program shifted focus in 1999 from single investigator PI (Principal Investigator)-specific hardware to multi-user 'Minifacilities'. These mini-facilities would take the CIR common hardware philosophy to the next level. The approach that was developed re-arranged all the investigations in the program into sub-fields of research. Then common requirements within these subfields were used to develop a common system that would then be complemented by a few PI-specific components. The sub-fields of research selected were droplet combustion, solids and fire safety, and gaseous fuels. From these research areas three mini-facilities have sprung: the Multi-user Droplet Combustion Apparatus (MDCA) for droplet research, Flow Enclosure for Novel Investigations in Combustion of Solids (FEANICS) for solids and fire safety, and the Multi-user Gaseous Fuels Apparatus (MGFA) for gaseous fuels. These mini-facilities will develop common Chamber Insert Assemblies (CIA) and diagnostics for the respective investigators complementing the capability provided by CIR. Presently there are four investigators for MDCA, six for FEANICS, and four for MGFA. The goal of these multi-user facilities is to drive the cost per PI down after the initial development investment is made. Each of these mini-facilities will become a fixture of future Combustion Science NASA Research Announcements (NRAs), enabling investigators to propose against an existing capability. Additionally, an investigation is provided the opportunity to enhance the existing capability to bridge the gap between the capability and their specific science requirements. This multi-user development approach will enable the Combustion Science Program to drive cost per investigation down while drastically reducing the time

  1. Using Innovative Technologies for Manufacturing Rocket Engine Hardware

    Science.gov (United States)

    Betts, E. M.; Eddleman, D. E.; Reynolds, D. C.; Hardin, N. A.

    2011-01-01

    Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As the United States enters into the next space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, rapid manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on NASA's Space Launch System (SLS) upper stage engine, J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator (GG) discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using a workhorse gas generator (WHGG) test fixture at MSFC's East Test Area, the duct was subjected to extreme J-2X hot gas environments during 7 tests for a total of 537 seconds of hot-fire time. The duct underwent extensive post-test evaluation and showed no signs of degradation. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.

  2. Reconfigurable ATCA hardware for plasma control and data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, B.B., E-mail: bernardo@ipfn.ist.utl.p [Associacao EURATOM/IST Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal); Batista, A.J.N.; Correia, M.; Neto, A.; Fernandes, H.; Goncalves, B.; Sousa, J. [Associacao EURATOM/IST Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)

    2010-07-15

    The IST/EURATOM Association is developing a new generation of control and data acquisition hardware for fusion experiments based on the ATCA architecture. This emerging open standard offers a significantly higher data throughput over a reliable High Availability (HA) mechanical and electrical platform. One of these ATCA boards has 32 galvanically isolated ADC channels (18 bit) each mounted on a swappable plug-in card, 8 DAC channels (16 bit), 8 digital I/O channels and embeds a high performance XILINX Virtex 4 family field programmable gate array (FPGA). The specific modular and configurable hardware design enables adaptable utilization of the board in dissimilar applications. The first configuration, specially developed for tokamak plasma Vertical Stabilization, consists of a Multiple-Input-Multiple-Output (MIMO) controller that is capable of feedback loops faster than 1 ms using a multitude of input signals fed from different boards communicating through the Aurora™ point-to-point protocol. Massive parallel algorithms can be implemented on the FPGA either with programmed digital logic, using an HDL hardware description language, or within its internal silicon PowerPC™ running a full-fledged real-time operating system. The second board configuration is dedicated for transient recording of the entire 32 channels at 2 MSamples/s to the on-board 512 MB DDR2 memory. Signal data retrieval is accelerated by a DMA-driven PCI Express™ x1 interface to the ATCA system controller, providing an overall throughput in excess of 100 MB/s. This paper illustrates these developments and discusses possible configurations for foreseen applications.

  3. Using Innovative Techniques for Manufacturing Rocket Engine Hardware

    Science.gov (United States)

    Betts, Erin M.; Reynolds, David C.; Eddleman, David E.; Hardin, Andy

    2011-01-01

    Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using the Workhorse Gas Generator (WHGG) test setup at MSFC's East Test Area test stand 116, the duct was subjected to extreme J-2X gas generator environments and endured a total of 538 seconds of hot-fire time. The duct survived the testing and was inspected after the test. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.

  4. Proof-Carrying Hardware: Concept and Prototype Tool Flow for Online Verification

    OpenAIRE

    Drzevitzky, Stephanie; Kastens, Uwe; Platzner, Marco

    2010-01-01

    Dynamically reconfigurable hardware combines hardware performance with software-like flexibility and finds increasing use in networked systems. The capability to load hardware modules at runtime provides these systems with an unparalleled degree of adaptivity but at the same time poses new challenges for security and safety. In this paper, we elaborate on the presentation of proof carrying hardware (PCH) as a novel approach to reconfigurable system security. PCH takes ...

  5. Combining high productivity with high performance on commodity hardware

    DEFF Research Database (Denmark)

    Skovhede, Kenneth

    -like compiler for translating CIL bytecode on the CELL-BE. I then introduce a bytecode converter that transforms simple loops in Java bytecode to GPGPU capable code. I then introduce the numeric library for the Common Intermediate Language, NumCIL. I then utilize the vector programming model from NumCIL and map this to the Bohrium framework. The result is a complete system that gives the user a choice of high-level languages with no explicit parallelism, yet seamlessly performs efficient execution on a number of hardware setups.

  6. Hardware support for software controlled fast reconfiguration of performance counters

    Science.gov (United States)

    Salapura, Valentina; Wisniewski, Robert W.

    2013-06-18

    Hardware support for software controlled reconfiguration of performance counters may include a plurality of performance counters collecting one or more counts of one or more selected activities. A storage element stores data value representing a time interval, and a timer element reads the data value and detects expiration of the time interval based on the data value and generates a signal. A plurality of configuration registers stores a set of performance counter configurations. A state machine receives the signal and selects a configuration register from the plurality of configuration registers for reconfiguring the one or more performance counters.
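
    A hedged software model of the flow described in this abstract is sketched below: a timer interval stored in a register drives a state machine that, on expiration, selects the next configuration for a bank of counters. The event names and interval are invented; the mechanism described above is implemented in hardware, not software.

      # Software model of timer-driven performance-counter reconfiguration (illustration only).
      class CounterBank:
          def __init__(self, configs, interval_ticks):
              self.configs = configs              # the "configuration registers"
              self.interval = interval_ticks      # data value held in the storage element
              self.state = 0                      # state machine: index of active configuration
              self.ticks = 0
              self.counts = {}

          def tick(self, events):
              self.ticks += 1
              active = self.configs[self.state]
              for ev in events:
                  if ev in active:                # only the selected activities are counted
                      self.counts[ev] = self.counts.get(ev, 0) + 1
              if self.ticks >= self.interval:     # timer element signals interval expiration
                  self.ticks = 0
                  self.state = (self.state + 1) % len(self.configs)   # reconfigure counters

      bank = CounterBank(configs=[{"cache_miss", "branch_miss"}, {"tlb_miss", "stall"}],
                         interval_ticks=1000)
      bank.tick({"cache_miss", "stall"})          # only "cache_miss" is counted in state 0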

  7. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Directory of Open Access Journals (Sweden)

    Carvalho Paulo F.

    2018-01-01

    Full Text Available Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main elements inside the plasma, which is kept at high temperatures (millions of Celsius degrees). Due to high temperatures and magnetic confinement, plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout the fusion experiment process. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunication Computer Architecture (AdvancedTCA®) standard introduced by the Peripheral Component Interconnect Industrial Manufacturers Group (PICMG®), to meet the demands of telecommunications that require large amounts of data (TB) transportation at high transfer rates (Gb/s), to ensure high availability including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store for later analysis, make critical decisions in real time and provide status reports either from the experiment itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, and notify the system operator of events that have occurred, decisions taken to acknowledge them, and changes implemented. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency

  8. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Science.gov (United States)

    Carvalho, Paulo F.; Santos, Bruno; Correia, Miguel; Combo, Álvaro M.; Rodrigues, António P.; Pereira, Rita C.; Fernandes, Ana; Cruz, Nuno; Sousa, Jorge; Carvalho, Bernardo B.; Batista, António J. N.; Correia, Carlos M. B. A.; Gonçalves, Bruno

    2018-01-01

    Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main elements inside the plasma, which is kept at high temperatures (millions of Celsius degrees). Due to high temperatures and magnetic confinement, plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout the fusion experiment process. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunication Computer Architecture (AdvancedTCA®) standard introduced by the Peripheral Component Interconnect Industrial Manufacturers Group (PICMG®), to meet the demands of telecommunications that require large amounts of data (TB) transportation at high transfer rates (Gb/s), to ensure high availability including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store for later analysis, make critical decisions in real time and provide status reports either from the experiment itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, and notify the system operator of events that have occurred, decisions taken to acknowledge them, and changes implemented. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency scenarios.

  9. Graph based communication analysis for hardware/software codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1999-01-01

    In this paper we present a coarse grain CDFG (Control/Data Flow Graph) model suitable for hardware/software partitioning of single processes and demonstrate how it is necessary to perform various transformations on the graph structure before partitioning in order to achieve a structure that allows for accurate estimation of communication overhead between nodes mapped to different processors. In particular, we demonstrate how various transformations of control structures can lead to a more accurate communication analysis and more efficient implementations. The purpose of the transformations is to obtain...
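
    A hedged sketch of the communication estimate mentioned above: after the CDFG nodes are mapped to hardware or software, only edges crossing the partition contribute transfer cost. The graph, mapping and throughput figure are invented; the paper's actual model and transformations are not reproduced.

      # Cross-partition communication estimate on a toy coarse-grain CDFG (invented values).
      edges = [                         # (source node, destination node, bytes per invocation)
          ("read", "filter", 4096),
          ("filter", "fft", 4096),
          ("fft", "detect", 2048),
          ("detect", "log", 64),
      ]
      mapping = {"read": "sw", "filter": "hw", "fft": "hw", "detect": "sw", "log": "sw"}
      bytes_per_cycle = 4               # assumed bus throughput

      comm_bytes = sum(nbytes for src, dst, nbytes in edges if mapping[src] != mapping[dst])
      print(f"cross-partition traffic: {comm_bytes} B "
            f"(~{comm_bytes / bytes_per_cycle:.0f} bus cycles per invocation)")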

  10. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  11. Integración continua para open hardware

    OpenAIRE

    Peral Chico, David del

    2012-01-01

    In recent years computing, and more specifically hardware, has been evolving towards embedded systems. The emergence of new markets such as microcomputers and smart TVs, and the mass adoption of existing ones such as smartphones and tablets, amplifies this phenomenon. This is due to the advantages of such systems in terms of cost at scale, optimization and performance, energy consumption and size, among others. Embedded systems are growing in so...

  12. Hardware and software constructs for a vibration analysis network

    International Nuclear Information System (INIS)

    Cook, S.A.; Crowe, R.D.; Toffer, H.

    1985-01-01

    Vibration level monitoring and analysis has been initiated at N Reactor, the dual purpose reactor operated at Hanford, Washington by UNC Nuclear Industries (UNC) for the Department of Energy (DOE). The machinery to be monitored was located in several buildings scattered over the plant site, necessitating an approach using satellite stations to collect, monitor and temporarily store data. The satellite stations are, in turn, linked to a centralized processing computer for further analysis. The advantages of a networked data analysis system are discussed in this paper along with the hardware and software required to implement such a system

  13. Crear dispositivo para personas sordas (plataforma hardware Arduino)

    OpenAIRE

    Codina Barberà, Marc

    2013-01-01

    The work described in this report aims to create a prototype alert system for deaf people. The system facilitates the interaction between a person with hearing impairment and the audible signals that can occur in a home. The prototype was developed on the Arduino hardware platform, a smartphone running the Android operating system, and the Bluetooth and ZigBee wireless communication technologies.

  14. Fingerprint Sensors: Liveness Detection Issue and Hardware based Solutions

    Directory of Open Access Journals (Sweden)

    Shahzad Memon

    2012-01-01

    Full Text Available Securing an automated and unsupervised fingerprint recognition system is one of the most critical and challenging tasks in government and commercial applications. In these systems, the detection of liveness of a finger placed on a fingerprint sensor is a major issue that needs to be addressed in order to ensure the credibility of the system. The main focus of this paper is to review the existing fingerprint sensing technologies in terms of liveness detection and discusses hardware based ‘liveness detection’ techniques reported in the literature for automatic fingerprint biometrics.

  15. Benchmarking and Hardware-In-The-Loop Operation of a ...

    Science.gov (United States)

    Engine Performance evaluation in support of LD MTE. EPA used elements of its ALPHA model to apply hardware-in-the-loop (HIL) controls to the SKYACTIV engine test setup to better understand how the engine would operate in a chassis test after combined with future leading edge technologies, advanced high-efficiency transmission, reduced mass, and reduced roadload. Predict future vehicle performance with Atkinson engine. As part of its technology assessment for the upcoming midterm evaluation of the 2017-2025 LD vehicle GHG emissions regulation, EPA has been benchmarking engines and transmissions to generate inputs for use in its ALPHA model

  16. Technology Corner: Dating of Electronic Hardware for Prior Art Investigations

    Directory of Open Access Journals (Sweden)

    Sellam Ismail

    2012-03-01

    Full Text Available In many legal matters, specifically patent litigation, determining and authenticating the date of computer hardware or other electronic products or components is often key to establishing the item as legitimate evidence of prior art. Such evidence can be used to buttress claims of technologies available or of events transpiring by or at a particular date. In 1945, the Electronics Industry Association published a standard, EIA 476-A, standardized in the reference Source and Date Code Marking (Electronic Industries Association, 1988).

  17. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computer systems. Books on software engineering typically portray software as if it exists in a vacuum with no relationship to the wider system. This is wrong because a system is more than software. It is comprised of people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  18. Surface moisture measurement system hardware acceptance test procedure

    International Nuclear Information System (INIS)

    Ritter, G.A.

    1996-01-01

    The purpose of this acceptance test procedure is to verify that the mechanical and electrical features of the Surface Moisture Measurement System are operating as designed and that the unit is ready for field service. This procedure will be used in conjunction with a software acceptance test procedure, which addresses testing of software and electrical features not addressed in this document. Hardware testing will be performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. These systems were developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks

  19. Deployment Testing of the De-Orbit Sail Flight Hardware

    OpenAIRE

    Hillebrandt, Martin; Meyer, Sebastian; Zander, Martin; Hühne, Christian

    2015-01-01

    The paper describes the results of the deployment testing of the De-Orbit Sail flight hardware, a drag sail for de-orbiting applications, performed by DLR. It addresses in particular the deployment tests of the fullscale sail subsystem and deployment force tests performed on the boom deployment module. For the fullscale sail testing a gravity compensation device is used which is described in detail. It allows observations of the in-plane interaction of the booms with the sail membrane and the...

  20. Hardware Prototyping of Neural Network based Fetal Electrocardiogram Extraction

    Science.gov (United States)

    Hasan, M. A.; Reaz, M. B. I.

    2012-01-01

    The aim of this paper is to model the algorithm for Fetal ECG (FECG) extraction from composite abdominal ECG (AECG) using VHDL (Very High Speed Integrated Circuit Hardware Description Language) for FPGA (Field Programmable Gate Array) implementation. Artificial Neural Network that provides efficient and effective ways of separating FECG signal from composite AECG signal has been designed. The proposed method gives an accuracy of 93.7% for R-peak detection in FHR monitoring. The designed VHDL model is synthesized and fitted into Altera's Stratix II EP2S15F484C3 using the Quartus II version 8.0 Web Edition for FPGA implementation.
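
    The paper trains an artificial neural network for the separation step before describing it in VHDL; as a rough, classical stand-in for that step, the sketch below uses LMS adaptive cancellation to remove the maternal component estimated from a chest reference and keeps the residual as the fetal estimate. The signals are synthetic and the method shown is not the paper's network.

      # Classical LMS stand-in for FECG extraction (synthetic signals, not the paper's ANN).
      import numpy as np

      rng = np.random.default_rng(0)
      n, fs = 4000, 250
      maternal = np.sin(2 * np.pi * 1.2 * np.arange(n) / fs)       # ~72 bpm chest reference (synthetic)
      fetal = 0.2 * np.sin(2 * np.pi * 2.3 * np.arange(n) / fs)    # ~138 bpm component (synthetic)
      abdominal = 0.8 * maternal + fetal + 0.02 * rng.standard_normal(n)

      taps, mu = 8, 0.01
      w = np.zeros(taps)
      fetal_est = np.zeros(n)
      for i in range(taps, n):
          x = maternal[i - taps:i]          # reference window
          e = abdominal[i] - w @ x          # residual = abdominal minus maternal estimate
          w += mu * e * x                   # LMS weight update
          fetal_est[i] = e

      corr = np.corrcoef(fetal_est[taps:], fetal[taps:])[0, 1]
      print(f"correlation of residual with the fetal component: {corr:.2f}")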

  1. Online Infrastructure in Supply Chain for Hardware Shops

    OpenAIRE

    Sørensen, Karl

    2014-01-01

    Part 4: Private Services; International audience; This article describes how the Scandinavian network communication system DATEX was used to build an online infrastructure in a retail chain of privately owned hardware shops and Do-It-Yourself (DIY) centers. The solution gave the staff in the shops the possibility to use EDP as early as in 1983. The Internet did not exist at the time. EDP was not part of the daily work in the shop and was for most employees something unknown that took place at...

  2. System for processing an encrypted instruction stream in hardware

    Science.gov (United States)

    Griswold, Richard L.; Nickless, William K.; Conrad, Ryan C.

    2016-04-12

    A system and method of processing an encrypted instruction stream in hardware is disclosed. Main memory stores the encrypted instruction stream and unencrypted data. A central processing unit (CPU) is operatively coupled to the main memory. A decryptor is operatively coupled to the main memory and located within the CPU. The decryptor decrypts the encrypted instruction stream upon receipt of an instruction fetch signal from a CPU core. Unencrypted data is passed through to the CPU core without decryption upon receipt of a data fetch signal.
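
    A toy software model of the fetch path described in this abstract is sketched below: the decryptor sits between memory and the CPU core and is engaged only when an instruction fetch signal is present, while data fetches pass through unchanged. The XOR keystream is purely a stand-in for whatever cipher real hardware would use.

      # Toy model of decrypt-on-instruction-fetch (XOR keystream is a placeholder cipher).
      class Memory:
          def __init__(self, enc_instructions, data):
              self.instr = enc_instructions         # encrypted instruction stream
              self.data = data                      # unencrypted data

      class CPU:
          def __init__(self, memory, key):
              self.mem, self.key = memory, key

          def fetch(self, addr, is_instruction):
              if is_instruction:                    # instruction fetch signal -> decryptor in path
                  word = self.mem.instr[addr]
                  return word ^ self.key[addr % len(self.key)]
              return self.mem.data[addr]            # data fetch bypasses the decryptor

      key = [0x5A, 0xC3, 0x7E, 0x11]
      plain = [0x90, 0xB8, 0x01, 0x00]              # toy "instructions"
      enc = [p ^ key[i % len(key)] for i, p in enumerate(plain)]
      cpu = CPU(Memory(enc, data=[42, 7]), key)
      assert [cpu.fetch(i, True) for i in range(4)] == plain
      assert cpu.fetch(0, False) == 42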

  3. Hardware interface unit for control of shuttle RMS vibrations

    Science.gov (United States)

    Lindsay, Thomas S.; Hansen, Joseph M.; Manouchehri, Davoud; Forouhar, Kamran

    1994-01-01

    Vibration of the Shuttle Remote Manipulator System (RMS) increases the time for task completion and reduces task safety for manipulator-assisted operations. If the dynamics of the manipulator and the payload can be physically isolated, performance should improve. Rockwell has developed a self contained hardware unit which interfaces between a manipulator arm and payload. The End Point Control Unit (EPCU) is built and is being tested at Rockwell and at the Langley/Marshall Coupled, Multibody Spacecraft Control Research Facility in NASA's Marshall Space Flight Center in Huntsville, Alabama.

  4. Study of hardware implementations of fast tracking algorithms

    International Nuclear Information System (INIS)

    Song, Z.; Huang, G.; Wang, D.; Lentdecker, G. De; Dong, J.; Léonard, A.; Robert, F.; Yang, Y.

    2017-01-01

    Real-time track reconstruction at high event rates is a major challenge for future experiments in high energy physics. To perform pattern-recognition and track fitting, artificial retina or Hough transformation methods have been introduced in the field which have to be implemented in FPGA firmware. In this note we report on a case study of a possible FPGA hardware implementation approach of the retina algorithm based on a Floating-Point core. Detailed measurements with this algorithm are investigated. Retina performance and capabilities of the FPGA are discussed along with perspectives for further optimization and applications.
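
    As a rough illustration of the pattern-recognition step discussed above, the sketch below runs a straight-line Hough transform over a handful of hit coordinates: each hit votes for all (theta, rho) lines through it, and peaks in the accumulator are track candidates. The binning and the toy hits are invented, and no FPGA-specific detail is modelled.

      # Straight-line Hough transform over toy hits (invented data, software illustration only).
      import numpy as np

      hits = np.array([[1.0, 0.9], [2.0, 2.1], [3.0, 2.9], [4.0, 4.1],   # one rough track
                       [1.5, 3.7], [3.2, 0.4]])                          # plus noise hits

      thetas = np.linspace(0, np.pi, 180)
      rho_max, n_rho = 6.0, 120
      acc = np.zeros((thetas.size, n_rho), dtype=int)

      for x, y in hits:
          rho = x * np.cos(thetas) + y * np.sin(thetas)                  # line parameterisation
          idx = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int), 0, n_rho - 1)
          acc[np.arange(thetas.size), idx] += 1                          # one vote per theta bin

      i_t, i_r = np.unravel_index(acc.argmax(), acc.shape)
      print(f"best candidate: {acc.max()} hits near theta = {np.degrees(thetas[i_t]):.0f} deg")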

  5. SYNTHESIS OF INFORMATION SYSTEM FOR SMART HOUSE HARDWARE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Vikentyeva Olga Leonidovna

    2017-10-01

    Full Text Available Subject: smart house maintenance requires taking into account a number of factors: resource-saving, reduction of operational expenditures, safety enhancement, providing comfortable working and leisure conditions. Automation of the corresponding engineering systems of illumination, climate control, security as well as communication systems and networks via utilization of contemporary technologies (e.g., IoT - Internet of Things poses a significant challenge related to storage and processing of the overwhelmingly massive volume of data whose utilization extent is extremely low nowadays. Since a building’s lifespan is large enough and exceeds the lifespan of codes and standards that take into account the requirements of safety, comfort, energy saving, etc., it is necessary to consider management aspects in the context of rational use of large data at the stage of information modeling. Research objectives: increase the efficiency of managing the subsystems of smart buildings hardware on the basis of a web-based information system that has a flexible multi-level architecture with several control loops and an adaptation model. Materials and methods: since a smart house belongs to man-machine systems, the cybernetic approach is considered as the basic method for design and research of information management system. Instrumental research methods are represented by set-theoretical modelling, automata theory and architectural principles of organization of information management systems. Results: a flexible architecture of information system for management of smart house hardware subsystems has been synthesized. This architecture encompasses several levels: client level, application level and data level as well as three layers: presentation level, actuating device layer and analytics layer. The problem of growing volumes of information processed by realtime message controller is attended by employment of sensors and actuating mechanisms with configurable

  6. Acquisition of reliable vacuum hardware for large accelerator systems

    International Nuclear Information System (INIS)

    Welch, K.M.

    1996-01-01

    Credible and effective communications prove to be the major challenge in the acquisition of reliable vacuum hardware. Technical competence is necessary but not sufficient. We must effectively communicate with management, sponsoring agencies, project organizations, service groups, staff and with vendors. Most of Deming's 14 quality assurance tenets relate to creating an enlightened environment of good communications. All projects progress along six distinct, closely coupled, dynamic phases; all six phases are in a state of perpetual change. These phases and their elements are discussed, with emphasis given to the acquisition phase and its related vocabulary. (author)

  7. Hardware Architectures for the Correspondence Problem in Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Thomas Eide

    Method"has been developed in conjunction with the work on this thesis and has not previously been described. Also, during this project a combined image acquisition and compression board has been developed for a NASA sounding rocket. This circuit, a so-called Lightning Imager, is also described. Finally...... an optimized hardware architecture has been proposed in relation to the three matching methods mentioned above. Because of the cost required to physically implement and test the developed architecture, it has been decided todocument the performance of the architecture through theoretical proofs only....

  8. A hardware overview of the RHIC LLRF platform

    International Nuclear Information System (INIS)

    Hayes, T.; Smith, K.S.

    2011-01-01

    The RHIC Low Level RF (LLRF) platform is a flexible, modular system designed around a carrier board with six XMC daughter sites. The carrier board features a Xilinx FPGA with an embedded, hard core Power PC that is remotely reconfigurable. It serves as a front end computer (FEC) that interfaces with the RHIC control system. The carrier provides high speed serial data paths to each daughter site and between daughter sites as well as four generic external fiber optic links. It also distributes low noise clocks and serial data links to all daughter sites and monitors temperature, voltage and current. To date, two XMC cards have been designed: a four channel high speed ADC and a four channel high speed DAC. The new LLRF hardware was used to replace the old RHIC LLRF system for the 2009 run. For the 2010 run, the RHIC RF system operation was dramatically changed with the introduction of accelerating both beams in a new, common cavity instead of each ring having independent cavities. The flexibility of the new system was beneficial in allowing the low level system to be adapted to support this new configuration. This hardware was also used in 2009 to provide LLRF for the newly commissioned Electron Beam Ion Source.

  9. Health Maintenance System (HMS) Hardware Research, Design, and Collaboration

    Science.gov (United States)

    Gonzalez, Stefanie M.

    2010-01-01

    The Space Life Sciences division (SLSD) concentrates on optimizing a crew member's health. Developments are translated into innovative engineering solutions, research growth, and community awareness. This internship incorporates all those areas by targeting various projects. The main project focuses on integrating clinical and biomedical engineering principles to design, develop, and test new medical kits scheduled for launch in the Spring of 2011. Additionally, items will be tagged with Radio Frequency Interference Devices (RFID) to keep track of the inventory. The tags will then be tested to optimize Radio Frequency feed and feed placement. Research growth will occur with ground based experiments designed to measure calcium encrusted deposits in the International Space Station (ISS). The tests will assess the urine calcium levels with Portable Clinical Blood Analyzer (PCBA) technology. If effective then a model for urine calcium will be developed and expanded to microgravity environments. To support collaboration amongst the subdivisions of SLSD the architecture of the Crew Healthcare Systems (CHeCS) SharePoint site has been redesigned for maximum efficiency. Community collaboration has also been established with the University of Southern California, Dept. of Aeronautical Engineering and the Food and Drug Administration (FDA). Hardware disbursements will transpire within these communities to support planetary surface exploration and to serve as an educational tool demonstrating how ground based medicine influenced the technological development of space hardware.

  10. A Hardware Fast Tracker for the ATLAS trigger

    International Nuclear Information System (INIS)

    Asbah, N.

    2016-01-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing at 40 MHz to about 1 kHz, at the design luminosity of 10³⁴ cm⁻²·s⁻¹. After a successful period of data taking from 2010 to early 2013, the LHC already started with much higher instantaneous luminosity. This will increase the load on High Level Trigger system, the second stage of the selection based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, at every Level-1 accepted event (100 kHz) and within 100 μs, full tracking information for tracks with momentum as low as 1 GeV. Providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in precise detection of the primary and secondary vertices to ensure robust selections and improve the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.

  11. Hardware for dynamic quantum computing experiments: Part I

    Science.gov (United States)

    Johnson, Blake; Ryan, Colm; Riste, Diego; Donovan, Brian; Ohki, Thomas

    Static, pre-defined control sequences routinely achieve high-fidelity operation on superconducting quantum processors. Efforts toward dynamic experiments depending on real-time information have mostly proceeded through hardware duplication and triggers, requiring a combinatorial explosion in the number of channels. We provide a hardware efficient solution to dynamic control with a complete platform of specialized FPGA-based control and readout electronics; these components enable arbitrary control flow, low-latency feedback and/or feedforward, and scale far beyond single-qubit control and measurement. We will introduce the BBN Arbitrary Pulse Sequencer 2 (APS2) control system and the X6 QDSP readout platform. The BBN APS2 features: a sequencer built around implementing short quantum gates, a sequence cache to allow long sequences with branching structures, subroutines for code re-use, and a trigger distribution module to capture and distribute steering information. The X6 QDSP features a single-stage DSP pipeline that combines demodulation with arbitrary integration kernels, and multiple taps to inspect data flow for debugging and calibration. We will show system performance when putting it all together, including a latency budget for feedforward operations. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office Contract No. W911NF-10-1-0324.

  12. Advances in flexible optrode hardware for use in cybernetic insects

    Science.gov (United States)

    Register, Joseph; Callahan, Dennis M.; Segura, Carlos; LeBlanc, John; Lissandrello, Charles; Kumar, Parshant; Salthouse, Christopher; Wheeler, Jesse

    2017-08-01

    Optogenetic manipulation is widely used to selectively excite and silence neurons in laboratory experiments. Recent efforts to miniaturize the components of optogenetic systems have enabled experiments on freely moving animals, but further miniaturization is required for freely flying insects. In particular, miniaturization of high channel-count optical waveguides are needed for high-resolution interfaces. Thin flexible waveguide arrays are needed to bend light around tight turns to access small anatomical targets. We present the design of lightweight miniaturized optogenetic hardware and supporting electronics for the untethered steering of dragonfly flight. The system is designed to enable autonomous flight and includes processing, guidance sensors, solar power, and light stimulators. The system will weigh less than 200mg and be worn by the dragonfly as a backpack. The flexible implant has been designed to provide stimuli around nerves through micron scale apertures of adjacent neural tissue without the use of heavy hardware. We address the challenges of lightweight optogenetics and the development of high contrast polymer waveguides for this purpose.

  13. Development of the Sixty Watt Heat-Source hardware components

    International Nuclear Information System (INIS)

    McNeil, D.C.; Wyder, W.C.

    1995-01-01

    The Sixty Watt Heat Source is a nonvented heat source designed to provide 60 thermal watts of power. The unit incorporates a plutonium-238 fuel pellet encapsulated in a hot isostatically pressed General Purpose Heat Source (GPHS) iridium clad vent set. A molybdenum liner sleeve and support components isolate the fueled iridium clad from the T-111 strength member. This strength member serves as the pressure vessel and fulfills the impact and hydrostatic strength requirements. The shell is manufactured from Hastelloy S, which prevents the internal components from being oxidized. Conventional drawing operations were used to simplify processing and utilize existing equipment. The deep drawing requirements for the molybdenum, T-111, and Hastelloy S were developed from past heat source hardware fabrication experience. This resulted in multiple-step drawing processes with intermediate heat treatments between forming steps. The molybdenum processing included warm forming operations. This paper describes the fabrication of these components and the multiple draw tooling developed to produce hardware to the desired specifications. copyright 1995 American Institute of Physics

  14. MRI - From basic knowledge to advanced strategies: Hardware

    International Nuclear Information System (INIS)

    Carpenter, T.A.; Williams, E.J.

    1999-01-01

    There have been remarkable advances in the hardware used for nuclear magnetic resonance imaging scanners. These advances have enabled an extraordinary range of sophisticated magnetic resonance (MR) sequences to be performed routinely. This paper focuses on the following particular aspects: (a) Magnet system. Advances in magnet technology have allowed superconducting magnets which are low maintenance and have excellent homogeneity and very small stray field footprints. (b) Gradient system. Optimisation of gradient design has allowed gradient coils which provide excellent field for spatial encoding, have reduced diameter and have technology to minimise the effects of eddy currents. These coils can now routinely provide the strength and switching rate required by modern imaging methods. (c) Radio-frequency (RF) system. The advances in digital electronics can now provide RF electronics which have low noise characteristics, high accuracy and improved stability, which are all essential to the formation of excellent images. The use of surface coils has increased with the availability of phased-array systems, which are ideal for spinal work. (d) Computer system. The largest advance in technology has been in the supporting computer hardware which is now affordable, reliable and with performance to match the processing requirements demanded by present imaging sequences. (orig.)

  15. Hardware-Assisted System for Program Execution Security of SOC

    Directory of Open Access Journals (Sweden)

    Wang Xiang

    2016-01-01

    Full Text Available With the rapid development of embedded systems, the systems' security has become more and more important. Most embedded systems are at risk from a range of software attacks, such as buffer overflow attacks and Trojan viruses. In addition, with the rapid growth in the number of embedded systems and their wide application, hardware attacks on embedded systems are also increasing. This paper presents a new hardware-assisted security mechanism to protect the program's code and data, monitoring its normal execution. The mechanism mainly monitors three types of information: the start/end addresses of the program's basic blocks; the lightweight hash value of each basic block; and the address of the next basic block. These parameters are extracted through additional tools running on a PC. The information is stored in the security module. During normal program execution, the security module compares the real-time state of the program with the stored information. If an anomaly is detected, it triggers the appropriate security response, suspending the program and jumping to a specified location. The module has been tested and validated on a SOPC with an OR1200 processor. The experimental analysis shows that the proposed mechanism can defend against a wide range of common software and physical attacks with low performance penalties and minimal overheads.
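    The monitoring scheme described above can be illustrated with a small software sketch: a table of expected basic-block records (start/end address, lightweight hash, legal successors) is built offline, and a checker compares the observed execution against it. The block below is a minimal, hypothetical Python model of that flow, not the actual OR1200/SOPC implementation; the hash choice and record layout are assumptions made for illustration.

```python
import hashlib

# Offline step: build the reference table of basic blocks.
# Each input record: (start address, end address, code bytes, legal successor start addresses).
def build_reference(blocks):
    table = {}
    for start, end, code_bytes, successors in blocks:
        digest = hashlib.blake2s(code_bytes, digest_size=4).hexdigest()  # lightweight hash
        table[start] = (end, digest, set(successors))
    return table

# Runtime step: the "security module" checks each executed block against the table.
def check_trace(table, trace):
    for start, code_bytes, next_start in trace:
        if start not in table:
            return False, f"unknown block at {start:#x}"
        end, digest, successors = table[start]
        if hashlib.blake2s(code_bytes, digest_size=4).hexdigest() != digest:
            return False, f"code tampered in block {start:#x}"
        if next_start is not None and next_start not in successors:
            return False, f"illegal control transfer {start:#x} -> {next_start:#x}"
    return True, "ok"

if __name__ == "__main__":
    blocks = [(0x100, 0x10C, b"\x01\x02\x03", [0x110]),
              (0x110, 0x120, b"\x04\x05", [])]
    table = build_reference(blocks)
    trace = [(0x100, b"\x01\x02\x03", 0x110), (0x110, b"\x04\x05", None)]
    print(check_trace(table, trace))
```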

  16. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
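    The pixel-linking idea behind SIRDS generation can be sketched in a few lines of plain Python/NumPy. The sketch below omits hidden-surface removal and the texture-based GPU parallelisation described above, and the eye-separation and depth-scaling constants are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def render_sirds(depth, eye_sep=80, mu=1 / 3.0, rng=None):
    """depth: 2D array in [0, 1] (1 = nearest). Returns a random-dot stereogram."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = depth.shape
    img = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        same = np.arange(w)  # constraint map: pixel x must equal pixel same[x]
        for x in range(w):
            # stereo separation (parallax) for this depth value
            s = int(round(eye_sep * (1 - mu * depth[y, x]) / (2 - mu * depth[y, x])))
            left, right = x - s // 2, x - s // 2 + s
            if 0 <= left and right < w:
                same[right] = left  # link the two pixels seen by the two eyes
        row = np.zeros(w, dtype=np.uint8)
        for x in range(w):
            row[x] = row[same[x]] if same[x] != x else rng.integers(0, 2) * 255
        img[y] = row
    return img

if __name__ == "__main__":
    yy, xx = np.mgrid[0:128, 0:256]
    depth = (np.hypot(xx - 128, yy - 64) < 40).astype(float)  # a raised disc
    sirds = render_sirds(depth)
    print(sirds.shape, sirds.dtype)
```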

  17. Spinal fusion-hardware construct: Basic concepts and imaging review

    Science.gov (United States)

    Nouh, Mohamed Ragab

    2012-01-01

    The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially at their own institution. This is critical in evaluating the position of implants and potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods and reports on the best yield for each modality and how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential as they are the reference point for evaluating future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies, as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging. PMID:22761979

  18. Optimum SNR data compression in hardware using an Eigencoil array.

    Science.gov (United States)

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
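    The SNR benefit of the Eigencoil combination can be illustrated in software: decomposing the measured noise covariance into its eigenvectors defines whitened "eigen" channels in which a plain sum-of-squares reconstruction becomes SNR-optimal, and channel reduction amounts to keeping only the dominant virtual channels. The sketch below is a hypothetical NumPy illustration of that principle, not the in-line RF combiner hardware described in the record; the channel counts and noise model are assumed for the example.

```python
import numpy as np

def eigencoil_transform(noise_cov, n_keep=None):
    """Matrix mapping physical channels to whitened 'eigencoil' channels."""
    w, v = np.linalg.eigh(noise_cov)        # eigen-decomposition of the noise covariance
    order = np.argsort(w)[::-1]             # strongest eigen-channels first
    w, v = w[order], v[:, order]
    T = v / np.sqrt(w)                      # whitening: unit noise power per virtual channel
    return T[:, :n_keep] if n_keep else T

def sos(images):
    """Sum-of-squares combination over the channel axis (last axis)."""
    return np.sqrt(np.sum(np.abs(images) ** 2, axis=-1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_ch, n_pix = 8, 1000
    coil_imgs = rng.normal(size=(n_pix, n_ch)) + 1j * rng.normal(size=(n_pix, n_ch))
    noise = rng.normal(size=(5000, n_ch)) @ rng.normal(size=(n_ch, n_ch))  # correlated noise samples
    cov = np.cov(noise, rowvar=False)
    T = eigencoil_transform(cov, n_keep=4)  # "channel reduction" to 4 virtual channels
    combined = sos(coil_imgs @ T)
    print(combined.shape)
```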

  19. The FTK: A Hardware Track Finder for the ATLAS Trigger

    CERN Document Server

    Alison, J; Anderson, J; Andreani, A; Andreazza, A; Annovi, A; Antonelli, M; Atkinson, M; Auerbach, B; Baines, J; Barberio, E; Beccherle, R; Beretta, M; Biesuz, N V; Blair, R; Blazey, G; Bogdan, M; Boveia, A; Britzger, D; Bryant, P; Burghgrave, B; Calderini, G; Cavaliere, V; Cavasinni, V; Chakraborty, D; Chang, P; Cheng, Y; Cipriani, R; Citraro, S; Citterio, M; Crescioli, F; Dell'Orso, M; Donati, S; Dondero, P; Drake, G; Gadomski, S; Gatta, M; Gentsos, C; Giannetti, P; Giulini, M; Gkaitatzis, S; Howarth, J W; Iizawa, T; Kapliy, A; Kasten, M; Kim, Y K; Kimura, N; Klimkovich, T; Kordas, K; Korikawa, T; Krizka, K; Kubota, T; Lanza, A; Lasagni, F; Liberali, V; Li, H L; Love, J; Luciano, P; Luongo, C; Magalotti, D; Melachrinos, C; Meroni, C; Mitani, T; Negri, A; Neroutsos, P; Neubauer, M; Nikolaidis, S; Okumura, Y; Pandini, C; Penning, B; Petridou, C; Piendibene, M; Proudfoot, J; Rados, P; Roda, C; Rossi, E; Sakurai, Y; Sampsonidis, D; Sampsonidou, D; Schmitt, S; Schoening, A; Shochet, M; Shojaii, S; Soltveit, H; Sotiropoulou, C L; Stabile, A; Tang, F; Testa, M; Tompkins, L; Vercesi, V; Villa, M; Volpi, G; Webster, J; Wu, X; Yorita, K; Yurkewicz, A; Zeng, J C; Zhang, J

    2014-01-01

    The ATLAS experiment trigger system is designed to reduce the event rate, at the LHC design luminosity of 10^34 cm^-2 s^-1, from the nominal bunch crossing rate of 40 MHz to less than 1 kHz for permanent storage. During Run 1, the LHC performed exceptionally well, routinely exceeding the design luminosity. From 2015 the LHC is due to operate with still higher luminosities. This will place a significant load on the High Level Trigger system, both due to the need for more sophisticated algorithms to reject background, and from the larger data volumes that will need to be processed. The Fast TracKer is a hardware upgrade for Run 2, consisting of a custom electronics system that will operate at the full rate for Level-1 accepted events of 100 kHz and provide high quality tracks at the beginning of processing in the High Level Trigger. This will perform track reconstruction using hardware with massive parallelism using associative memories and FPGAs. The availability of the full tracking information will enable r...

  20. LISA Pathfinder: hardware tests and their input to the mission

    Science.gov (United States)

    Audley, Heather

    The Laser Interferometer Space Antenna (LISA) is a joint ESA-NASA mission for the first space-borne gravitational wave detector. LISA aims to detect sources in the 0.1 mHz to 1 Hz range, which include supermassive black holes and galactic binary stars. Core technologies required for the LISA mission, including drag-free test mass control, picometre interferometry and micro-Newton thrusters, cannot be tested on-ground. Therefore, a precursor satellite, LISA Pathfinder, has been developed as a technology demonstration mission. The preparations for the LISA Pathfinder mission have reached an exciting stage. Tests of the engineering model of the optical metrology system have recently been completed at the Albert Einstein Institute, Hannover, and flight model tests are now underway. Significantly, they represent the first complete integration and testing of the space-qualified hardware and are the first tests at system level. The results and test procedures of these campaigns will be utilised directly in the ground-based flight hardware tests, and subsequently within in-flight operations. In addition, they allow valuable testing of the data analysis methods using the MATLAB-based LTP data analysis toolbox. This contribution presents an overview of the test campaigns' calibration, control and performance results, focusing on the implications for the Experimental Master Plan which provides the basis for the in-flight operations and procedures.

  1. Transform coding for hardware-accelerated volume rendering.

    Science.gov (United States)

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
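    To make the idea concrete, the sketch below shows a toy block-based transform codec in NumPy: each block is transformed with a small DCT, quantized, and only the largest coefficients are kept, while decoding is a cheap dequantization plus inverse transform. It is a generic illustration of asymmetric block transform coding under assumed block size and quantization settings, not the authors' GPU decoder.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n (its transpose is the inverse)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def encode_block(block, C, q_step=0.05, keep=8):
    coeffs = C @ block @ C.T                 # 2D separable transform
    q = np.round(coeffs / q_step).astype(np.int16)
    thresh = np.sort(np.abs(q).ravel())[-keep]  # keep only the 'keep' largest coefficients
    q[np.abs(q) < thresh] = 0
    return q

def decode_block(q, C, q_step=0.05):
    return C.T @ (q * q_step) @ C            # dequantize + inverse transform (fast path)

if __name__ == "__main__":
    n = 8
    C = dct_matrix(n)
    block = np.outer(np.linspace(0, 1, n), np.linspace(1, 0, n))
    rec = decode_block(encode_block(block, C), C)
    print("max abs error:", np.max(np.abs(rec - block)))
```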

  2. Optimizing memory-bound SYMV kernel on GPU hardware accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2013-01-01

    Hardware accelerators are becoming ubiquitous in high performance scientific computing. They are capable of delivering an unprecedented level of concurrent execution contexts. High-level programming language extensions (e.g., CUDA) and profiling tools (e.g., PAPI-CUDA, CUDA Profiler) are paramount to improving productivity, while effectively exploiting the underlying hardware. We present an optimized numerical kernel for computing the symmetric matrix-vector product on nVidia Fermi GPUs. Due to its inherent memory-bound nature, this kernel is very critical in the tridiagonalization of a symmetric dense matrix, which is a preprocessing step to calculate the eigenpairs. Using a novel design to address the irregular memory accesses by hiding latency and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x speedups over the similar CUBLAS 4.0 kernel, and 7-8% and 30% improvements over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library in single and double precision arithmetic, respectively. © 2013 Springer-Verlag.
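    The kernel's key property, that a symmetric matrix-vector product needs only one triangle of the matrix, can be stated in a few lines of NumPy. The sketch below is a reference illustration of that arithmetic (reading only the lower triangle), not the Fermi-optimized CUDA kernel described in the record.

```python
import numpy as np

def symv_lower(A, x):
    """y = A @ x for symmetric A, touching only the lower triangle of A."""
    L = np.tril(A)                             # lower triangle including the diagonal
    return L @ x + L.T @ x - np.diag(A) * x    # diagonal is counted twice, subtract it once

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    B = rng.normal(size=(6, 6))
    A = (B + B.T) / 2                          # make a symmetric test matrix
    x = rng.normal(size=6)
    print(np.allclose(symv_lower(A, x), A @ x))  # True
```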

  3. The hardware track finder processor in CMS at CERN

    International Nuclear Information System (INIS)

    Kluge, A.

    1997-07-01

    The work covers the design of the Track Finder Processor in the high energy experiment CMS at CERN/Geneva. The task of this processor is to identify muons and to measure their transverse momentum. The Track Finder makes it possible to determine the physical relevance of each high-energy collision and to forward only interesting data to the data analysis units. Data from more than two hundred thousand detector cells are used to determine the location of muons and to measure their transverse momentum. Every 25 ns a new data set is generated. Measurement of the location and transverse momentum of the muons can be completed within 350 ns by using an ASIC. The classical method in high energy physics experiments is to employ a pattern comparison method, in which predefined patterns are compared to the found patterns. The high number of data channels and the complex requirements on the spatial detector resolution do not permit the use of a pattern comparison method. A so-called track-following algorithm was designed, which is able to assemble complete tracks through the whole detector starting from single track segments. Instead of storing a high number of track patterns, the problem is brought back to the algorithm level. Comprehensive simulations, employing the hardware simulation language VHDL, were conducted in order to optimize the algorithm and its hardware implementation. An FPGA (field-programmable gate array) prototype was designed. A feasibility study to implement the track finder processor employing ASICs was conducted. (author)

  4. Proposed hardware architectures of particle filter for object tracking

    Science.gov (United States)

    Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED

    2012-12-01

    In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured using VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
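    The first architecture's design choices, sequential importance resampling with a piecewise-linear weight function replacing the exponential, can be sketched in software as below. The motion/measurement models, the triangular weight approximation and its width are illustrative assumptions for a one-dimensional tracking example, not the VHDL implementation itself.

```python
import numpy as np

def piecewise_linear_weight(err, width=3.0):
    """Triangular approximation of a Gaussian-like likelihood (cheap in hardware)."""
    return np.maximum(0.0, 1.0 - np.abs(err) / width) + 1e-12

def sirf_step(particles, z, rng, q=0.5, width=3.0):
    # 1) sample: propagate particles through a random-walk motion model
    particles = particles + rng.normal(scale=q, size=particles.shape)
    # 2) weight: piecewise-linear likelihood of the measurement z
    w = piecewise_linear_weight(z - particles, width)
    w /= w.sum()
    estimate = np.sum(w * particles)
    # 3) resample (systematic), done sequentially in the first architecture
    positions = (rng.random() + np.arange(len(particles))) / len(particles)
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), len(particles) - 1)
    return particles[idx], estimate

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    particles = rng.normal(size=500)
    truth = 0.0
    for t in range(50):
        truth += 0.1                           # object drifts slowly
        z = truth + rng.normal(scale=0.5)      # noisy measurement
        particles, est = sirf_step(particles, z, rng)
    print("truth %.2f, estimate %.2f" % (truth, est))
```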

  5. Error Probability Analysis of Hardware Impaired Systems with Asymmetric Transmission

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2018-01-01

    Error probability study of hardware impaired (HWI) systems depends highly on the adopted model. Recent models have proved that the aggregate noise is equivalent to improper Gaussian signals. Therefore, considering the distinct noise nature and self-interfering (SI) signals, an optimal maximum likelihood (ML) receiver is derived. This renders the conventional minimum Euclidean distance (MED) receiver sub-optimal because it is based on the assumptions of ideal hardware transceivers and proper Gaussian noise in communication systems. Next, the average error probability performance of the proposed optimal ML receiver is analyzed and tight bounds and approximations are derived for various adopted systems, including transmitter and receiver I/Q imbalanced systems with or without transmitter distortions as well as transmitter-only or receiver-only impaired systems. Motivated by recent studies that shed light on the benefit of improper Gaussian signaling in mitigating the HWIs, asymmetric quadrature amplitude modulation or phase shift keying is optimized and adapted for transmission. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over the MED receiver, the tightness of the derived bounds, and the effectiveness of asymmetric transmission in dampening HWIs and improving overall system performance.

  6. A novel hardware implementation for detecting respiration rate using photoplethysmography.

    Science.gov (United States)

    Prinable, Joseph; Jones, Peter; Thamrin, Cindy; McEwan, Alistair

    2017-07-01

    Asthma is a serious public health problem. Continuous monitoring of breathing may offer an alternative way to assess disease status. In this paper we present a novel hardware implementation for the capture and storage of a photoplethysmography (PPG) signal. The LED duty cycle was altered to determine the effect on respiratory rate accuracy. The oximeter was mounted to the left index finger of ten healthy volunteers. The breathing rate derived from the oximeter was validated against a nasal airflow sensor. The duty cycle of a pulse oximeter was changed between 5%, 10% and 25% at a sample rate of 500 Hz. A PPG signal and reference signal were captured for each duty cycle. The PPG signals were post-processed in MATLAB to derive a respiration rate using an existing MATLAB toolbox. At a 25% duty cycle the RMSE was <2 breaths per minute for the top-performing algorithm. The RMSE increased to over 5 breaths per minute when the duty cycle was reduced to 5%. The power consumed by the hardware for a 5%, 10% and 25% duty cycle was 5.4 mW, 7.8 mW, and 15 mW respectively. For clinical assessment of respiratory rate, an RMSE of <2 breaths per minute is recommended. Further work is required to determine utility in asthma management. However, for non-clinical applications such as fitness tracking, lower accuracy may be sufficient to allow a reduced duty cycle setting.
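    A common software route from a raw PPG trace to a respiration-rate estimate is to isolate the slow respiratory modulation and find its dominant frequency. The sketch below (band-pass filtering around typical breathing frequencies, then spectral peak picking) is a generic, hypothetical illustration of that step, not the specific MATLAB toolbox algorithms evaluated in the paper; the synthetic signal and filter settings are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def respiration_rate_bpm(ppg, fs):
    """Estimate respiration rate (breaths/min) from a PPG segment sampled at fs Hz."""
    ppg = ppg - np.mean(ppg)
    # keep 0.1-0.5 Hz, i.e. roughly 6-30 breaths per minute
    sos = butter(2, [0.1, 0.5], btype="band", fs=fs, output="sos")
    resp = sosfiltfilt(sos, ppg)
    # dominant spectral peak inside the respiratory band
    freqs = np.fft.rfftfreq(len(resp), 1 / fs)
    spec = np.abs(np.fft.rfft(resp))
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spec[band])]

if __name__ == "__main__":
    fs = 500                                        # 500 Hz, as in the study
    t = np.arange(0, 60, 1 / fs)                    # a 60 s segment
    cardiac = np.sin(2 * np.pi * 1.2 * t)           # ~72 bpm pulse waveform
    baseline = 0.3 * np.sin(2 * np.pi * 0.25 * t)   # respiratory baseline at 15 breaths/min
    ppg = cardiac + baseline + 0.05 * np.random.default_rng(0).normal(size=len(t))
    print(round(respiration_rate_bpm(ppg, fs), 1))  # ~15.0
```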

  7. A hardware fast tracker for the ATLAS trigger

    Science.gov (United States)

    Asbah, Nedaa

    2016-09-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing at 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC has restarted at much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection, which is based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, for every Level-1 accepted event (100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. By providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of primary and secondary vertices, ensuring robust selections and improving the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.

  8. Error Probability Analysis of Hardware Impaired Systems with Asymmetric Transmission

    KAUST Repository

    Javed, Sidrah

    2018-04-26

    Error probability study of hardware impaired (HWI) systems depends highly on the adopted model. Recent models have proved that the aggregate noise is equivalent to improper Gaussian signals. Therefore, considering the distinct noise nature and self-interfering (SI) signals, an optimal maximum likelihood (ML) receiver is derived. This renders the conventional minimum Euclidean distance (MED) receiver sub-optimal because it is based on the assumptions of ideal hardware transceivers and proper Gaussian noise in communication systems. Next, the average error probability performance of the proposed optimal ML receiver is analyzed and tight bounds and approximations are derived for various adopted systems, including transmitter and receiver I/Q imbalanced systems with or without transmitter distortions as well as transmitter-only or receiver-only impaired systems. Motivated by recent studies that shed light on the benefit of improper Gaussian signaling in mitigating the HWIs, asymmetric quadrature amplitude modulation or phase shift keying is optimized and adapted for transmission. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over the MED receiver, the tightness of the derived bounds, and the effectiveness of asymmetric transmission in dampening HWIs and improving overall system performance.

  9. PCI hardware support in LIA-2 control system

    International Nuclear Information System (INIS)

    Bolkhovityanov, D.; Cheblakov, P.

    2012-01-01

    The control system of the LIA-2 accelerator is built on cPCI crates with x86-compatible processor boards running Linux. Slow electronics are connected via CAN bus, while fast electronics (4 MHz and 200 MHz fast ADCs and 200 MHz timers) are implemented as cPCI/PMC modules. Several ways to drive PCI control electronics in Linux were examined. Finally, a user-space driver approach was chosen. These drivers communicate with hardware via a small kernel module, which provides access to PCI BARs and to interrupt handling. This module was named USPCI (User-Space PCI access). This approach dramatically simplifies the creation of drivers, as opposed to kernel drivers, and provides high reliability (because only a tiny and thoroughly-debugged piece of code runs in the kernel). The LIA-2 accelerator was successfully commissioned, and the solution chosen has proven adequate and very easy to use. Besides, USPCI turned out to be a handy tool for examination and debugging of PCI devices directly from the command line. In this paper, the available approaches to working with PCI control hardware in Linux are considered, and the USPCI architecture is described. (authors)
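    The general user-space flavour of this approach can be illustrated with the standard Linux sysfs interface, where a PCI BAR is exposed as a resourceN file that can be memory-mapped from an ordinary process. The sketch below is a generic illustration of that mechanism only (the device address and register offset are placeholders), not the USPCI module itself, which additionally handles interrupts through its kernel part.

```python
import mmap
import os
import struct

def read_bar_register(pci_addr, bar=0, offset=0x0):
    """Read one 32-bit register from a PCI BAR via the sysfs 'resource' file (needs root)."""
    path = f"/sys/bus/pci/devices/{pci_addr}/resource{bar}"
    fd = os.open(path, os.O_RDWR | os.O_SYNC)
    try:
        size = os.fstat(fd).st_size
        with mmap.mmap(fd, size, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE) as bar_mem:
            (value,) = struct.unpack_from("<I", bar_mem, offset)  # little-endian 32-bit read
            return value
    finally:
        os.close(fd)

if __name__ == "__main__":
    # Placeholder bus address; substitute a real device from `lspci -D`.
    print(hex(read_bar_register("0000:03:00.0", bar=0, offset=0x0)))
```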

  10. Solar cooling in the hardware-in-the-loop test; Solare Kuehlung im Hardware-in-the-Loop-Test

    Energy Technology Data Exchange (ETDEWEB)

    Lohmann, Sandra; Radosavljevic, Rada; Goebel, Johannes; Gottschald, Jonas; Adam, Mario [Fachhochschule Duesseldorf (Germany). Erneuerbare Energien und Energieeffizienz E2

    2012-07-01

    The first part of the BMBF-funded research project 'Solar cooling in the hardware-in-the-loop test' (SoCool HIL) deals with the simulation of a solar refrigeration system using the simulation environment Matlab/Simulink with the toolboxes Stateflow and Carnot. Dynamic annual simulations and DoE-supported parameter variations were used to select meaningful system configurations, control strategies and dimensioning of components. The second part of this project deals with hardware-in-the-loop tests using the 17.5 kW absorption chiller of the company Yazaki Europe Limited (Hertfordshire, United Kingdom). For this, the chiller is operated on a test bench in order to emulate the behavior of the other system components (solar circuit with heat storage, recooling, buildings and cooling distribution/transfer). The chiller is controlled by a simulation of the system using MATLAB/Simulink/Carnot. Based on the knowledge of the real dynamic performance of the chiller, its simulation model can then be validated. Further tests are used to optimize the control of the chiller to the current cooling load. In addition, some changes in system configurations (for example cold backup) are tested with the real machine. The results of these tests and the findings on the dynamic performance of the chiller are presented.

  11. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    Full Text Available Abstract Background Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. Findings We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective

  12. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    Science.gov (United States)

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other
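    The core of MDR for a given pair of SNPs is simple enough to state directly: pool samples into genotype-combination cells, label each cell high- or low-risk by comparing its case/control ratio with the overall ratio, and score the resulting binary classifier. The sketch below is a plain NumPy illustration of that single evaluation step, without the cross-validation loop or the GPU parallelisation discussed above; the synthetic data and noise level are assumptions.

```python
import numpy as np

def mdr_balanced_accuracy(geno_a, geno_b, case):
    """Evaluate one SNP pair with MDR. Genotypes coded 0/1/2, case is a 0/1 label."""
    case = case.astype(bool)
    overall_ratio = case.sum() / max(1, (~case).sum())
    predicted = np.zeros_like(case)
    for ga in (0, 1, 2):
        for gb in (0, 1, 2):
            cell = (geno_a == ga) & (geno_b == gb)
            n_case, n_ctrl = (cell & case).sum(), (cell & ~case).sum()
            # high-risk cell if its case/control ratio exceeds the overall ratio
            if (n_ctrl == 0 and n_case > 0) or (n_ctrl > 0 and n_case / n_ctrl > overall_ratio):
                predicted[cell] = True
    sens = (predicted & case).sum() / max(1, case.sum())
    spec = (~predicted & ~case).sum() / max(1, (~case).sum())
    return 0.5 * (sens + spec)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 2000
    a, b = rng.integers(0, 3, n), rng.integers(0, 3, n)
    case = ((a == 2) ^ (b == 2)).astype(int)               # a purely epistatic (XOR-like) effect
    case = np.where(rng.random(n) < 0.1, 1 - case, case)   # add label noise
    print(round(mdr_balanced_accuracy(a, b, case), 3))
```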

  13. Multistage switching hardware and software implementations for student experiment purpose

    Science.gov (United States)

    Sani, A.; Suherman

    2018-02-01

    Current communication and internet networks are underpinned by the switching technologies that interconnect one network to the others. Students' understanding of networks relies on how well they grasp these theories; however, understanding theories without touching the reality may leave gaps in the overall knowledge. This paper reports the progress of a multistage switching design and implementation for student laboratory activities. The hardware and software designs are based on a three-stage Clos switching architecture with modular 2x2 switches, controlled by an Arduino microcontroller. The designed modules can also be extended to Batcher and banyan switches, and can work in both circuit- and packet-switching systems. The circuit analysis and simulation show that the blocking probability for each switch combination can be obtained by generating random or patterned traffic. The mathematical model and simulation analysis show a 16.4% difference in blocking probability when the traffic generation is uniform. The circuit design components and interfacing solutions have been identified to allow the next implementation step.
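    For the blocking-probability comparison mentioned above, a standard analytical reference is Lee's approximation for a three-stage Clos network, against which measured or simulated blocking can be checked. The sketch below computes it for an assumed switch geometry; the parameters are illustrative, not the exact 2x2-module design in the paper.

```python
def clos_blocking_lee(n, m, load):
    """Lee's approximation of blocking for a 3-stage Clos network.

    n    : inlets per first-stage switch
    m    : number of middle-stage switches
    load : offered load per inlet (probability an inlet is busy)
    """
    p = load * n / m                        # occupancy of each inter-stage link
    return (1.0 - (1.0 - p) ** 2) ** m      # probability that all m middle-stage paths are blocked

if __name__ == "__main__":
    # Strict non-blocking requires m >= 2n - 1 (Clos); fewer middle switches start to block.
    for m in (2, 3, 4):
        print(m, round(clos_blocking_lee(n=2, m=m, load=0.7), 4))
```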

  14. Cognon Neural Model Software Verification and Hardware Implementation Design

    Science.gov (United States)

    Haro Negre, Pau

    Little is known yet about how the brain can recognize arbitrary sensory patterns within milliseconds using neural spikes to communicate information between neurons. In a typical brain there are several layers of neurons, with each neuron axon connecting to ~10^4 synapses of neurons in an adjacent layer. The information necessary for cognition is contained in these synapses, which strengthen during the learning phase in response to newly presented spike patterns. Building on the model proposed in "Models for Neural Spike Computation and Cognition" by David H. Staelin and Carl H. Staelin, this study seeks to understand cognition from an information theoretic perspective and develop potential models for artificial implementation of cognition based on neuronal models. To do so we focus on the mathematical properties and limitations of spike-based cognition consistent with existing neurological observations. We validate the cognon model through software simulation and develop concepts for an optical hardware implementation of a network of artificial neural cognons.

  15. On the Achievable Rate of Hardware-Impaired Transceiver Systems

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2018-01-01

    In this paper, we accurately model the transceiver hardware impairments (HWIs) of multiple-input multiple-output (MIMO) systems considering different HWI stages at the transmitter and receiver. The proposed novel statistical model shows that transceiver HWIs transform the transmitted symmetric signal into an asymmetric one. Moreover, it shows that the aggregate self-interference has asymmetric characteristics. Therefore, we propose improper Gaussian signaling (IGS) for transmission in order to improve the achievable rate performance. IGS is considered a general signaling scheme which includes proper Gaussian signaling (PGS) as a special case. Thus, IGS has additional design parameters which enable it to mitigate the HWI self-interference. As a case study, we analyze the achievable rate performance of single-input multiple-output systems with linear and selection combiners. Furthermore, we optimize the IGS statistical characteristics for interference alignment. This improves the achievable rate performance compared to PGS, which is validated through numerical results.

  16. CT and MRI techniques for imaging around orthopedic hardware

    Energy Technology Data Exchange (ETDEWEB)

    Do, Thuy Duong; Skornitzke, Stephan; Weber, Marc-Andre [Heidelberg Univ. (Germany). Dept. of Clinical Radiology; Sutter, Reto [Uniklinik Balgrist, Zurich (Switzerland). Radiology

    2018-01-15

    Orthopedic hardware impairs image quality in cross-sectional imaging. With an increasing number of orthopedic implants in an aging population, the need to mitigate metal artifacts in computed tomography and magnetic resonance imaging is becoming increasingly relevant. This review provides an overview of the major artifacts in CT and MRI and state-of-the-art solutions to improve image quality. All steps of image acquisition from device selection, scan preparations and parameters to image post-processing influence the magnitude of metal artifacts. Technological advances like dual-energy CT with the possibility of virtual monochromatic imaging (VMI) and new materials offer opportunities to further reduce artifacts in CT and MRI. Dedicated metal artifact reduction sequences contain algorithms to reduce artifacts and improve imaging of surrounding tissue and are essential tools in orthopedic imaging to detect postoperative complications in early stages.

  17. Hardware authentication using transmission spectra modified optical fiber

    International Nuclear Information System (INIS)

    Grubbs, Robert K.; Romero, Juan A.

    2010-01-01

    The ability to authenticate the source and integrity of data is critical to the monitoring and inspection of special nuclear materials, including hardware related to weapons production. Current methods rely on electronic encryption/authentication codes housed in monitoring devices. This always invites the question of implementation and protection of authentication information in an electronic component, necessitating EMI shielding and possibly an on-board power source to maintain the information in memory. By using atomic layer deposition (ALD) techniques on photonic band gap (PBG) optical fibers, we will explore the potential to randomly manipulate the output spectrum and intensity of an input light source. This randomization could produce unique signatures authenticating devices, with the potential to authenticate data. An external light source projected through the fiber, with a spectrometer at the exit, would 'read' the unique signature. No internal power or computational resources would be required.

  18. Programming languages and compiler design for realistic quantum hardware

    Science.gov (United States)

    Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret

    2017-09-01

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  19. Impact of Improper Gaussian Signaling on Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salam S.; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, we accurately model the hardware impairments (HWI) as improper Gaussian signaling (IGS), which can characterize the asymmetric characteristics of different HWI sources. The proposed model encourages us to adopt the IGS scheme for the transmitted signal, which represents a more general study compared with the conventional scheme, proper Gaussian signaling (PGS). First, we express the achievable rate of HWI systems when both the PGS and IGS schemes are used and the aggregate effect of HWI is modeled as IGS. Moreover, we tune the IGS statistical characteristics to maximize the achievable rate. Then, we analyze the outage probability for both schemes and derive closed-form expressions. Finally, we validate the analytic expressions through numerical and simulation results. In addition, we quantify through the numerical results the performance degradation in the absence of ideal transceivers and the gain reaped from adopting the IGS scheme compared with the PGS scheme.

  20. Acquisition of reliable vacuum hardware for large accelerator systems

    International Nuclear Information System (INIS)

    Welch, K.M.

    1995-01-01

    Credible and effective communications prove to be the major challenge in the acquisition of reliable vacuum hardware. Technical competence is necessary but not sufficient. The authors must effectively communicate with management, sponsoring agencies, project organizations, service groups, staff and with vendors. Most of Deming's 14 quality assurance tenets relate to creating an enlightened environment of good communications. All projects progress along six distinct, closely coupled, dynamic phases. All six phases are in a state of perpetual change. These phases and their elements are discussed, with emphasis given to the acquisition phase and its related vocabulary. Large projects require great clarity and rigor, as poor communications can be costly. For rigor to be cost effective, it can't be pedantic. Clarity thrives best in a low-risk, team environment.

  1. HARDWARE IMPLEMENTATION OF SECURE AODV FOR WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    S. Sharmila

    2010-12-01

    Full Text Available Wireless Sensor Networks are extremely vulnerable to any kind of routing attack due to several factors such as wireless transmission and resource-constrained nodes. In this respect, securing the packets is of great importance when designing the infrastructure and protocols of sensor networks. This paper describes the hardware architecture of secure routing for wireless sensor networks. The routing path is selected using the Ad-hoc On-demand Distance Vector routing protocol (AODV). The data packets are converted into digests using hash functions. The functionality of the proposed method is modeled using Verilog HDL in the ModelSim simulator and the performance is compared with various target devices. The results show that the data packets are secured and defended against routing attacks with minimum energy consumption.
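    The digest step described above, converting each data packet into a hash before forwarding along the AODV route, can be modelled in a few lines of software. The sketch below uses a keyed hash (HMAC-SHA256) as an assumed, generic choice; the actual hash function and packet format in the hardware design may differ.

```python
import hashlib
import hmac

def make_digest(key: bytes, packet: bytes) -> bytes:
    """Attach a keyed digest so forwarding nodes can verify packet integrity."""
    return hmac.new(key, packet, hashlib.sha256).digest()

def verify(key: bytes, packet: bytes, digest: bytes) -> bool:
    """Constant-time comparison of the received digest with a freshly computed one."""
    return hmac.compare_digest(make_digest(key, packet), digest)

if __name__ == "__main__":
    key = b"shared-network-key"                          # placeholder pre-shared key
    pkt = b"src=07;dst=12;seq=42;payload=23.5C"
    tag = make_digest(key, pkt)
    print(verify(key, pkt, tag))                         # True
    print(verify(key, pkt.replace(b"23.5", b"99.9"), tag))  # False: tampered payload
```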

  2. Performance and system flexibility of the CDF Hardware Event Builder

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, T.M.; Schurecht, K. (Fermi National Accelerator Lab., Batavia, IL (United States)); Sinervo, P. (Toronto Univ., ON (Canada). Dept. of Physics)

    1991-11-01

    The CDF Hardware Event Builder (1) is a flexible system which is built from a combination of three different 68020-based single-width Fastbus modules. The system may contain as few as three boards or as many as fifteen, depending on the specific application. Functionally, the boards receive a command to read out the raw event data from a set of Fastbus-based data buffers ("scanners"), reformat the data and then write the data to a Level 3 trigger/processing farm which will decide to throw the event away or to write it to tape. The data acquisition system at CDF will utilize two nine-board systems which will allow an event rate of up to 35 Hz into the Level 3 trigger. This paper will present detailed performance factors, system and individual board architecture, and possible system configurations.

  3. Comparison of spike-sorting algorithms for future hardware implementation.

    Science.gov (United States)

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being the most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
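    The two methods singled out above are compact enough to state directly: the nonlinear energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1] for detection, and differences x[n] - x[n-d] at a few lags d for features. The sketch below is a minimal NumPy illustration with an assumed threshold constant and lag set, not the hardware implementation.

```python
import numpy as np

def neo(x):
    """Nonlinear (Teager) energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, c=8.0):
    """Threshold the NEO output at c times its mean (c is an assumed constant)."""
    psi = neo(x)
    return np.flatnonzero(psi > c * psi.mean())

def discrete_derivative_features(x, idx, lags=(1, 3, 7)):
    """Feature vector per detected sample: x[n] - x[n-d] for a few lags d."""
    return np.stack([x[idx] - x[idx - d] for d in lags], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = rng.normal(scale=0.1, size=5000)
    x[1000:1003] += [1.0, 2.5, 1.0]          # inject a crude spike
    idx = detect_spikes(x)
    print(idx[:5], discrete_derivative_features(x, idx).shape)
```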

  4. Hardware Testing for the Optical PAyload for Lasercomm Science (OPALS)

    Science.gov (United States)

    Slagle, Amanda

    2011-01-01

    Hardware for several subsystems of the proposed Optical PAyload for Lasercomm Science (OPALS), including the gimbal and avionics, was tested. Microswitches installed on the gimbal were evaluated to verify that their point of actuation would remain within the acceptable range even if the switches themselves move slightly during launch. An inspection of the power board was conducted to ensure that all power and ground signals were isolated, that polarized components were correctly oriented, and that all components were intact and securely soldered. Initial testing on the power board revealed several minor problems, but once they were fixed the power board was shown to function correctly. All tests and inspections were documented for future use in verifying launch requirements.

  5. Simple Approach For Induction Motor Control Using Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    József VÁSÁRHELYI

    2002-12-01

    Full Text Available The paper deals with rotor-field-oriented vector control structures for induction motor drives fed by the so-called tandem frequency converter. It is composed of two different types of DC-link converters connected in a parallel arrangement. The larger-power one has current-source character and operates synchronized in time and amplitude with the stator currents. The other one has voltage-source character and is the actuator of the motor control system. The drive is also able to run with a partially failed tandem converter, if the control strategy corresponds to the actual operating mode. Reconfigurable hardware implemented in configurable logic cells enables changing the vector-control structure. The proposed control schemes were tested by simulation based on a Matlab-Simulink model.

  6. On the Achievable Rate of Hardware-Impaired Transceiver Systems

    KAUST Repository

    Javed, Sidrah

    2018-01-15

    In this paper, we accurately model the transceiver hardware impairments (HWIs) of multiple-input multiple-output (MIMO) systems considering different HWI stages at the transmitter and receiver. The proposed novel statistical model shows that transceiver HWIs transform the transmitted symmetric signal into an asymmetric one. Moreover, it shows that the aggregate self-interference has asymmetric characteristics. Therefore, we propose improper Gaussian signaling (IGS) for transmission in order to improve the achievable rate performance. IGS is considered a general signaling scheme which includes proper Gaussian signaling (PGS) as a special case. Thus, IGS has additional design parameters which enable it to mitigate the HWI self-interference. As a case study, we analyze the achievable rate performance of single-input multiple-output systems with linear and selection combiners. Furthermore, we optimize the IGS statistical characteristics for interference alignment. This improves the achievable rate performance compared to PGS, which is validated through numerical results.

  7. Impact of Improper Gaussian Signaling on Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah

    2016-12-18

    In this paper, we accurately model the hardware impairments (HWI) as improper Gaussian signaling (IGS), which can characterize the asymmetric characteristics of different HWI sources. The proposed model encourages us to adopt the IGS scheme for the transmitted signal, which represents a more general study compared with the conventional scheme, proper Gaussian signaling (PGS). First, we express the achievable rate of HWI systems when both the PGS and IGS schemes are used and the aggregate effect of HWI is modeled as IGS. Moreover, we tune the IGS statistical characteristics to maximize the achievable rate. Then, we analyze the outage probability for both schemes and derive closed-form expressions. Finally, we validate the analytic expressions through numerical and simulation results. In addition, we quantify through the numerical results the performance degradation in the absence of ideal transceivers and the gain reaped from adopting the IGS scheme compared with the PGS scheme.

  8. Using EPICS enabled industrial hardware for upgrading control systems

    International Nuclear Information System (INIS)

    Bjorkland, Eric A.; Veeramani, Arun; Debelle, Thierry

    2009-01-01

    Los Alamos National Laboratory has been working with National Instruments (NI) and Cosylab to implement EPICS Input Output Controller (IOC) software that runs directly on the NI CompactRIO Real-Time Controller (RTC) and communicates with NI LabVIEW through a shared memory interface. In this presentation, we will discuss our current progress in upgrading the control system at the Los Alamos Neutron Science Centre (LANSCE) and what we have learned about integrating CompactRIO into large experimental physics facilities. We will also discuss the implications of using the Channel Access Server for LabVIEW, which will enable more commercial hardware platforms to be used in upgrading existing facilities or in commissioning new ones.

  9. Magnetic Gimbal Proof-of-Concept Hardware performance results

    Science.gov (United States)

    Stuart, Keith O.

    1993-01-01

    The Magnetic Gimbal Proof-of-Concept Hardware activities, accomplishments, and test results are discussed. The Magnetic Gimbal Fabrication and Test (MGFT) program addressed the feasibility of using a magnetic gimbal to isolate an Electro-Optical (EO) sensor from the severe angular vibrations induced during the firing of divert and attitude control system (ACS) thrusters during space flight. The MGFT effort was performed in parallel with the fabrication and testing of a mechanically gimballed, flex pivot based isolation system by the Hughes Aircraft Missile Systems Group. Both servo systems supported identical EO sensor assembly mockups to facilitate direct comparison of performance. The results obtained from the MGFT effort indicate that the magnetic gimbal exhibits the ability to provide significant performance advantages over alternative mechanically gimballed techniques.

  10. Architecture and development of the CDF hardware event builder

    International Nuclear Information System (INIS)

    Shaw, T.M.; Booth, A.W.; Bowden, M.

    1989-01-01

    A hardware Event Builder (EVB) has been developed for use at the Collider Detector experiment at the Fermi National Accelerator Laboratory (CDF). The Event Builder presently consists of five FASTBUS modules and has the task of reading out the front-end scanners, reformatting the data into the YBOS bank structure, and transmitting the data to a Level 3 (L3) trigger system which is composed of multiple VME processing nodes. The Event Builder receives its instructions from a VAX-based Buffer Manager (BFM) program via a Unibus Processor Interface (UPI). The Buffer Manager instructs the Event Builder to read out one of the four CDF front-end buffers. The Event Builder then informs the Buffer Manager when the event has been formatted and is then instructed to push it up to the L3 trigger system. Once in the L3 system, a decision is made as to whether to write the event to tape.

  11. Hardware-in-the-loop grid simulator system and method

    Science.gov (United States)

    Fox, John Curtiss; Collins, Edward Randolph; Rigas, Nikolaos

    2017-05-16

    A hardware-in-the-loop (HIL) electrical grid simulation system and method that combines a reactive divider with a variable frequency converter to better mimic and control expected and unexpected parameters in an electrical grid. The invention provides grid simulation in a manner to allow improved testing of variable power generators, such as wind turbines, and their operation once interconnected with an electrical grid in multiple countries. The system further comprises an improved variable fault reactance (reactive divider) capable of providing a variable fault reactance power output to control a voltage profile, therein creating an arbitrary recovery voltage. The system further comprises an improved isolation transformer designed to isolate zero-sequence current from either a primary or secondary winding in a transformer or pass the zero-sequence current from a primary to a secondary winding.

  12. Operation and Monitoring of the CMS Regional Calorimeter Trigger Hardware

    CERN Document Server

    Klabbers, P

    2008-01-01

    The electronics for the Regional Calorimeter Trigger (RCT) of the Compact Muon Solenoid Experiment (CMS) have been produced, tested, and installed. The RCT hardware consists of one clock distribution crate and 18 double-sided crates containing custom boards, ASICs, and backplanes. The RCT receives 8-bit energies and a data quality bit from the HCAL and ECAL Trigger Primitive Generators (TPGs) and sends them to the CMS Global Calorimeter Trigger (GCT) after processing. Integration tests with the TPG and GCT subsystems have been successful. Installation is complete and the RCT is integrated into the Level-1 Trigger chain. Data taking has begun using detector noise, cosmic rays, proton-beam debris, and beam-halo muons. The operation and configuration of the RCT is a completely automated process. The tools to monitor, operate, and debug the RCT are mature and will be described in detail, as well as the results from data taking with the RCT.

  13. 4273π: bioinformatics education on low cost ARM hardware.

    Science.gov (United States)

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  14. Programming languages and compiler design for realistic quantum hardware.

    Science.gov (United States)

    Chong, Frederic T; Franklin, Diana; Martonosi, Margaret

    2017-09-13

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  15. Hardware accelerator design for tracking in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis: they need to detect interesting moving objects, track them from frame to frame, and analyse the object tracks in real time. Therefore, the use of real-time tracking is prominent in smart cameras. A software implementation of the tracking algorithm on a general-purpose processor (such as a PowerPC) achieves only a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated in VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200 grayscale video.

  16. Hardware accelerator design for change detection in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart cameras are important components in human-computer interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions and select frames with significant changes to minimize communication and processing overhead. Among the many algorithms for change detection, one based on a clustering scheme was proposed for smart camera systems. However, such an algorithm achieves only a low frame rate, far from real-time requirements, on the general-purpose processors (such as the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time using the clustering-based change-detection scheme. The system is designed and simulated in VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA grayscale video.
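
    The clustering-based change-detection scheme named above keeps, for each pixel, a small set of intensity clusters and flags a frame as significant when many pixels no longer match any of their clusters. A minimal software sketch of that idea follows (written in Python/NumPy purely for illustration; the array shapes, tolerance and update rate are assumptions, not the parameters of the accelerator described in this record):

        import numpy as np

        def detect_change(frame, centroids, tol=20.0, alpha=0.05, frac=0.01):
            """frame: (H, W) grayscale; centroids: (H, W, K) per-pixel intensity clusters."""
            dist = np.abs(centroids - frame[..., None])              # distance to every cluster
            nearest = dist.argmin(axis=-1)                           # closest cluster per pixel
            best = np.take_along_axis(dist, nearest[..., None], axis=-1)[..., 0]
            changed = best > tol                                     # no cluster explains the pixel

            # adapt the matched cluster with a running mean (background update)
            matched = np.take_along_axis(centroids, nearest[..., None], axis=-1)[..., 0]
            updated = (1.0 - alpha) * matched + alpha * frame
            np.put_along_axis(centroids, nearest[..., None], updated[..., None], axis=-1)

            # where nothing matched, overwrite the worst cluster with the new pixel value
            worst = dist.argmax(axis=-1)
            new_val = np.where(changed, frame,
                               np.take_along_axis(centroids, worst[..., None], axis=-1)[..., 0])
            np.put_along_axis(centroids, worst[..., None], new_val[..., None], axis=-1)

            # the frame is "significant" if enough pixels changed
            return changed.mean() > frac, changed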

  17. An Overview of Reconfigurable Hardware in Embedded Systems

    Directory of Open Access Journals (Sweden)

    Wenyin Fu

    2006-09-01

    Full Text Available Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.

  18. GNSS CORS hardware and software enabling new science

    Science.gov (United States)

    Drummond, P.

    2009-12-01

    GNSS CORS networks are enabling new opportunities for science and public and private sector business. This paper will explore how the newest geodetic monitoring software and GNSS receiver hardware from Trimble Navigation Ltd are enabling new science. Technology trends and science opportunities will be explored. These trends include the installation of active GNSS control, automation of observations and processing, and the advantages of multi-observable and multi-constellation observations, all performed with the use of off the shelf products and industry standard open-source data formats. Also the possibilities with moving science from an after-the-fact postprocessed model to a real-time epoch-by-epoch solution will be explored. This presentation will also discuss the combination of existing GNSS CORS networks with project specific installations used for monitoring. Experience is showing GNSS is able to provide higher resolution data than previous methods, providing new tools for science, decision makers and financial planners.

  19. Three Realizations and Comparison of Hardware for Piezoresistive Tactile Sensors

    Science.gov (United States)

    Vidal-Verdú, Fernando; Oballe-Peinado, Óscar; Sánchez-Durán, José A.; Castellanos-Ramos, Julián; Navas-González, Rafael

    2011-01-01

    Tactile sensors are basically arrays of force sensors that are intended to emulate the skin in applications such as assistive robotics. Local electronics are usually implemented to reduce errors and interference caused by long wires. Realizations based on standard microcontrollers, Programmable Systems on Chip (PSoCs) and Field Programmable Gate Arrays (FPGAs) have been proposed by the authors for the case of piezoresistive tactile sensors. The solution employing FPGAs is especially relevant since their performance is closer to that of Application Specific Integrated Circuits (ASICs) than that of the other devices. This paper presents an implementation of such an idea for a specific sensor. For the purpose of comparison, the circuitry based on the other devices is also made for the same sensor. This paper discusses the implementation issues, provides details regarding the design of the hardware based on the three devices and compares them. PMID:22163797

  20. Hardware and software for physical assessment work and health students

    Directory of Open Access Journals (Sweden)

    Олександр Юрійович Азархов

    2016-11-01

    Full Text Available The hardware and software used to assess students' health by means of information technology are described in this article in the form of a PEAC (physical efficiency assessment channel). A list of the diseases that students most often suffer from was prepared, and for these a minimum set of informative primary biosignals was selected. The structural scheme of the PEAC is presented, together with the ways to form and calculate the secondary parameters used to evaluate students' health. The resulting criteria, indices, indicators and parameters are grouped in a separate table for ease of use. The list requires choosing vital-activity parameters to be used as criteria for primary express diagnostics of the health state, based on indicators such as the electrocardiogram, photoplethysmogram, spirogram, blood pressure, body mass and height, and dynamometry. These qualitative indicators should be supplemented with measurement methods that provide a quantitative component for each indicator. This approach makes it possible to obtain assessments of students' health with the desired properties. The channel for assessing a student's physical state, together with the channel for comprehensive evaluation of activity and a decision-support subsystem, ensures assessment of the student's health across all aspects of his or her activity and professional training, thereby creating an adequate algorithm of behaviour that provides maximum health, longevity and professional activity. The basic requirements for the hardware are: a minimum number of information-measuring channels; high noise immunity of the information-measuring channels; comfort that does not interfere with the normal activity of a student; small dimensions, weight and power consumption; and simplicity, in some cases with service authorization.

  1. 2D to 3D conversion implemented in different hardware

    Science.gov (United States)

    Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli

    2015-02-01

    Conversion of available 2D data for release as 3D content is a hot topic for providers and, in general, for the success of 3D applications. It relies entirely on virtual synthesis of a second view from the original 2D video. Disparity map (DM) estimation is a central task in 3D generation but remains a very difficult problem for rendering novel images precisely. There are different approaches to DM reconstruction, among them manual and semiautomatic methods that can produce high-quality DMs but are time consuming and computationally expensive. In this paper, several hardware implementations of the designed framework for automatic 3D color video generation from a real 2D video sequence are proposed. The framework includes simultaneous processing of stereo pairs using the following blocks: CIE L*a*b* color space conversion, color segmentation by k-means on the a*b* color plane, DM estimation using pyramidal stereo matching between left and right images (or neighboring frames in a video), adaptive post-filtering, and finally anaglyph 3D scene generation. The novel technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink module on a PC with Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The processing times and the mean Structural Similarity Index Measure (SSIM) and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
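
    Two of the pipeline blocks enumerated above, the CIE L*a*b* conversion with k-means segmentation on the a*b* plane and the final anaglyph synthesis, are easy to illustrate in software. A hedged sketch follows (scikit-image and scikit-learn are assumed as convenience libraries; the cluster count and the red-cyan channel assignment are illustrative choices, not the authors' exact settings):

        import numpy as np
        from skimage import color              # assumed: provides rgb2lab
        from sklearn.cluster import KMeans     # assumed: provides k-means clustering

        def segment_ab_plane(rgb, n_clusters=8):
            """Color segmentation by k-means on the a*b* plane of CIE L*a*b*."""
            lab = color.rgb2lab(rgb)
            ab = lab[..., 1:3].reshape(-1, 2)
            labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit_predict(ab)
            return labels.reshape(rgb.shape[:2])

        def red_cyan_anaglyph(left_rgb, right_rgb):
            """Anaglyph 3D scene generation: red from the left view, green/blue from the right."""
            out = right_rgb.copy()
            out[..., 0] = left_rgb[..., 0]
            return out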

  2. An integrable low-cost hardware random number generator

    Science.gov (United States)

    Ranasinghe, Damith C.; Lim, Daihyun; Devadas, Srinivas; Jamali, Behnam; Zhu, Zheng; Cole, Peter H.

    2005-02-01

    A hardware random number generator is different from a pseudo-random number generator; a pseudo-random number generator approximates the assumed behavior of a real hardware random number generator. Simple pseudo-random number generators suffice for most applications; however, demanding situations such as the generation of cryptographic keys require an efficient and cost-effective source of random numbers. Arbiter-based Physical Unclonable Functions (PUFs), proposed for physical authentication of ICs, exploit statistical delay variation of wires and transistors across integrated circuits, as a result of process variations, to build a secret key unique to each IC. Experimental results and theoretical studies show that a sufficient amount of variation exists across ICs. This variation enables each IC to be identified securely. It is possible to exploit the unreliability of these PUF responses to build a physical random number generator. There exists measurement noise, which comes from the instability of an arbiter when it is in a racing condition, and there exist challenges whose responses are unpredictable. Without environmental variations, the responses to these challenges are random in repeated measurements. Compared to other physical random number generators, PUF-based random number generators can be a compact and low-power solution, since the generator need only be turned on when required. A 64-stage PUF circuit costs less than 1000 gates and can be implemented using standard IC manufacturing processes. In this paper we present a fast and efficient random number generator and analyse the quality of the random numbers produced using an array of tests used by the National Institute of Standards and Technology to evaluate the randomness of random number generators designed for cryptographic applications.
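
    The principle described above, harvesting the metastability noise of an arbiter in a racing condition, can be modelled in a few lines of software. The sketch below is only an illustration of the idea: the Gaussian noise model, the von Neumann debiasing step and the single monobit (frequency) test stand in for the real circuit and the full NIST test battery.

        import math
        import random

        def arbiter_bit(delay_diff=0.0, noise_sigma=1.0):
            """One arbiter decision: sign of a nearly balanced delay race plus noise."""
            return 1 if delay_diff + random.gauss(0.0, noise_sigma) > 0 else 0

        def von_neumann(bits):
            """Debias the raw stream: 01 -> 0, 10 -> 1, discard 00 and 11."""
            return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

        def monobit_p_value(bits):
            """Frequency (monobit) test as used in the NIST statistical test suite."""
            s = sum(1 if b else -1 for b in bits)
            return math.erfc(abs(s) / math.sqrt(2.0 * len(bits)))

        raw = [arbiter_bit() for _ in range(20000)]
        rng_bits = von_neumann(raw)
        print(len(rng_bits), "bits, monobit p-value =", monobit_p_value(rng_bits))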

  3. Software and Hardware Infrastructure for Research in Electrophysiology

    Directory of Open Access Journals (Sweden)

    Roman eMouček

    2014-03-01

    Full Text Available As in other areas of experimental science, operation of an electrophysiological laboratory, design and performance of electrophysiological experiments, collection, storage and sharing of experimental data and metadata, analysis and interpretation of these data, and publication of results are time consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, the Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the whole architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download and search data and metadata from electrophysiological experiments. The data model, domain ontology and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing of electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized; the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storing and a hardware stimulator communicating with an EEG amplifier and recording software.

  4. Software and hardware infrastructure for research in electrophysiology.

    Science.gov (United States)

    Mouček, Roman; Ježek, Petr; Vařeka, Lukáš; Rondík, Tomáš; Brůha, Petr; Papež, Václav; Mautner, Pavel; Novotný, Jiří; Prokop, Tomáš; Stěbeták, Jan

    2014-01-01

    As in other areas of experimental science, operation of an electrophysiological laboratory, design and performance of electrophysiological experiments, collection, storage and sharing of experimental data and metadata, analysis and interpretation of these data, and publication of results are time consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, the Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the whole architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download and search data and metadata from electrophysiological experiments. The data model, domain ontology and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing of electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized; the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storing and a hardware stimulator communicating with an EEG amplifier and recording software.

  5. Computational comparison of protein structures: application to the study of a carboxypeptidase inhibitor as an antitumor agent

    OpenAIRE

    Mas Benavente, José Manuel

    2001-01-01

    Available from TDX; title taken from the digitized cover page. The general objective of this thesis falls within a broader protein engineering project that seeks to analyse and redesign the structure, folding pathway, natural function and biotechnological applications of a protein, PCI (Potato Carboxypeptidase Inhibitor). The structural characteristics of this protein, essentially its disulfide bridges, prompted us to undertake a general study of proteins ...

  6. Desenvolvimento de hardware reconfigurável de criptografia assimétrica

    Directory of Open Access Journals (Sweden)

    Otávio Souza Martins Gomes

    2015-01-01

    Full Text Available This article presents the partial results of the development of a reconfigurable hardware interface for asymmetric cryptography that allows secure data exchange. Reconfigurable hardware permits this kind of device to be developed with security and flexibility, and makes it possible to change design features quickly and at low cost. Keywords: Cryptography. Hardware. ElGamal. FPGA. Security. Development of an asymmetric cryptography reconfigurable hardware. ABSTRACT: This paper presents some conclusions and choices about the development of a reconfigurable hardware interface for asymmetric cryptography that allows safe data communication. Reconfigurable hardware allows this kind of device to be developed with safety and flexibility, and offers the possibility of changing some features quickly and at low cost. Keywords: Cryptography. Hardware. ElGamal. FPGAs. Security.
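
    For reference, the ElGamal scheme named in the keywords operates in a multiplicative group modulo a prime. A minimal software sketch of key generation, encryption and decryption follows (the 64-bit prime and generator below are toy values chosen only to make the example self-contained; they are far too small for real security and are not taken from the paper):

        import secrets

        p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a prime; illustrative only, insecure in practice
        g = 5

        def keygen():
            x = secrets.randbelow(p - 2) + 1          # private key in [1, p-2]
            return x, pow(g, x, p)                    # (private, public = g^x mod p)

        def encrypt(public_y, m):
            k = secrets.randbelow(p - 2) + 1          # fresh ephemeral key per message
            return pow(g, k, p), (m * pow(public_y, k, p)) % p

        def decrypt(private_x, c1, c2):
            s = pow(c1, private_x, p)                 # shared secret c1^x = g^(kx)
            return (c2 * pow(s, p - 2, p)) % p        # divide by s using Fermat's little theorem

        x, y = keygen()
        c1, c2 = encrypt(y, 123456789)
        assert decrypt(x, c1, c2) == 123456789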

  7. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system

    Directory of Open Access Journals (Sweden)

    Daniel Brüderle

    2009-06-01

    Full Text Available Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.
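
    The simulator-independent description advocated above was realized as the PyNN API, so the same experiment script can target either a software simulator or the neuromorphic hardware backend. A minimal sketch of such a unified description follows (the NEST backend, cell model and all parameter values are illustrative, and the exact call signatures differ somewhat between PyNN versions):

        # Swapping the backend module, e.g. a hardware backend instead of pyNN.nest,
        # is the only change needed to move the experiment between platforms.
        import pyNN.nest as sim

        sim.setup(timestep=0.1)

        stimulus = sim.Population(20, sim.SpikeSourcePoisson(rate=30.0))
        neurons = sim.Population(10, sim.IF_cond_exp())
        sim.Projection(stimulus, neurons, sim.AllToAllConnector(),
                       synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0))

        neurons.record("spikes")
        sim.run(500.0)                                    # biological milliseconds
        spiketrains = neurons.get_data().segments[0].spiketrains
        sim.end()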

  8. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.

    Science.gov (United States)

    Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz

    2009-01-01

    Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.

  9. Removal of symptomatic craniofacial titanium hardware following craniotomy: Case series and review

    Directory of Open Access Journals (Sweden)

    Sheri K. Palejwala

    2015-06-01

    Full Text Available Titanium craniofacial hardware has become commonplace for reconstruction and bone flap fixation following craniotomy. Complications of titanium hardware include palpability, visibility, infection, exposure, pain, and hardware malfunction, which can necessitate hardware removal. We describe three patients who underwent craniofacial reconstruction following craniotomies for trauma with post-operative courses complicated by medically intractable facial pain. All three patients subsequently underwent removal of the symptomatic craniofacial titanium hardware and experienced rapid resolution of their painful paresthesias. Symptomatic plates were found in the region of the frontozygomatic suture or MacCarty keyhole, or in close proximity to the supraorbital nerve. Titanium plates, though relatively safe and low profile, can cause local nerve irritation or neuropathy. Surgeons should be cognizant of the potential complications of titanium craniofacial hardware and of locations that are at higher risk for becoming symptomatic, necessitating a second surgery for removal.

  10. 15 MW HArdware-in-the-loop Grid Simulation Project

    Energy Technology Data Exchange (ETDEWEB)

    Rigas, Nikolaos [Clemson Univ., SC (United States); Fox, John Curtiss [Clemson Univ., SC (United States); Collins, Randy [Clemson Univ., SC (United States); Tuten, James [Clemson Univ., SC (United States); Salem, Thomas [Clemson Univ., SC (United States); McKinney, Mark [Clemson Univ., SC (United States); Hadidi, Ramtin [Clemson Univ., SC (United States); Gislason, Benjamin [Clemson Univ., SC (United States); Boessneck, Eric [Clemson Univ., SC (United States); Leonard, Jesse [Clemson Univ., SC (United States)

    2014-10-31

    The 15MW Hardware-in-the-loop (HIL) Grid Simulator project was to (1) design, (2) construct and (3) commission a state-of-the-art grid integration testing facility for testing of multi-megawatt devices through a ‘shared facility’ model open to all innovators to promote the rapid introduction of new technology in the energy market to lower the cost of energy delivered. The 15 MW HIL Grid Simulator project now serves as the cornerstone of the Duke Energy Electric Grid Research, Innovation and Development (eGRID) Center. This project leveraged the 24 kV utility interconnection and electrical infrastructure of the US DOE EERE funded WTDTF project at the Clemson University Restoration Institute in North Charleston, SC. Additionally, the project has spurred interest from other technology sectors, including large PV inverter and energy storage testing and several leading edge research proposals dealing with smart grid technologies, grid modernization and grid cyber security. The key components of the project are the power amplifier units capable of providing up to 20MW of defined power to the research grid. The project has also developed a one of a kind solution to performing fault ride-through testing by combining a reactive divider network and a large power converter into a hybrid method. This unique hybrid method of performing fault ride-through analysis will allow for the research team at the eGRID Center to investigate the complex differences between the alternative methods of performing fault ride-through evaluations and will ultimately further the science behind this testing. With the final goal of being able to perform HIL experiments and demonstration projects, the eGRID team undertook a significant challenge with respect to developing a control system that is capable of communicating with several different pieces of equipment with different communication protocols in real-time. The eGRID team developed a custom fiber optical network that is based upon FPGA

  11. Hardware replacements and software tools for digital control computers

    International Nuclear Information System (INIS)

    Walker, R.A.P.; Wang, B-C.; Fung, J.

    1996-01-01

    Technological obsolescence is an on-going challenge for all computer use. By design, and to some extent good fortune, AECL has had a good track record with respect to the march of obsolescence in CANDU digital control computer technology. Recognizing obsolescence as a fact of life, AECL has undertaken a program of supporting the digital control technology of existing CANDU plants. Other AECL groups are developing complete replacement systems for the digital control computers, and more advanced systems for the digital control computers of the future CANDU reactors. This paper presents the results of the efforts of AECL's DCC service support group to replace obsolete digital control computer and related components and to provide friendlier software technology related to the maintenance and use of digital control computers in CANDU. These efforts are expected to extend the current lifespan of existing digital control computers through their mandated life. This group applied two simple rules; the product, whether new or replacement should have a generic basis, and the products should be applicable to both existing CANDU plants and to 'repeat' plant designs built using current design guidelines. While some exceptions do apply, the rules have been met. The generic requirement dictates that the product should not be dependent on any brand technology, and should back-fit to and interface with any such technology which remains in the control design. The application requirement dictates that the product should have universal use and be user friendly to the greatest extent possible. Furthermore, both requirements were designed to anticipate user involvement, modifications and alternate user defined applications. The replacements for hardware components such as paper tape reader/punch, moving arm disk, contact scanner and Ramtek are discussed. The development of these hardware replacements coincide with the development of a gateway system for selected CANDU digital control

  12. Development of Network Interface Cards for TRIDAQ systems with the NaNet framework

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Cicero, F. Lo; Lonardo, A.; Martinelli, M.; Paolucci, P.S.; Pastorelli, E.; Simula, F.; Valente, P.; Vicini, P.; Lorenzo, S. Di; Piandani, R.; Pontisso, L.; Sozzi, M.; Fiorini, M.; Neri, I.; Lamanna, G.; Rossetti, D.

    2017-01-01

    NaNet is a framework for the development of FPGA-based PCI Express (PCIe) Network Interface Cards (NICs) with real-time data transport architecture that can be effectively employed in TRIDAQ systems. Key features of the architecture are the flexibility in the configuration of the number and kind of the I/O channels, the hardware offloading of the network protocol stack, the stream processing capability, and the zero-copy CPU and GPU Remote Direct Memory Access (RDMA). Three NIC designs have been developed with the NaNet framework: NaNet-1 and NaNet-10 for the CERN NA62 low level trigger and NaNet 3 for the KM3NeT-IT underwater neutrino telescope DAQ system. We will focus our description on the NaNet-10 design, as it is the most complete of the three in terms of capabilities and integrated IPs of the framework.

  13. CADUB GHF: um programa computacional para cálculo da quantidade de fertilizantes e corretivos da acidez do solo para culturas produtoras de grãos, hortaliças e forrageiras CADUB GHF: a computer program to calculate fertilizer and lime needs for grain crops, horticulture and forages

    Directory of Open Access Journals (Sweden)

    Paulo Ivonir Gubiani

    2007-08-01

    Full Text Available Fertilizer and lime recommendations for crops in Rio Grande do Sul and Santa Catarina are based on official research data and are supported by soil analysis results, management history and technical experience. Some software packages have been developed as tools to assist technicians, but the recent modification of the recommendations demands the construction of new computer programs. This work describes a computer program whose objective is to make digital recommendations of fertilizers and soil acidity correctives, based on the information contained in the CQFS Fertilization and Liming Manual, for grain crops, vegetables and forages. The program was developed in Microsoft Excel®, with its main interface written in Visual Basic for Applications (VBA), and is available for download at http://coralx.ufsm.br/solos/cadub2.php. CADUB GHF provides the nitrogen, phosphorus and potassium (NPK) requirements for base and top-dressing fertilization and the lime requirement for grain crops, vegetables and forages. CADUB GHF generates a report containing the supplied and calculated data, presented as a file with the "xls" extension that can be printed and/or saved. The fertilizer and lime recommendation for the states of Rio Grande do Sul and Santa Catarina is based on official guidelines and is supported by soil testing results, management history and technical experience. Some software has been developed as a tool to help crop advisers recommend fertilizers; however, the recent modifications in the system demand that new computer programs be built to accomplish this. This paper reports on a computer program aimed at making recommendations, in digital form, of fertilizer and liming based on information from the official institutions' manual for grain crops, horticulture and forages. The program was developed in Microsoft

  14. Estudo de caso de peça moldada pelo processo de injeção-compressão para termoplásticos utilizando análise computacional Study of injection-compression molded part using CAE analysis

    Directory of Open Access Journals (Sweden)

    Thyago M. Kiam

    2007-03-01

    Full Text Available The processing of thermoplastics by injection molding is the main method for manufacturing plastic parts. Limitations of the conventional injection process, mainly regarding raw material and the configuration and operation of the available machines, make it unfeasible to produce products with a large projected area and small thickness, such as automotive windows and some types of lenses. At the same time, the injection process evolves continuously and a series of new technologies has been generated from the original process, among them the injection-compression process. In the present work, using computational analysis, the production of polycarbonate lenses was studied through two distinct processes: conventional injection and injection-compression. The sequence of studies basically involved the following points: study of the filling pattern, with consequent optimization of the injection-compression process regarding weld-line formation; study of the process window for both cases; and comparison of some parameters, mainly shear stress and clamping force, since these are limiting factors in the production of parts with a large projected area. The results for the studied case demonstrate a great advantage in using the injection-compression process.The injection-molding of thermoplastics is the main process used in the production of plastics parts. There are some limitations in the conventional injection process, especially related to raw materials, machine configuration and operation, which hamper fabrication of thin parts with large areas such as car windows and lenses. On the other hand, the process has been improved continuously with several new technologies, going beyond the conventional injection molding process, including the "injection-compression" process. In this paper, using CAE (computer aided engineering technology

  15. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover the hardware code generated by the HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  16. Analysis for Parallel Execution without Performing Hardware/Software Co-simulation

    OpenAIRE

    Muhammad Rashid

    2014-01-01

    Hardware/software co-simulation improves the performance of embedded applications by executing the applications on a virtual platform before the actual hardware is available in silicon. However, the virtual platform of the target architecture is often not available during early stages of the embedded design flow. Consequently, analysis for parallel execution without performing hardware/software co-simulation is required. This article presents an analysis methodology for parallel execution of ...

  17. Hardware based redundant multi-threading inside a GPU for improved reliability

    Science.gov (United States)

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
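
    A software analogue of the redundant multi-threading idea described above is to run several instances of the same computation on the same load and accept the result only when the instances agree. The sketch below illustrates the principle only; the thread pool and the unanimity check are stand-ins for the hardware mechanism of the patent, not a description of it.

        from collections import Counter
        from concurrent.futures import ThreadPoolExecutor

        def verified(compute, load, copies=3):
            """Run `copies` redundant instances of `compute` on a shared load and compare outputs."""
            with ThreadPoolExecutor(max_workers=copies) as pool:
                outputs = list(pool.map(lambda _: compute(load), range(copies)))
            winner, votes = Counter(outputs).most_common(1)[0]
            if votes < copies:
                raise RuntimeError("redundant instances disagree: possible transient fault")
            return winner

        result = verified(lambda xs: sum(x * x for x in xs), tuple(range(1000)))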

  18. Design of an expert system application for diagnosing computer hardware damage using the forward chaining method (PERANCANGAN APLIKASI SISTEM PAKAR DIAGNOSA KERUSAKAN HARDWARE KOMPUTER METODE FORWARD CHAINING)

    Directory of Open Access Journals (Sweden)

    Ali Akbar Rismayadi

    2016-09-01

    Full Text Available Abstract: Damage to computer hardware is not necessarily a disaster, because much of it can be repaired. Nearly all computer users, whether individuals or institutions, suffer various kinds of damage to the computer hardware they own, and that damage can be caused by many factors, while the user typically does not know what caused the hardware to fail. It is therefore necessary to build an application that can help users diagnose damage to computer hardware, so that anyone can diagnose the type of hardware damage on his or her computer. The expert system for diagnosing computer hardware damage was developed using the forward chaining method, combining a descriptive analysis of damage data obtained from several experts and other literature sources to reach a diagnostic conclusion. The waterfall model was used as the system development model, from the requirements analysis stage through to the software support stage. The application is built with the Eclipse ADT programming tools and SQLite as its database. This expert system for diagnosing computer hardware damage is expected to serve as a tool that helps users find the causes of hardware damage independently, without the help of a computer technician.
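
    Forward chaining, the inference method named in the title, repeatedly fires every rule whose conditions are satisfied by the known facts until no new facts (and hence no new diagnoses) can be derived. A minimal sketch of such an engine follows; the symptom/diagnosis rules are invented for illustration and are not taken from the paper's knowledge base.

        # Each rule: (set of required facts, fact concluded when they all hold)
        RULES = [
            ({"no_power", "fan_silent"}, "suspect_power_supply"),
            ({"beeps_on_boot", "no_display"}, "suspect_ram"),
            ({"suspect_power_supply"}, "diagnosis: test or replace the power supply"),
            ({"suspect_ram"}, "diagnosis: reseat or replace the memory modules"),
        ]

        def forward_chain(facts, rules=RULES):
            facts = set(facts)
            fired = True
            while fired:                      # keep firing until no rule adds a new fact
                fired = False
                for conditions, conclusion in rules:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        fired = True
            return {f for f in facts if f.startswith("diagnosis:")}

        print(forward_chain({"no_power", "fan_silent"}))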

  19. A Hybrid Hardware and Software Component Architecture for Embedded System Design

    Science.gov (United States)

    Marcondes, Hugo; Fröhlich, Antônio Augusto

    Embedded systems are increasing in complexity, while several metrics such as time-to-market, reliability, safety and performance should be considered during the design of such systems. A component-based design which enables the migration of its components between hardware and software can help to achieve such metrics. To enable that, we define hybrid hardware and software components as a development artifact that can be deployed by different combinations of hardware and software elements. In this paper, we present an architecture for developing such components in order to construct a repository of components that can migrate between the hardware and software domains to meet the design system requirements.

  20. Speed challenge: a case for hardware implementation in soft-computing

    Science.gov (United States)

    Daud, T.; Stoica, A.; Duong, T.; Keymeulen, D.; Zebulum, R.; Thomas, T.; Thakoor, A.

    2000-01-01

    For over a decade, JPL has been actively involved in soft computing research on theory, architecture, applications, and electronics hardware. The driving force in all our research activities, in addition to the potential enabling technology promise, has been creation of a niche that imparts orders of magnitude speed advantage by implementation in parallel processing hardware with algorithms made especially suitable for hardware implementation. We review our work on neural networks, fuzzy logic, and evolvable hardware with selected application examples requiring real time response capabilities.

  1. Hardware in the loop platform development for hybrid vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Wilhelm, E. [ETH Zurich, Zurich (Switzerland); Fowler, E.; Stevens, M.B. [Waterloo Univ., ON (Canada). Dept. of Chemical Engineering; Fraser, M.W. [Waterloo Univ., ON (Canada). Dept. of Mechanical Engineering

    2007-07-01

    This paper described a hardware-in-the-loop (HIL) validation simulation system designed to evaluate hybrid control strategies. The system was designed to reduce development costs and improve the safety of hybrid vehicle control systems. Model-based design processes for power trains typically include a series of processes to assess the real time and physical limitations of control systems prior to in-vehicle testing. The study used a 70 kW nickel metal hydride battery; a 67 kW 3-phase induction traction motor; and, a high voltage DC-DC converter within a fuel cell Chevrolet Equinox. Two physical vehicle controllers were used to interface with the virtual vehicle simulation in real time. System performance was monitored with a supervisory computer. A software in the loop (SIL) process was conducted to assess torque control and regenerative braking algorithm validation. An analysis of the controller code showed that a Simulink-native integrator block was updating too slowly. A custom integration term calculation was written. The charge control was then validated and tuned. It was concluded that use of the HIL system mitigated the risk of component damage through the identification and correction of unstable control logic. 10 refs., 2 tabs., 10 figs.
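
    The abstract does not give the form of the custom integration term that replaced the slow Simulink-native block, so the following is only a sketch of what an explicitly stepped discrete integrator (here trapezoidal, with an anti-windup clamp) typically looks like in such a fix:

        class DiscreteIntegrator:
            """Trapezoidal integrator evaluated explicitly once per controller step."""
            def __init__(self, dt, lower=-1.0, upper=1.0):
                self.dt, self.lower, self.upper = dt, lower, upper
                self.state, self.prev = 0.0, 0.0

            def step(self, error):
                self.state += 0.5 * self.dt * (error + self.prev)           # trapezoidal rule
                self.state = min(max(self.state, self.lower), self.upper)   # anti-windup clamp
                self.prev = error
                return self.state

        integrator = DiscreteIntegrator(dt=0.001)
        i_term = integrator.step(0.2)    # called once per control-loop iteration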

  2. The Hardware Topological Trigger of ATLAS: Commissioning and Operations

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226165; The ATLAS collaboration

    2018-01-01

    The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system with an output rate of 100 kHz and decision latency smaller than 2.5 μs. It consists of a calorimeter trigger, muon trigger and a central trigger processor. To improve the physics potential reach in ATLAS, during the LHC shutdown after Run 1, the Level-1 trigger system was upgraded at hardware, firmware and software level. In particular, a new electronics sub-system was introduced in the real-time data processing path: the Topological Processor System (L1Topo). It consists of a single AdvancedTCA shelf equipped with two Level-1 topological processor blades. For individual blades, real-time information from the calorimeter and muon Level-1 trigger systems is processed by four individual state-of-the-art FPGAs. It needs to deal with a large input bandwidth of up to 6 Tb/s, optical connectivity and low processing latency on the real-time data path. The L1Topo firmware applies measurements of angles between jets and/or leptons and several...

  3. Hardware-in-the-Loop emulator for a hydrokinetic turbine

    Science.gov (United States)

    Rat, C. L.; Prostean, O.; Filip, I.

    2018-01-01

    Hydroelectric power has proven to be an efficient and reliable form of renewable energy, but its impact on the environment has long been a source of concern. Hydrokinetic turbines are an emerging class of renewable energy technology designed for deployment in small rivers and streams with minimal environmental impact on the local ecosystem. Hydrokinetic technology represents a truly clean source of energy, having the potential to become a highly efficient method of harvesting renewable energy. However, in order to achieve this goal, extensive research is necessary. This paper presents a Hardware-in-the-Loop emulator for a run-of-the-river type hydrokinetic turbine. The HIL system uses an ABB ACS800 drive to control an induction machine as a means of replicating the behavior of the real turbine. The induction machine is coupled to a permanent magnet synchronous generator and the corresponding load. The ACS800 drive is controlled through the software system, which comprises the real-time simulation of the hydrokinetic turbine through mathematical modeling in the LabVIEW programming environment running on an NI CompactRIO (cRIO) platform. The advantage of this method is that it provides a means for testing many control configurations without requiring the presence of the real turbine. This paper contains the basic principles of a hydrokinetic turbine, particularly the run-of-the-river configurations, along with the experimental results obtained from the HIL system.

  4. UniBoard: generic hardware for radio astronomy signal processing

    Science.gov (United States)

    Hargreaves, J. E.

    2012-09-01

    UniBoard is a generic high-performance computing platform for radio astronomy, developed as a Joint Research Activity in the RadioNet FP7 Programme. The hardware comprises eight Altera Stratix IV Field Programmable Gate Arrays (FPGAs) interconnected by a high speed transceiver mesh. Each FPGA is connected to two DDR3 memory modules and three external 10Gbps ports. In addition, a total of 128 low voltage differential input lines permit connection to external ADC cards. The DSP capability of the board exceeds 644E9 complex multiply-accumulate operations per second. The first production run of eight boards was distributed to partners in The Netherlands, France, Italy, UK, China and Korea in May 2011, with further production runs completed in December 2011 and early 2012. The function of the board is determined by the firmware loaded into its FPGAs. Current applications include beamformers, correlators, digital receivers, RFI mitigation for pulsar astronomy, and pulsar gating and search machines. The new UniBoard-based correlator for the European VLBI Network (EVN) uses an FX architecture with half the resources of the board devoted to station-based processing: delay and phase correction and channelization, and half to the correlation function. A single UniBoard can process a 64MHz band from 32 stations, 2 polarizations, sampled at 8 bit. Adding more UniBoards can expand the total bandwidth of the correlator. The design is able to process both prerecorded and real-time (eVLBI) data.

  5. ORELA data acquisition system hardware. Volume 1: introduction

    International Nuclear Information System (INIS)

    Reynolds, J.W.

    1977-01-01

    The Oak Ridge Electron Linear Accelerator Facility (ORELA) has been specifically designed as a facility for neutron cross-section measurements by the time-of-flight technique. ORELA was designed so that a number of cross-section experiments can be performed simultaneously. This goal of simultaneous operation of several experiments, a maximum of six to date, has been achieved by using the multiple flight paths radiating from the target room, the multiple flight stations on each flight path, the laboratory facilities surrounding the central data area, and a shared data acquisition computer system. The flight stations contain the fast electronics for initial processing of the nuclear detector signals on a time scale of nanoseconds. The laboratories, and in some cases the flight stations, contain the equipment to digitize the nanosecond detector signals on a time scale of a few microseconds. At this point, the data passes into the ORELA Data Acquisition portion of the ORELA Data Handling System. An introduction to the ORELA Data Acquisition System is given, and the component parts of the system are briefly reviewed. Each specifically designed piece of hardware is briefly described with a simplified block diagram. Modifications to standard peripheral devices are reviewed. A list of drawings and programming notes are also included

  6. ATLAS TileCal LVPS Upgrade Hardware and Testing

    CERN Document Server

    Hibbard, Michael James; The ATLAS collaboration; Hadavand, Haleh Khani

    2018-01-01

    UTA (University of Texas at Arlington) has been designing and producing new testing stations to ensure the reliability and quality of new TileLVPS (Low Voltage Power Supplies), also produced at UTA, which will power the next generation of upgraded hardware in the TileCal (Tile Calorimeter) system of ATLAS at CERN. UTA has produced two new types of testing stations, which build upon the previous generation of testing stations used in the initial production of the TileCal system. The first station is the Initial Test Station, which quickly quantifies a multitude of performance metrics of an LVPS. We have developed our own PC-based program which graphically displays these metrics and records them to file. Notable metrics we are measuring include the system clock and its jitter. Excessive clock jitter in an LVPS can affect system stability and derate the working range of the system duty cycle. This station also verifies the protection circuitry of the LVPS, which protects it from overtemperature, overcurrent and overvoltage. The second...

  7. Hardware implementation of the ORNL fissile mass flow monitor

    International Nuclear Information System (INIS)

    McEvers, J.; Sumner, J.; Jones, R.; Ferrell, R.; Martin, C.; Uckan, T.; March-Leuba, J.

    1998-01-01

    This paper provides an overall description of the implementation of the Oak Ridge National Laboratory (ORNL) Fissile Mass Flow Monitor, which is part of a Blend Down Monitoring System (BDMS) developed by the US Department of Energy (DOE). The Fissile Mass Flow Monitor is designed to measure the mass flow of fissile material through a gaseous or liquid process stream. It consists of a source-modulator assembly, a detector assembly, and a cabinet that houses all control, data acquisition, and supporting electronics equipment. The development of this flow monitor was first funded by DOE/NE in September 95, and an initial demonstration by ORNL was described in previous INMM meetings. This methodology was chosen by DOE/NE for implementation in November 1996, and the hardware/software development is complete. Successful BDMS installation and operation of the complete BDMS has been demonstrated in the Paducah Gaseous Diffusion Plant (PGDP), which is operated by Lockheed Martin Utility Services, Inc. for the US Enrichment Corporation and regulated by the Nuclear Regulatory Commission. Equipment for two BDMS units has been shipped to the Russian Federation

  8. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, M; The ATLAS collaboration

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. The LHC will soon operate at a center-of-mass energy of 14 TeV and at high instantaneous luminosities of the order of $10^{34}$ to $10^{35}$ cm$^{-2}$ s$^{-1}$. A multi-level trigger strategy is used in ATLAS, with the first level (LVL1) implemented in hardware and the second and third levels (LVL2 and EF) implemented in a large computer farm. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous lumino...

  9. Commodity hardware and open source solutions in FTU data management

    International Nuclear Information System (INIS)

    Centioli, C.; Bracco, G.; Eccher, S.; Iannone, F.; Maslennikov, A.; Panella, M.; Vitale, V.

    2004-01-01

    Frascati Tokamak Upgrade (FTU) data management system underwent several developments in the last year, mainly due to the availability of huge amount of open source software and cheap commodity hardware. First of all, we replaced the old and expensive four SUN/SOLARIS servers running AFS (Andrew File System) fusione.it cell with three SuperServer Supermicro SC-742. Secondly Linux 2.4 OS has been installed on our new cell servers and OpenAFS 1.2.8 open source distributed file system has replaced the commercial IBM/Transarc AFS. A pioneering solution - SGI's XFS file system for Linux - has been adopted to format one terabyte of FTU storage system on which the AFS volumes are based. Benchmark tests have shown the good performances of XFS compared to the classical ext3 Linux file system. Third, the data access software has been ported to Linux, together with the interfaces to Matlab and IDL, as well as the locally developed data display utility, SHOX. Finally a new Object-Oriented Data Model (OODM) has been developed for FTU shots data to build and maintain a FTU data warehouse (DW). FTU OODM has been developed using ROOT, an object oriented data analysis framework well-known in high energy physics. Since large volumes of data are involved, a parallel data extraction process, developed in the ROOT framework, has been implemented taking advantage of the AFS distributed environment of FTU computing system

  10. Astronauts Prepare for Mission With Virtual Reality Hardware

    Science.gov (United States)

    2001-01-01

    Astronauts John M. Grunsfeld (left), STS-109 payload commander, and Nancy J. Currie, mission specialist, use the virtual reality lab at Johnson Space Center to train for upcoming duties aboard the Space Shuttle Columbia. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team to perform its duties for the fourth Hubble Space Telescope Servicing mission. The most familiar form of virtual reality technology is some form of headpiece, which fits over your eyes and displays a three dimensional computerized image of another place. Turn your head left and right, and you see what would be to your sides; turn around, and you see what might be sneaking up on you. An important part of the technology is some type of data glove that you use to propel yourself through the virtual world. Currently, the medical community is using the new technologies in four major ways: To see parts of the body more accurately, for study, to make better diagnosis of disease and to plan surgery in more detail; to obtain a more accurate picture of a procedure during surgery; to perform more types of surgery with the most noninvasive, accurate methods possible; and to model interactions among molecules at a molecular level.

  11. Astronaut Prepares for Mission With Virtual Reality Hardware

    Science.gov (United States)

    2001-01-01

    Astronaut John M. Grunsfeld, STS-109 payload commander, uses virtual reality hardware at Johnson Space Center to rehearse some of his duties prior to the STS-109 mission. The most familiar form of virtual reality technology is some form of headpiece, which fits over your eyes and displays a three dimensional computerized image of another place. Turn your head left and right, and you see what would be to your sides; turn around, and you see what might be sneaking up on you. An important part of the technology is some type of data glove that you use to propel yourself through the virtual world. This technology allows NASA astronauts to practice International Space Station work missions in advance. Currently, the medical community is using the new technologies in four major ways: To see parts of the body more accurately, for study, to make better diagnosis of disease and to plan surgery in more detail; to obtain a more accurate picture of a procedure during surgery; to perform more types of surgery with the most noninvasive, accurate methods possible; and to model interactions among molecules at a molecular level.

  12. Commodity hardware and open source solutions in FTU data management

    Energy Technology Data Exchange (ETDEWEB)

    Centioli, C. E-mail: centioli@frascati.enea.it; Bracco, G.; Eccher, S.; Iannone, F.; Maslennikov, A.; Panella, M.; Vitale, V

    2004-06-01

    Frascati Tokamak Upgrade (FTU) data management system underwent several developments in the last year, mainly due to the availability of huge amount of open source software and cheap commodity hardware. First of all, we replaced the old and expensive four SUN/SOLARIS servers running AFS (Andrew File System) fusione.it cell with three SuperServer Supermicro SC-742. Secondly Linux 2.4 OS has been installed on our new cell servers and OpenAFS 1.2.8 open source distributed file system has replaced the commercial IBM/Transarc AFS. A pioneering solution - SGI's XFS file system for Linux - has been adopted to format one terabyte of FTU storage system on which the AFS volumes are based. Benchmark tests have shown the good performances of XFS compared to the classical ext3 Linux file system. Third, the data access software has been ported to Linux, together with the interfaces to Matlab and IDL, as well as the locally developed data display utility, SHOX. Finally a new Object-Oriented Data Model (OODM) has been developed for FTU shots data to build and maintain a FTU data warehouse (DW). FTU OODM has been developed using ROOT, an object oriented data analysis framework well-known in high energy physics. Since large volumes of data are involved, a parallel data extraction process, developed in the ROOT framework, has been implemented taking advantage of the AFS distributed environment of FTU computing system.

  13. Facilitating preemptive hardware system design using partial reconfiguration techniques.

    Science.gov (United States)

    Dondo Gazzano, Julio; Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implement preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Moreover, an asynchronous event can demand immediate attention and thus force the launch of a reconfiguration process to implement a high-priority task. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed. If the event cannot be programmed in advance, as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the necessary tasks to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration.

  14. Hardware Implementation of Artificial Neural Network for Data Ciphering

    Directory of Open Access Journals (Sweden)

    Sahar L. Kadoory

    2016-10-01

    Full Text Available This paper introduces the design and realization of multiple block-ciphering techniques on an FPGA (Field Programmable Gate Array). Back-propagation neural networks have been built for substitution, permutation and XOR block ciphering using the Neural Network Toolbox in MATLAB. They are trained to encrypt the data after obtaining suitable weights, biases, activation functions and layout. Afterwards, they are described in VHDL and implemented on a Xilinx Spartan-3E FPGA using two approaches: serial and parallel versions. The simulation results were obtained with Xilinx ISE 9.2i software. The numerical precision is chosen carefully when implementing the neural network on the FPGA. Results obtained from the hardware designs show accurate numerical values for ciphering the data. As expected, the synthesis results indicate that the serial version requires fewer area resources than the parallel version, while the data throughput of the parallel version is between 1.13 and 1.5 times higher than that of the serial version. Also, a slight difference can be observed in the maximum frequency.
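
    Of the three ciphering blocks mentioned, the XOR block is the simplest to reproduce in software: a small back-propagation network can learn the XOR truth table that the block applies bitwise to data and key bits. The sketch below is a generic NumPy training loop, not the MATLAB Neural Network Toolbox design of the paper; network size, learning rate, epoch count and seed are illustrative, and a different seed or more epochs may be needed if training stalls.

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # data bit, key bit
        T = np.array([[0], [1], [1], [0]], dtype=float)               # XOR output

        W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
        W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(20000):                    # plain batch back-propagation, learning rate 1.0
            h = sigmoid(X @ W1 + b1)
            y = sigmoid(h @ W2 + b2)
            dy = (y - T) * y * (1 - y)
            dh = (dy @ W2.T) * h * (1 - h)
            W2 -= h.T @ dy; b2 -= dy.sum(0)
            W1 -= X.T @ dh; b1 -= dh.sum(0)

        print(np.round(y).ravel())                # expected: [0. 1. 1. 0.]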

  15. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 14 TeV and instantaneous luminosities which could exceed 10^34 cm^-2 s^-1. Triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (L1) is hardware based and the second (L2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for the trigger to maintain a high efficiency for events of interest while effectively suppressing the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the highest instantane...

  16. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 13 or 14 TeV and instantaneous luminosities which could exceed 10^34 cm^-2 s^-1. Triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (Level-1) is hardware based and the second (Level-2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for the trigger to maintain a high efficiency for events of interest while effectively suppressing the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the hig...

  17. FTK: A Hardware Track Finder for the ATLAS Trigger System

    CERN Document Server

    Tompkins, L; The ATLAS collaboration

    2013-01-01

    The LHC experiments are preparing for instantaneous luminosities above $1 \times 10^{34}$ cm$^{-2}$ s$^{-1}$ as early as 2015. In order to select the rare events of interest in such dense environments, detailed event information is necessary. In particular, the highly granular single-particle information of tracking detectors is crucial for the selection of isolated leptons, taus and b-jets in the face of large vertex multiplicities. We report on the development of the ATLAS FastTracker (FTK), a hardware-based track finder which will reconstruct all tracks with a momentum greater than 1 GeV/c up to luminosities of $3 \times 10^{34}$ cm$^{-2}$ s$^{-1}$ at an event input rate of 100 kHz and with a latency of a few hundred microseconds. The track information will be available to the Level 2 processors at the beginning of event processing. Significant progress towards a phased installation beginning in 2015 has been achieved. A pre-prototype of the pattern recognition board is taking data in the fall of 2012 and prototypes for all ...
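
    The figures quoted above (an event input rate of 100 kHz combined with a latency of a few hundred microseconds) imply that many events are processed concurrently; the short worked example below makes this explicit, taking 300 microseconds as an assumed representative latency.

        # Concurrency implied by a 100 kHz input rate and ~300 us latency
        # (300 us is an assumed representative value, not a quoted specification).
        input_rate_hz = 100e3                     # events per second entering FTK
        latency_s = 300e-6                        # assumed per-event latency
        inter_event_s = 1.0 / input_rate_hz       # 10 us between events
        events_in_flight = latency_s / inter_event_s
        print(f"new event every {inter_event_s * 1e6:.0f} us, "
              f"roughly {events_in_flight:.0f} events in flight")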

  18. Towards truly integrated hardware fusion with PET/CT

    International Nuclear Information System (INIS)

    Beyer, T.

    2005-01-01

    Combined PET/CT imaging is a non-invasive means of acquiring and reviewing both the anatomy and the molecular pathways of a patient during a quasi-simultaneous examination. Since the introduction of the prototype PET/CT in 1998 this imaging technology has evolved rapidly. State-of-the-art PET/CT tomographs combine the latest technology in spiral, multi-slice CT and in PET, using novel scintillator materials and image reconstruction techniques. Together with novel patient positioning systems, PET/CT tomographs allow complementary PET and CT data to be acquired in a single exam with the best intrinsic co-registration. In addition to the hardware integration, efforts have been made to integrate the acquisition and viewing software in PET/CT, thus making diagnostic review and reporting more efficient. Based on the first clinical experiences and the technical evolution of combined imaging technology, PET/CT has become a standard in diagnostic oncology. With high-performance imaging technology at hand today, standardized, high-quality PET/CT imaging protocols are needed to provide the best oncology patient care. These protocols require the joint efforts of a multi-disciplinary team of physicians, physicists and radiochemists. (orig.)

  19. A Fast hardware tracker for the ATLAS Trigger

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate from the nominal bunch crossing rate of 40 MHz to about 1 kHz for a design LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. To achieve high background rejection while maintaining good efficiency for interesting physics signals, sophisticated algorithms are needed which require extensive use of tracking information. The Fast TracKer (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform track finding at 100 kHz, based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays (FPGAs) form an important part of the system architecture, and the combinatorial problem of pattern recognition is solved by ~8000 standard-cell ASICs named Associative Memories. The availability of the tracking and subsequent vertex information within a short latency ensures robust selections and allows improved trigger performance for the most difficult sign...
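
    The combinatorial pattern recognition mentioned above is solved by Associative Memory chips that compare the detector hits of each event against a large bank of precomputed coarse track patterns in parallel. The sketch below illustrates only the matching logic in software, with a tiny hypothetical pattern bank; it says nothing about the ASIC implementation itself.

        # Illustrative pattern-bank matching; the real Associative Memories
        # perform this comparison massively in parallel in custom silicon.
        PATTERN_BANK = {                       # hypothetical coarse "roads":
            ("A1", "B2", "C3", "D1"),          # one superstrip id per layer
            ("A1", "B3", "C2", "D2"),
            ("A2", "B1", "C3", "D3"),
        }

        def find_roads(hits_per_layer, max_missing=1):
            # hits_per_layer: one set of fired superstrips per detector layer.
            # A pattern matches if at most max_missing layers lack the required hit.
            roads = []
            for pattern in PATTERN_BANK:
                missing = sum(ss not in layer_hits
                              for ss, layer_hits in zip(pattern, hits_per_layer))
                if missing <= max_missing:
                    roads.append(pattern)
            return roads

        event = [{"A1"}, {"B2", "B3"}, {"C3"}, {"D4"}]   # fired superstrips per layer
        print(find_roads(event))    # the first pattern matches with one missing layer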

  20. A Fast hardware Tracker for the ATLAS Trigger system

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate from the nominal bunch crossing rate of 40 MHz to about 1 kHz for a design LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. After a very successful data-taking run, the LHC is expected to resume operation in 2015 with much higher instantaneous luminosities, which will increase the load on the High Level Trigger system. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals, and this requires a more extensive use of tracking information. The Fast Tracker (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform full-scan track finding at the event rate of 100 kHz. FTK is a dedicated processor based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays form an important part of the system architecture, and the combinatorial problem of pattern r...