WorldWideScience

Sample records for high level architecture

  1. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described. Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr...

  2. EAP high-level product architecture

    DEFF Research Database (Denmark)

    Guðlaugsson, Tómas Vignir; Mortensen, Niels Henrik; Sarban, Rahimullah

    2013-01-01

    EAP technology has the potential to be used in a wide range of applications. This poses the challenge to EAP component manufacturers to develop components for a wide variety of products. Danfoss Polypower A/S is developing an EAP technology platform, which can form the basis for a variety of EAP technology products while keeping complexity under control. A high-level product architecture has been developed for the mechanical part of EAP transducers as the foundation for platform development. A generic description of an EAP transducer forms the core of the high-level product architecture... The function of the EAP transducers can be changed by basing them on a different combination of organ alternatives. A model providing an overview of the high-level product architecture has been developed to support daily development and cooperation across development teams. The platform approach...

  3. Service Oriented Architecture for High Level Applications

    International Nuclear Information System (INIS)

    Chu, P.

    2012-01-01

    Standalone high-level applications often suffer from poor performance and reliability due to lengthy initialization, heavy computation and rapid graphical updates. Service-oriented architecture (SOA) seeks to separate the initialization and computation from applications and to distribute such work to various service providers. Heavy computation such as beam tracking will be done periodically on a dedicated server, and data will be available to client applications at all times. Industrial-standard service architecture can help to improve the performance, reliability and maintainability of the service. Robustness will also be improved by reducing the complexity of individual client applications.
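
    To make the separation concrete: the pattern described here is that the service performs the heavy computation on its own schedule while clients only ever read a cached result. The following minimal Python sketch illustrates that pattern; the names (BeamTrackingService, latest) and the placeholder computation are invented for illustration, not the paper's API.

        import threading
        import time

        class BeamTrackingService:
            """Sketch: the heavy computation runs periodically inside the
            service; client calls return a cached result immediately."""

            def __init__(self, period_s=5.0):
                self._result = None
                self._lock = threading.Lock()
                self._period_s = period_s

            def _track_beam(self):
                # Placeholder for a lengthy beam-tracking computation.
                return sum(i * i for i in range(100_000))

            def start(self):
                def loop():
                    while True:
                        value = self._track_beam()
                        with self._lock:
                            self._result = value
                        time.sleep(self._period_s)
                threading.Thread(target=loop, daemon=True).start()

            def latest(self):
                # Client-facing call: no heavy work happens here.
                with self._lock:
                    return self._result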

  4. An emergency management demonstrator using the high level architecture

    International Nuclear Information System (INIS)

    Williams, R.J.

    1996-12-01

    This paper addresses the issues of simulation interoperability within the emergency management training context. A prototype implementation in Java of a subset of the High Level Architecture (HLA) is described. The use of Web Browsers to provide graphical user interfaces to HLA is also investigated. (au)
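
    The central HLA idea such a prototype exercises is federates exchanging attribute updates through a Runtime Infrastructure (RTI). The toy publish/subscribe hub below sketches only that pattern in Python; all names are invented and do not follow the IEEE 1516 service API or the paper's Java implementation.

        class ToyRTI:
            """Minimal stand-in for an HLA RTI: routes attribute updates
            to subscribed federates (illustrative only)."""

            def __init__(self):
                self._subscribers = {}   # attribute name -> callbacks

            def subscribe(self, attribute, callback):
                self._subscribers.setdefault(attribute, []).append(callback)

            def update_attribute(self, attribute, value):
                # A real RTI also handles time management and ownership.
                for cb in self._subscribers.get(attribute, ()):
                    cb(value)

        rti = ToyRTI()
        rti.subscribe("plume_radius_m", lambda v: print("GUI federate sees:", v))
        rti.update_attribute("plume_radius_m", 250.0)   # simulation federate publishes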

  5. The Software Architecture of the LHCb High Level Trigger

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but disk space limitations mean that only 3 kHz can be written to tape for offline processing. For this reason the LHCb data acquisition system -- the trigger -- plays a key role in selecting signal events and rejecting background. In contrast to previous experiments at hadron colliders, such as CDF or D0, the bulk of the LHCb trigger is implemented in software and deployed on a farm of 20k parallel processing nodes. This system, called the High Level Trigger (HLT), is responsible for reducing the rate from the maximum at which the detector can be read out, 1.1 MHz, to the 3 kHz which can be processed offline, and has 20 ms in which to process and accept/reject each event. In order to minimize systematic uncertainties, the HLT was designed from the outset to reuse the offline reconstruction and selection code, and is based around multiple independent and redunda...

  6. A High-Level Functional Architecture for GNSS-Based Road Charging Systems

    DEFF Research Database (Denmark)

    Zabic, Martina

    2011-01-01

    For GNSS-based road charging systems, it is important to highlight the overall system architecture, which is the framework that defines the basic functions and important concepts of the system. This paper presents a functional architecture for GNSS-based road charging systems based on the concepts of system engineering. First, a short introduction is provided, followed by a presentation of the system engineering methodology to illustrate how and why system architectures can be beneficial for GNSS-based road charging systems. Hereafter, a basic set of system functions is determined based on functional system requirements, which defines the necessary tasks that these systems must accomplish. Finally, this paper defines the system functionalities and provides a generic high-level functional architecture for GNSS-based road charging systems.

  7. Towards Implementation of a Generalized Architecture for High-Level Quantum Programming Language

    Science.gov (United States)

    Ameen, El-Mahdy M.; Ali, Hesham A.; Salem, Mofreh M.; Badawy, Mahmoud

    2017-08-01

    This paper investigates a novel architecture for the problem of quantum computer programming. A generalized architecture for a high-level quantum programming language is proposed, enabling the evolution from complicated quantum-based programming to high-level, quantum-independent programming. The proposed architecture receives high-level source code and automatically transforms it into the equivalent quantum representation. The architecture involves two layers, the programmer layer and the compilation layer, implemented in three main stages: pre-classification, classification, and post-classification. The basic building block of each stage is divided into subsequent phases, and each phase performs the required transformations from one representation to another. A verification process using a case study investigated the ability of the compiler to perform all transformation processes. Experimental results showed that the proposed compiler achieves a correlation coefficient of R ≈ 1 between outputs and targets. A clear improvement was also obtained in the time consumed by the optimization process compared to other techniques: in online optimization, the consumed time increases exponentially with the required accuracy, whereas in the proposed offline optimization it increases only gradually.

  8. FPGA-Based Channel Coding Architectures for 5G Wireless Using High-Level Synthesis

    Directory of Open Access Journals (Sweden)

    Swapnil Mhaske

    2017-01-01

    We propose strategies to achieve a high-throughput FPGA architecture for quasi-cyclic low-density parity-check codes based on circulant-1 identity matrix construction. By splitting the node processing operation in the min-sum approximation algorithm, we achieve pipelining in the layered decoding schedule without utilizing additional hardware resources. High-level synthesis compilation is used to design and develop the architecture on the FPGA hardware platform. To validate this architecture, an IEEE 802.11n compliant 608 Mb/s decoder is implemented on the Xilinx Kintex-7 FPGA using the LabVIEW FPGA Compiler in the LabVIEW Communication System Design Suite. Architecture scalability was leveraged to accomplish a 2.48 Gb/s decoder on a single Xilinx Kintex-7 FPGA. Further, we present rapidly prototyped experimentation of an IEEE 802.16 compliant hybrid automatic repeat request system based on the efficient decoder architecture developed. In spite of the mixed nature of data processing—digital signal processing and finite-state machines—LabVIEW FPGA Compiler significantly reduced time to explore the system parameter space and to optimize in terms of error performance and resource utilization. A 4x improvement in the system throughput, relative to a CPU-based implementation, was achieved to measure the error-rate performance of the system over large, realistic data sets using accelerated, in-hardware simulation.
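
    For readers unfamiliar with the min-sum approximation named above: each check-to-variable message takes the product of the signs and the minimum magnitude over the other incoming messages. The NumPy sketch below shows that update for a single check node; it illustrates the algorithm only, not the paper's pipelined FPGA design.

        import numpy as np

        def check_node_minsum(v2c):
            """Min-sum update for one check node.
            v2c: incoming variable-to-check LLRs (length >= 2).
            Returns one outgoing message per incoming edge."""
            signs = np.sign(v2c)
            signs[signs == 0] = 1.0
            mags = np.abs(v2c)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            total_sign = np.prod(signs)
            out = np.empty(len(v2c))
            for i in range(len(v2c)):
                m = min2 if i == order[0] else min1   # exclude own magnitude
                out[i] = total_sign * signs[i] * m    # product of the other signs
            return out

        print(check_node_minsum(np.array([0.8, -1.2, 0.3])))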

  9. DART: A Functional-Level Reconfigurable Architecture for High Energy Efficiency

    Directory of Open Access Journals (Sweden)

    David Raphaël

    2008-01-01

    Flexibility becomes a major concern for the development of multimedia and mobile communication systems, as well as classical high-performance and low-energy consumption constraints. The use of general-purpose processors solves flexibility problems but fails to cope with the increasing demand for energy efficiency. This paper presents the DART architecture based on the functional-level reconfiguration paradigm which allows a significant improvement in energy efficiency. DART is built around a hierarchical interconnection network allowing high flexibility while keeping the power overhead low. To enable specific optimizations, DART supports two modes of reconfiguration. The compilation framework is built using compilation and high-level synthesis techniques. A 3G mobile communication application has been implemented as a proof of concept. The energy distribution within the architecture and the physical implementation are also discussed. Finally, the VLSI design of a 0.13 μm CMOS SoC implementing a specialized DART cluster is presented.

  10. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    Science.gov (United States)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  11. DART: A Functional-Level Reconfigurable Architecture for High Energy Efficiency

    Directory of Open Access Journals (Sweden)

    Sébastien Pillement

    2007-12-01

    Flexibility becomes a major concern for the development of multimedia and mobile communication systems, as well as classical high-performance and low-energy consumption constraints. The use of general-purpose processors solves flexibility problems but fails to cope with the increasing demand for energy efficiency. This paper presents the DART architecture based on the functional-level reconfiguration paradigm which allows a significant improvement in energy efficiency. DART is built around a hierarchical interconnection network allowing high flexibility while keeping the power overhead low. To enable specific optimizations, DART supports two modes of reconfiguration. The compilation framework is built using compilation and high-level synthesis techniques. A 3G mobile communication application has been implemented as a proof of concept. The energy distribution within the architecture and the physical implementation are also discussed. Finally, the VLSI design of a 0.13 μm CMOS SoC implementing a specialized DART cluster is presented.

  12. High-Level Design Space and Flexibility Exploration for Adaptive, Energy-Efficient WCDMA Channel Estimation Architectures

    Directory of Open Access Journals (Sweden)

    Zoltán Endre Rákossy

    2012-01-01

    Due to the fast changing wireless communication standards coupled with strict performance constraints, the demand for flexible yet high-performance architectures is increasing. To tackle the flexibility requirement, software-defined radio (SDR) is emerging as an obvious solution, where the underlying hardware implementation is tuned via software layers to the varied standards depending on power-performance and quality requirements leading to adaptable, cognitive radio. In this paper, we conduct a case study for representatives of two complexity classes of WCDMA channel estimation algorithms and explore the effect of flexibility on energy efficiency using different implementation options. Furthermore, we propose new design guidelines for both highly specialized architectures and highly flexible architectures using high-level synthesis, to enable the required performance and flexibility to support multiple applications. Our experiments with various design points show that the resulting architectures meet the performance constraints of WCDMA and a wide range of options are offered for tuning such architectures depending on power/performance/area constraints of SDR.

  13. High-Level Heteroatom Doped Two-Dimensional Carbon Architectures for Highly Efficient Lithium-Ion Storage

    Directory of Open Access Journals (Sweden)

    Zhijie Wang

    2018-04-01

    In this work, high-level heteroatom doped two-dimensional hierarchical carbon architectures (H-2D-HCA) are developed for highly efficient Li-ion storage applications. The achieved H-2D-HCA possesses a hierarchical 2D morphology consisting of tiny carbon nanosheets vertically grown on carbon nanoplates and containing a hierarchical porosity with multiscale pore size. More importantly, the H-2D-HCA shows abundant heteroatom functionality, with sulfur (S) doping of 0.9% and nitrogen (N) doping of as high as 15.5%, in which the electrochemically active N accounts for 84% of total N heteroatoms. In addition, the H-2D-HCA also has an expanded interlayer distance of 0.368 nm. When used as a lithium-ion battery anode, it shows excellent Li-ion storage performance. Even at a high current density of 5 A g−1, it still delivers a high discharge capacity of 329 mA h g−1 after 1,000 cycles. First-principles calculations verify that such unique microstructure characteristics and the high-level heteroatom doping nature can enhance the Li adsorption stability, electronic conductivity and Li diffusion mobility of carbon nanomaterials. Therefore, the H-2D-HCA could be a promising candidate for next-generation LIB anodes.

  14. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    Science.gov (United States)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective to study the evolution of statistical representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.
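
    As a single-interface illustration of the level-set method underlying the model (not the authors' parallel ccNUMA implementation), one explicit step of curvature-driven interface motion on a 2-D grid can be written as below; unit grid spacing and a small time step are assumed.

        import numpy as np

        def curvature_flow_step(phi, dt=0.1, eps=1e-12):
            """One explicit Euler step of phi_t = kappa * |grad phi|."""
            gy, gx = np.gradient(phi)                 # d/dy (rows), d/dx (cols)
            norm = np.sqrt(gx**2 + gy**2) + eps
            nx, ny = gx / norm, gy / norm             # unit normal field
            kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)  # div(n)
            return phi + dt * kappa * norm

        # A circular grain boundary (the zero level set) shrinks under curvature.
        y, x = np.mgrid[-32:32, -32:32]
        phi = np.sqrt(x**2 + y**2) - 20.0             # signed distance to a circle
        for _ in range(50):
            phi = curvature_flow_step(phi)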

  15. Innovation in high-level capture and diffusion of tacit architectural knowledge

    OpenAIRE

    Burry, Mark

    2017-01-01

    This paper focusses on an ‘Embedded Doctoral Design Program’ (EDDP), comprising a cohort of design PhD candidates who are ‘embedded’ outside the university for a substantial part of their candidature. Specifically, the paper details the framework and outcome for Australian doctoral candidates in architecture placed in contexts outside their experience and immediate expertise, and outside the traditional academic research setting. These contexts can be drawn potentially from professional pract...

  16. High-level specification of a proposed information architecture for support of a bioterrorism early-warning system.

    Science.gov (United States)

    Berkowitz, Murray R

    2013-01-01

    Current information systems for use in detecting bioterrorist attacks lack a consistent, overarching information architecture. An overview of the use of biological agents as weapons during a bioterrorist attack is presented. Proposed are the design, development, and implementation of a medical informatics system to mine pertinent databases, retrieve relevant data, invoke appropriate biostatistical and epidemiological software packages, and automatically analyze these data. The top-level information architecture is presented. Systems requirements and functional specifications for this level are presented. Finally, future studies are identified.

  17. An architectural approach to level design

    CERN Document Server

    Totten, Christopher W

    2014-01-01

    Explore Level Design through the Lens of Architectural and Spatial Experience Theory. Written by a game developer and professor trained in architecture, An Architectural Approach to Level Design is one of the first books to integrate architectural and spatial design theory with the field of level design. It explores the principles of level design through the context and history of architecture, providing information useful to both academics and game development professionals. Understand Spatial Design Principles for Game Levels in 2D, 3D, and Multiplayer Applications. The book presents architectura...

  18. Using High-Level RTOS Models for HW/SW Embedded Architecture Exploration: Case Study on Mobile Robotic Vision

    Directory of Open Access Journals (Sweden)

    Verdier François

    2008-01-01

    We are interested in the design of a system-on-chip implementing the vision system of a mobile robot. Following a biologically inspired approach, this vision architecture belongs to a larger sensorimotor loop. This regulation loop both creates and exploits dynamics properties to achieve a wide variety of target tracking and navigation objectives. Such a system is representative of numerous flexible and dynamic applications which are more and more encountered in embedded systems. In order to deal with all of the dynamic aspects of these applications, it appears necessary to embed a dedicated real-time operating system on the chip. The presence of this on-chip custom executive layer constitutes a major scientific obstacle in the traditional hardware and software design flows. Classical exploration and simulation tools are particularly inappropriate in this case. We detail in this paper the specific mechanisms necessary to build a high-level model of an embedded custom operating system able to manage such a real-time but flexible application. We also describe our executable RTOS model written in SystemC allowing an early simulation of our application on top of its specific scheduling layer. Based on this model, a methodology is discussed and results are given on the exploration and validation of a distributed platform adapted to this vision system.
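
    To give a flavor of what a high-level executive-layer model abstracts away, the toy scheduler below dispatches ready tasks in priority order. It is a Python sketch with invented names; the paper's model is an executable RTOS model written in SystemC.

        import heapq

        class TinyRTOSModel:
            """Toy priority scheduler (lower number = more urgent)."""

            def __init__(self):
                self._ready = []
                self._seq = 0                  # tie-breaker for equal priorities

            def spawn(self, priority, task):
                heapq.heappush(self._ready, (priority, self._seq, task))
                self._seq += 1

            def run(self):
                while self._ready:
                    _, _, task = heapq.heappop(self._ready)
                    task()

        rtos = TinyRTOSModel()
        rtos.spawn(1, lambda: print("target-tracking task"))
        rtos.spawn(0, lambda: print("sensor task runs first"))
        rtos.run()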

  19. Architectural-level power estimation and experimentation

    Science.gov (United States)

    Ye, Wu

    With the emergence of a plethora of embedded and portable applications and ever increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These trends demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath for both 0.35 μm and 0.8 μm technology and can execute the integer subset of the instruction set of SimpleScalar. SimplePower measures the energy consumed in the datapath, memory and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated with HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce the datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce the datapath energy consumption by 18--36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce the on-chip instruction cache data bus switching activity. Its impact was evaluated by SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55--16.90%. A quantitative evaluation was conducted for the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption. The experimental results provide a valuable insight for...
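
    The transition-sensitive premise is that bus energy scales with the number of bit flips between consecutive cycles, which is also why register relabeling reduces instruction-bus energy. A minimal sketch follows; the per-flip energy figure is an invented placeholder, not a SimplePower parameter.

        def switching_energy_pj(bus_words, energy_per_flip_pj=0.5):
            """Energy estimate proportional to bit flips between cycles."""
            flips = sum(bin(prev ^ cur).count("1")
                        for prev, cur in zip(bus_words, bus_words[1:]))
            return flips * energy_per_flip_pj

        # Same values, different orderings, different energy:
        print(switching_energy_pj([0x00, 0xFF, 0x00, 0xFF]))   # many flips
        print(switching_energy_pj([0x00, 0x00, 0xFF, 0xFF]))   # fewer flips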

  20. Optical transmission of low-level signals with high dynamic range using the optically-coupled current-mirror architecture

    Energy Technology Data Exchange (ETDEWEB)

    Camin, Daniel V. [Dipartimento di Fisica dell' Universita degli Studi di Milano and INFN, Milan (Italy)]. E-mail: Daniel.Victor.Camin@mi.infn.it; Grassi, Valerio [Dipartimento di Fisica dell' Universita degli Studi di Milano and INFN, Milan (Italy); De Donato, Cinzia [Dipartimento di Fisica dell' Universita degli Studi di Milano and INFN, Milan (Italy)

    2007-03-01

    In this paper we illustrate the application of a novel circuit architecture, the Optically-Coupled Current-Mirror (OCCM), conceived for the linear transmission of analogue signals via fibre optics. We installed 880 OCCMs in the PMTs of the first two telescopes of the cosmic-ray experiment Pierre Auger. The Pierre Auger Observatory (PAO) has been designed to increase the statistics of cosmic-rays with energies above 10¹⁸ eV. Two different techniques have been adopted: the Surface Detector (SD) modules that comprise 1600 tanks spaced 1.5 km from each other within an area of 3000 km². On the other side there are four buildings, the Optical Stations (OS), in which six telescopes are installed in each one of the four OS, at the periphery of the site, looking inwards. The telescopes are sensitive to the UV light created at the moment a high-energy shower develops in the atmosphere and is within the field-of-view (FOV) of the telescopes. The PAO is located in the Northern Patagonia, not far from the Cordillera de Los Andes, in Argentina. Both detector types, FD telescopes and SD modules, are sensitive to the UV light resulting from the interaction of high-energy particles and the nitrogen molecules in the atmosphere. The UV-sensitive telescopes operate only at night when the sky is completely dark. Otherwise, the light collected by the telescopes may give origin to severe damage in particular if those telescopes point at twilight or to artificial light sources. The duty cycle of the telescope's operation is therefore limited to about 10% or slightly more than that, if data are taken also when there is a partial presence of the Moon. The SD modules establish, independently of the telescopes, the geometry of the event. At the same time a shower reconstruction is performed using the telescope's data, independently of the SD modules. Use of both sets of data, taken by the FD telescopes and by the SD modules, allows the hybrid reconstruction that significantly...

  1. Optical transmission of low-level signals with high dynamic range using the optically-coupled current-mirror architecture

    International Nuclear Information System (INIS)

    Camin, Daniel V.; Grassi, Valerio; De Donato, Cinzia

    2007-01-01

    In this paper we illustrate the application of a novel circuit architecture, the Optically-Coupled Current-Mirror (OCCM), conceived for the linear transmission of analogue signals via fibre optics. We installed 880 OCCMs in the PMTs of the first two telescopes of the cosmic-ray experiment Pierre Auger. The Pierre Auger Observatory (PAO) has been designed to increase the statistics of cosmic-rays with energies above 10¹⁸ eV. Two different techniques have been adopted: the Surface Detector (SD) modules that comprise 1600 tanks spaced 1.5 km from each other within an area of 3000 km². On the other side there are four buildings, the Optical Stations (OS), in which six telescopes are installed in each one of the four OS, at the periphery of the site, looking inwards. The telescopes are sensitive to the UV light created at the moment a high-energy shower develops in the atmosphere and is within the field-of-view (FOV) of the telescopes. The PAO is located in the Northern Patagonia, not far from the Cordillera de Los Andes, in Argentina. Both detector types, FD telescopes and SD modules, are sensitive to the UV light resulting from the interaction of high-energy particles and the nitrogen molecules in the atmosphere. The UV-sensitive telescopes operate only at night when the sky is completely dark. Otherwise, the light collected by the telescopes may give origin to severe damage in particular if those telescopes point at twilight or to artificial light sources. The duty cycle of the telescope's operation is therefore limited to about 10% or slightly more than that, if data are taken also when there is a partial presence of the Moon. The SD modules establish, independently of the telescopes, the geometry of the event. At the same time a shower reconstruction is performed using the telescope's data, independently of the SD modules. Use of both sets of data, taken by the FD telescopes and by the SD modules, allows the hybrid reconstruction that significantly improves the data...

  2. Immobilized High-Level Waste (HLW) Interim Storage Alternative Generation and analysis and Decision Report - second Generation Implementing Architecture

    International Nuclear Information System (INIS)

    CALMUS, R.B.

    2000-01-01

    Two alternative approaches were previously identified to provide second-generation interim storage of Immobilized High-Level Waste (IHLW). One approach was retrofit modification of the Fuel and Materials Examination Facility (FMEF) to accommodate IHLW. The results of the evaluation of the FMEF as the second-generation IHLW interim storage facility and subsequent decision process are provided in this document

  3. Proceedings of the International Workshop on High-Level Language Computer Architecture, May 26-28, 1980, Fort Lauderdale, Florida

    Science.gov (United States)

    1980-06-01


  4. The ATLAS Level-1 Calorimeter Trigger Architecture

    CERN Document Server

    Garvey, J; Mahout, G; Moye, T H; Staley, R J; Watkins, P M; Watson, A T; Achenbach, R; Hanke, P; Kluge, E E; Meier, K; Meshkov, P; Nix, O; Penno, K; Schmitt, K; Ay, Cc; Bauss, B; Dahlhoff, A; Jakobs, K; Mahboubi, K; Schäfer, U; Trefzger, T M; Eisenhandler, E F; Landon, M; Moyse, E; Thomas, J; Apostoglou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J; Gee, C N P; Gillman, A R; Perera, V J O; Qian, W; Bohm, C; Hellman, S; Hidvégi, A; Silverstein, S; RT 2003 13th IEEE-NPSS Real Time Conference

    2004-01-01

    The architecture of the ATLAS Level-1 Calorimeter Trigger system (L1Calo) is presented. Common approaches have been adopted for data distribution, result merging, readout, and slow control across the three different subsystems. A significant amount of common hardware is utilized, yielding substantial savings in cost, spares, and development effort. A custom, high-density backplane has been developed with data paths suitable for both the em/tau cluster processor (CP) and jet/energy-summation processor (JEP) subsystems. Common modules also provide interfaces to VME, CANbus and the LHC Timing, Trigger and Control system (TTC). A common data merger module (CMM) uses FPGAs with multiple configurations for summing electron/photon and tau/hadron cluster multiplicities, jet multiplicities, or total and missing transverse energy. The CMM performs both crate- and system-level merging. A common, FPGA-based readout driver (ROD) is used by all of the subsystems to send input, intermediate and output data to the data acquis...

  5. Architecture Level Safety Analyses for Safety-Critical Systems

    Directory of Open Access Journals (Sweden)

    K. S. Kushal

    2017-01-01

    The dependency of complex embedded Safety-Critical Systems across the avionics and aerospace domains on their underlying software and hardware components has gradually increased over time. Such application-domain systems are developed on a complex integrated architecture, which is modular in nature. Engineering practices aligned with system safety standards are necessary to manage failure, fault, and unsafe operational conditions. System safety analyses involve the analysis of the complex software architecture of the system, a major aspect that can lead to fatal consequences in the behaviour of Safety-Critical Systems, and provide high reliability and dependability factors during their development. In this paper, we propose an architecture fault modeling and safety analysis approach that aids in identifying and eliminating design flaws. The formal foundations of the SAE Architecture Analysis & Design Language (AADL) augmented with the Error Model Annex (EMV) are discussed. The fault propagation, failure behaviour, and the composite behaviour of the design flaws/failures are considered for architecture safety analysis. The proposed approach is validated by implementing the Speed Control Unit of a Power-Boat Autopilot (PBA) system. The Error Model Annex (EMV) is guided by the pattern of consideration and inclusion of probable failure scenarios and propagation of fault conditions in the Speed Control Unit of the Power-Boat Autopilot (PBA). This helps in validating the system architecture through detection of error events in the model and their impact in the operational environment. It also provides an insight into the certification impact that these exceptional conditions pose at various criticality levels and design assurance levels, and its implications for verifying and validating the designs.
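
    One core ingredient of such architecture-level safety analysis is tracing how an error event propagates along component connections. The sketch below is a generic breadth-first propagation over a component graph, not the AADL/EMV semantics; the component names are invented.

        from collections import deque

        def propagate_errors(initial_faults, links):
            """Return every component reachable from the initial fault set."""
            reached = set(initial_faults)
            frontier = deque(initial_faults)
            while frontier:
                src = frontier.popleft()
                for dst in links.get(src, ()):
                    if dst not in reached:
                        reached.add(dst)
                        frontier.append(dst)
            return reached

        links = {"speed_sensor": ["control_unit"], "control_unit": ["throttle"]}
        print(propagate_errors({"speed_sensor"}, links))
        # -> {'speed_sensor', 'control_unit', 'throttle'}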

  6. Midcentury Modern High Schools: Rebooting the Architecture

    Science.gov (United States)

    Havens, Kevin

    2010-01-01

    A high school is more than a building; it's a repository of memories for many community members. High schools built at the turn of the century are not only cultural and civic landmarks, they are also often architectural treasures. When these facilities become outdated, a renovation that preserves the building's aesthetics and character is usually…

  7. A high performance architecture for accelerator controls

    International Nuclear Information System (INIS)

    Allen, M.; Hunt, S.M; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-01-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of < 100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost

  8. A high performance architecture for accelerator controls

    International Nuclear Information System (INIS)

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-03-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of <100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost. 1 fig

  9. Architecture Of High Speed Image Processing System

    Science.gov (United States)

    Konishi, Toshio; Hayashi, Hiroshi; Ohki, Tohru

    1988-01-01

    One of the architectures for a high-speed image processing system corresponding to a new algorithm for shape understanding is proposed, and a hardware system based on the architecture was developed. The main considerations of the architecture are that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, it was possible to perform each processing step at a speed of 80 nanoseconds per pixel (at that rate, a 512 × 512 image takes roughly 21 ms, or about 48 frames per second).

  10. Genetic architecture of circulating lipid levels

    DEFF Research Database (Denmark)

    Demirkan, Ayşe; Amin, Najaf; Isaacs, Aaron

    2011-01-01

    Serum concentrations of low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), triglycerides (TGs) and total cholesterol (TC) are important heritable risk factors for cardiovascular disease. Although genome-wide association studies (GWASs) of circulating lipid... the ENGAGE Consortium GWAS on serum lipids, were applied to predict lipid levels in an independent population-based study, the Rotterdam Study-II (RS-II). We additionally tested for evidence of a shared genetic basis for different lipid phenotypes. Finally, the polygenic score approach was used to identify an alternative genome-wide significance threshold before pathway analysis, and those results were compared with those based on the classical genome-wide significance threshold. Our study provides evidence suggesting that many loci influencing circulating lipid levels remain undiscovered. Cross-prediction models...
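
    At its core, the polygenic score used in such studies is a weighted sum of risk-allele counts, with weights taken from a discovery GWAS. A minimal NumPy sketch with invented numbers:

        import numpy as np

        def polygenic_score(genotypes, effect_sizes):
            """genotypes: (individuals x SNPs) allele counts in {0, 1, 2};
            effect_sizes: per-SNP weights from a discovery GWAS."""
            return genotypes @ effect_sizes

        genotypes = np.array([[0, 1, 2],
                              [2, 0, 1]], dtype=float)
        beta = np.array([0.12, -0.05, 0.30])      # invented effect sizes
        print(polygenic_score(genotypes, beta))   # one score per individual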

  11. High Data Rate Architecture (HiDRA)

    Science.gov (United States)

    Hylton, Alan; Raible, Daniel

    2016-01-01

    One of the greatest challenges in developing new space technology is in navigating the transition from ground-based laboratory demonstration at Technology Readiness Level 6 (TRL-6) to conducting a prototype demonstration in space (TRL-7). This challenge is compounded by the relatively low availability of new spacecraft missions when compared with aeronautical craft to bridge this gap, leading to the general adoption of a low-risk stance by mission management to accept new, unproven technologies into the system. Also in consideration of risk, the limited selection and availability of proven space-grade components imparts a severe limitation on achieving high-performance systems by current terrestrial technology standards. Finally, from a space communications point of view, the long duration characteristic of most missions imparts a major constraint on the entire space and ground network architecture, since any new technologies introduced into the system would have to be compliant with the duration of the currently deployed operational technologies, and in some cases may be limited by surrounding legacy capabilities. Beyond ensuring that the new technology is verified to function correctly and validated to meet the needs of the end users, the formidable challenge then grows to additionally include: carefully timing the maturity path of the new technology to coincide with a feasible and accepting future mission so it flies before its relevancy has passed, utilizing a limited catalog of available components to their maximum potential to create meaningful and unprecedented new capabilities, and designing and ensuring interoperability with aging space and ground infrastructures while simultaneously providing a growth path to the future. The International Space Station (ISS) is approaching 20 years of age. To keep the ISS relevant, technology upgrades are continuously taking place. Regarding communications, the state-of-the-art communication system upgrades underway include...

  12. High Quality Virtual Reality for Architectural Exhibitions

    DEFF Research Database (Denmark)

    Kreutzberg, Anette

    2016-01-01

    This paper will summarise the findings from creating and implementing a visually high-quality Virtual Reality (VR) experiment as part of an international architecture exhibition. The aim was to represent the architectural spatial qualities as well as the atmosphere created from combining natural and artificial lighting in a prominent, not yet built project. The outcome is twofold: findings concerning the integration of VR in an exhibition space and findings concerning the experience of the virtual space itself. In the exhibition, an important aspect was the unmanned exhibition space, requiring the VR experience to be self-explanatory. Observations of different visitor reactions to the unmanned VR experience compared with visitor reactions at guided tours with personal instructions are evaluated. Data on perception of realism, spatial quality and light in the VR model were collected with qualitative...

  13. Component-Level Electronic-Assembly Repair (CLEAR) System Architecture

    Science.gov (United States)

    Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.; Vrnak, Daniel R.

    2011-01-01

    This document captures the system architecture for a Component-Level Electronic-Assembly Repair (CLEAR) capability needed for electronics maintenance and repair of the Constellation Program (CxP). CLEAR is intended to improve flight system supportability and reduce the mass of spares required to maintain the electronics of human-rated spacecraft on long-duration missions. By necessity, it allows the crew to make repairs that would otherwise be performed by Earth-based repair depots. Because of the practical knowledge and skill limitations of small spaceflight crews, they must be augmented by Earth-based support crews and automated repair equipment. This system architecture covers the complete system from ground user to flight hardware and flight crew, and defines an Earth segment and a Space segment. The Earth segment involves database management, operational planning, and remote equipment programming and validation processes. The Space segment involves the automated diagnostic, test and repair equipment required for a complete repair process. This document defines three major subsystems: tele-operations that link the flight hardware to ground support, highly reconfigurable diagnostics and test instruments, and a CLEAR Repair Apparatus that automates the physical repair process.

  14. Disruptive Logic Architectures and Technologies From Device to System Level

    CERN Document Server

    Gaillardon, Pierre-Emmanuel; Clermidy, Fabien

    2012-01-01

    This book discusses the opportunities offered by disruptive technologies to overcome the economical and physical limits currently faced by the electronics industry. It provides a new methodology for the fast evaluation of an emerging technology from an architectural perspective and discusses the implications from simple circuits to complex architectures. Several technologies are discussed, ranging from 3-D integration of devices (Phase Change Memories, Monolithic 3-D, Vertical NanoWires-based transistors) to dense 2-D arrangements (Double-Gate Carbon Nanotubes, Sublithographic Nanowires, Lithographic Crossbar arrangements). Novel architectural organizations, as well as the associated tools, are presented in order to explore this freshly opened design space. Describes a novel architectural organization for future reconfigurable systems; Includes a complete benchmarking toolflow for emerging technologies; Generalizes the description of reconfigurable circuits in terms of hierarchical levels; Assesses disruptive...

  15. High dynamic range imaging sensors and architectures

    CERN Document Server

    Darmont, Arnaud

    2013-01-01

    Illumination is a crucial element in many applications, matching the luminance of the scene with the operational range of a camera. When luminance cannot be adequately controlled, a high dynamic range (HDR) imaging system may be necessary. These systems are being increasingly used in automotive on-board systems, road traffic monitoring, and other industrial, security, and military applications. This book provides readers with an intermediate discussion of HDR image sensors and techniques for industrial and non-industrial applications. It describes various sensor and pixel architectures capable...

  16. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  17. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  18. From Smart-Eco Building to High-Performance Architecture: Optimization of Energy Consumption in Architecture of Developing Countries

    Science.gov (United States)

    Mahdavinejad, M.; Bitaab, N.

    2017-08-01

    The search for high-performance architecture and dreams of future architecture have resulted in attempts to meet energy-efficient architecture and planning in different aspects. Recent trends meant to shape the future legacy of architecture are based on the idea of innovative technologies for resource-efficient buildings, performative design, bio-inspired technologies, etc., while there are meaningful differences between the architecture of developed and developing countries. The significance of the issue can be understood when emerging cities are found interested in Dubaization and other related booming development doctrines. This paper analyzes the level of developing countries' success in achieving smart-eco buildings' goals and objectives. Emerging cities of West Asia are selected as case studies. The results show that the concepts of high-performance architecture and smart-eco buildings differ in developing countries in comparison with developed countries. The paper identifies five essential issues for improving the future architecture of developing countries: 1- Integrated Strategies for Energy Efficiency, 2- Contextual Solutions, 3- Embedded and Initial Energy Assessment, 4- Staff and Occupancy Wellbeing, 5- Life-Cycle Monitoring.

  19. Electrical system architecture having high voltage bus

    Science.gov (United States)

    Hoff, Brian Douglas [East Peoria, IL; Akasam, Sivaprasad [Peoria, IL

    2011-03-22

    An electrical system architecture is disclosed. The architecture has a power source configured to generate a first power, and a first bus configured to receive the first power from the power source. The architecture also has a converter configured to receive the first power from the first bus and convert the first power to a second power, wherein a voltage of the second power is greater than a voltage of the first power, and a second bus configured to receive the second power from the converter. The architecture further has a power storage device configured to receive the second power from the second bus and deliver the second power to the second bus, a propulsion motor configured to receive the second power from the second bus, and an accessory motor configured to receive the second power from the second bus.

  20. Human Value And Soft Skill In Diploma Level Architectural Education

    Directory of Open Access Journals (Sweden)

    Dr. Sarita Dash

    2017-09-01

    In today's economic scenario, rising incomes and expectations in the wake of rapid urbanization have created a crying need for the creation of value concepts in an appropriate climate, one that will encourage the emergence of good human beings, a band of worthy as well as socially responsible professionals, and will eventually lead to the creation of a good society. This paper has therefore been designed to look at the present status of architectural education at the diploma level in a dynamic society. To meet the demands of the changing needs of a changing society, future architectural education should address some pertinent issues regarding soft skills, which are discussed in this paper. An attempt has been made to explain that the innovations and practices in architectural education will impose new demands on the teachers, who are mainly responsible for strengthening the foundation at the root level to cultivate human values as part of their teaching. The paper also discusses the outcome of evaluation that necessitates change in education to express its qualitative significance for human consciousness.

  1. High-Efficient Parallel CAVLC Encoders on Heterogeneous Multicore Architectures

    Directory of Open Access Journals (Sweden)

    H. Y. Su

    2012-04-01

    This article presents two highly efficient parallel realizations of context-based adaptive variable-length coding (CAVLC) on heterogeneous multicore processors. By optimizing the architecture of the CAVLC encoder, three kinds of dependences are eliminated or weakened: the context-based data dependence, the memory-access dependence and the control dependence. The CAVLC pipeline is divided into three stages: two scans, coding, and lag packing, and is implemented on two typical heterogeneous multicore architectures. One is a block-based SIMD parallel CAVLC encoder on the multicore stream processor STORM. The other is a component-oriented SIMT parallel encoder on the massively parallel GPU architecture. Both exploit rich data-level parallelism. Experimental results show that, compared with the CPU version, a speedup of more than 70 times can be obtained on STORM and over 50 times on the GPU. The STORM implementation achieves real-time processing for 1080p @ 30 fps, and the GPU-based version satisfies the requirements for 720p real-time encoding. The throughput of the presented CAVLC encoders is more than 10 times that of published software encoders on DSP and multicore platforms.
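
    The three-stage split described above can be pictured with a toy pipeline: a scanning pass first collects the statistics that would otherwise create a context dependence during coding, and packing then assembles the codewords. The Python sketch below is purely illustrative and stubs out the actual CAVLC entropy coding.

        def two_scan(macroblocks):
            """Stage 1: pre-scan coefficients (toy statistic: nonzero count)."""
            for mb in macroblocks:
                yield mb, sum(1 for c in mb if c != 0)

        def code(stream):
            """Stage 2: entropy coding, stubbed as a readable string."""
            for mb, nnz in stream:
                yield f"{nnz}:" + ",".join(str(c) for c in mb)

        def lag_pack(codewords):
            """Stage 3: lag packing concatenates codewords into one stream."""
            return "|".join(codewords)

        print(lag_pack(code(two_scan([[0, 3, 0, -1], [5, 0, 0, 0]]))))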

  2. High Efficiency EBCOT with Parallel Coding Architecture for JPEG2000

    Directory of Open Access Journals (Sweden)

    Chiang Jen-Shiun

    2006-01-01

    This work presents a parallel context-modeling coding architecture and a matching arithmetic coder (MQ-coder) for the embedded block coding (EBCOT) unit of the JPEG2000 encoder. Tier-1 of the EBCOT consumes most of the computation time in a JPEG2000 encoding system. The proposed parallel architecture can increase the throughput rate of the context modeling. To match the high throughput rate of the parallel context-modeling architecture, an efficient pipelined architecture for context-based adaptive arithmetic encoder is proposed. This encoder of JPEG2000 can work at 180 MHz to encode one symbol each cycle. Compared with the previous context-modeling architectures, our parallel architectures can improve the throughput rate up to 25%.

  3. High Level Architecture Runtime Infrastructure Test Report

    National Research Council Canada - National Science Library

    Wright, Darrell

    1998-01-01

    Joint Advanced Distributed Simulation Joint Test and Evaluation is an Office of the Secretary of Defense-sponsored joint test force chartered to determine the utility of advanced distributed simulation (ADS)...

  4. High-voltage, high-power architecture considerations

    International Nuclear Information System (INIS)

    Moser, R.L.

    1985-01-01

    Three basic EPS architectures (direct energy transfer, peak-power tracking, and a potential EPS architecture for a nuclear reactor) are described and compared. Considerations for the power source and energy storage are discussed. Factors to be considered in selecting the operating voltage are pointed out. Other EPS architecture considerations are autonomy, solar array degrees of freedom, and EPS modularity. It was concluded that the selection of the power source and energy storage has major impacts on the spacecraft architecture and mass.

  5. High-level verification

    CERN Document Server

    Lerner, Sorin; Kundu, Sudipta

    2011-01-01

    Given the growing size and heterogeneity of Systems on Chip (SOC), the design process from initial specification to chip fabrication has become increasingly complex. This growing complexity provides incentive for designers to use high-level languages such as C, SystemC, and SystemVerilog for system-level design. While a major goal of these high-level languages is to enable verification at a higher level of abstraction, allowing early exploration of system-level designs, the focus so far for validation purposes has been on traditional testing techniques such as random testing and scenario-based

  6. Instrumentation Standard Architectures for Future High Availability Control Systems

    International Nuclear Information System (INIS)

    Larsen, R.S.

    2005-01-01

    Architectures for next-generation modular instrumentation standards should aim to meet a requirement of High Availability, or robustness against system failure. This is particularly important for experiments both large and small mounted on production accelerators and light sources. New standards should be based on architectures that (1) are modular in both hardware and software for ease in repair and upgrade; (2) include inherent redundancy at internal module, module assembly and system levels; (3) include modern high speed serial inter-module communications with robust noise-immune protocols; and (4) include highly intelligent diagnostics and board-management subsystems that can predict impending failure and invoke evasive strategies. The simple design principles lead to fail-soft systems that can be applied to any type of electronics system, from modular instruments to large power supplies to pulsed power modulators to entire accelerator systems. The existing standards in use are briefly reviewed and compared against a new commercial standard which suggests a powerful model for future laboratory standard developments. The past successes of undertaking such projects through inter-laboratory engineering-physics collaborations will be briefly summarized

  7. All passive architecture for high efficiency cascaded Raman conversion

    Science.gov (United States)

    Balaswamy, V.; Arun, S.; Chayran, G.; Supradeepa, V. R.

    2018-02-01

    Cascaded Raman fiber lasers have offered a convenient method to obtain scalable, high-power sources at various wavelength regions inaccessible with rare-earth doped fiber lasers. A limitation previously was the reduced efficiency of these lasers. Recently, new architectures have been proposed to enhance efficiency, but this came at the cost of enhanced complexity, requiring an additional low-power, cascaded Raman laser. In this work, we overcome this with a new, all-passive architecture for high-efficiency cascaded Raman conversion. We demonstrate our architecture with a fifth-order cascaded Raman converter from 1117nm to 1480nm with output power of ~64W and efficiency of 60%.

  8. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    The Advanced Encryption Standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low-power and high-throughput implementation of the AES algorithm using the key expansion approach. We minimize the power consumption and critical path delay using the proposed high-performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance to the existing VLSI architectures in terms of power, throughput and critical path delay.
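
    When verifying such a hardware design, a software reference for test vectors is convenient. The sketch below uses the third-party pycryptodome package with a 256-bit key, matching the key size above; single-block ECB mode is used purely for illustration, not as a recommendation.

        from Crypto.Cipher import AES            # pycryptodome
        from Crypto.Random import get_random_bytes

        key = get_random_bytes(32)               # 256-bit key
        block = get_random_bytes(16)             # one 128-bit AES block
        ct = AES.new(key, AES.MODE_ECB).encrypt(block)
        assert AES.new(key, AES.MODE_ECB).decrypt(ct) == block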

  9. Architecture

    OpenAIRE

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  10. An Architecture Offering Mobile Pollution Sensing with High Spatial Resolution

    Directory of Open Access Journals (Sweden)

    Oscar Alvear

    2016-01-01

    Mobile sensing is becoming the best option to monitor our environment due to its ease of use, high flexibility, and low price. In this paper, we present a mobile sensing architecture able to monitor different pollutants using low-end sensors. Although the proposed solution can be deployed everywhere, it becomes especially meaningful in crowded cities where pollution values are often high, being of great concern to both population and authorities. Our architecture is composed of three different modules: a mobile sensor for monitoring environment pollutants, an Android-based device for transferring the gathered data to a central server, and a central processing server for analyzing the pollution distribution. Moreover, we analyze different issues related to the monitoring process: (i) filtering captured data to reduce the variability of consecutive measurements; (ii) converting the sensor output to actual pollution levels; (iii) reducing the temporal variations produced by the mobile sensing process; and (iv) applying interpolation techniques for creating detailed pollution maps. In addition, we study the best strategy to use mobile sensors by first determining the influence of sensor orientation on the captured values and then analyzing the influence of time and space sampling in the interpolation process.
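
    Two of the listed processing steps are simple enough to sketch. The following fragment (parameter values assumed for illustration, not taken from the paper) smooths consecutive readings with an exponential moving average, step (i), and estimates a map point by inverse-distance-weighted interpolation, one common choice for step (iv):

        # (i) smooth consecutive sensor readings with an exponential moving average
        def ema(readings, alpha=0.2):
            out, acc = [], None
            for r in readings:
                acc = r if acc is None else alpha * r + (1 - alpha) * acc
                out.append(acc)
            return out

        # (iv) inverse-distance-weighted (IDW) estimate at map point (x, y)
        def idw(samples, x, y, power=2.0):
            # samples: list of (x, y, value) taken along the mobile sensor's route
            num = den = 0.0
            for sx, sy, v in samples:
                d2 = (sx - x) ** 2 + (sy - y) ** 2
                if d2 == 0:
                    return v                      # exactly on a sample point
                w = 1.0 / d2 ** (power / 2)
                num += w * v
                den += w
            return num / den

        print(ema([41, 55, 48, 60, 52]))
        print(idw([(0, 0, 40.0), (1, 0, 55.0), (0, 1, 47.0)], 0.5, 0.5))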

  11. High potassium level

    Science.gov (United States)

    ... level is very high, or if you have danger signs, such as changes in an ECG. Emergency ... Seifter JL. Potassium disorders. In: Goldman L, Schafer AI, eds. Goldman-Cecil Medicine. 25th ed. Philadelphia, PA: ...

  12. Factoring symmetric indefinite matrices on high-performance architectures

    Science.gov (United States)

    Jones, Mark T.; Patrick, Merrell L.

    1990-01-01

    The Bunch-Kaufman algorithm is the method of choice for factoring symmetric indefinite matrices in many applications. However, the Bunch-Kaufman algorithm does not take advantage of high-performance architectures such as the Cray Y-MP. Three new algorithms, based on Bunch-Kaufman factorization, that take advantage of such architectures are described. Results from an implementation of the third algorithm are presented.
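
    For readers who want to experiment with the factorization itself, SciPy's ldl routine computes a symmetric-indefinite LDL^T factorization with Bunch-Kaufman-style diagonal pivoting (via LAPACK's sytrf); a minimal sketch with an illustrative matrix, not the vectorized Cray algorithms of the paper:

        import numpy as np
        from scipy.linalg import ldl

        # A symmetric indefinite matrix (eigenvalues of mixed sign), so plain
        # Cholesky would fail and pivoted LDL^T factorization is used instead.
        A = np.array([[ 2.0,  4.0, -1.0],
                      [ 4.0,  1.0,  3.0],
                      [-1.0,  3.0, -2.0]])

        L, D, perm = ldl(A)          # D is block-diagonal with 1x1/2x2 blocks
        print(np.allclose(L @ D @ L.T, A))   # True: factorization reproduces A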

  13. Evaluation of noise level in architecture department building in University of Sumatera Utara

    Science.gov (United States)

    Amran, Novrial; Damanik, Novita Hillary Christy

    2018-03-01

    Noise is one of the comfort factors that needs to be noticed, particularly in an educational environment. Exposure to high noise levels over a period of time can affect students' learning performance. The aims of this study were to determine the noise level and to propose an appropriate design to reduce noise in the Architecture Department building at the University of Sumatera Utara, considering that architecture students often spend most of their time inside the rooms. The measurement was conducted in four rooms, for two days each, from 09:00 – 12:00 and from 13:00 – 16:00, using a sound level meter placed near the noise source of each room. The results indicated that the average noise level exceeded 55 dB(A), so an appropriate design is still needed to reduce the noise that occurs in the building.
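
    A side note on the averaging step: sound levels in dB(A) are normally energy-averaged into an equivalent continuous level (Leq) rather than averaged arithmetically, since the decibel scale is logarithmic. A minimal sketch with made-up readings:

        import math

        def leq(levels_db):
            """Equivalent continuous sound level of equal-duration measurements."""
            mean_energy = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
            return 10 * math.log10(mean_energy)

        readings = [58.2, 61.5, 55.9, 63.1]   # illustrative dB(A) samples
        print(f"Leq = {leq(readings):.1f} dB(A); arithmetic mean would be "
              f"{sum(readings) / len(readings):.1f}")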

  14. Instrumentation of a Level-1 Track Trigger at ATLAS with Double Buffer Front-End Architecture

    CERN Document Server

    Cooper, B; The ATLAS collaboration

    2012-01-01

    Around 2021 the Large Hadron Collider will be upgraded to provide instantaneous luminosities of 5x10^34, leading to excessive rates from the ATLAS Level-1 trigger. We describe a double-buffer front-end architecture for the ATLAS tracker replacement which should enable tracking information to be used in the Level-1 decision. This will allow Level-1 rates to be controlled whilst preserving high efficiency for single lepton triggers at relatively low transverse momentum thresholds (pT ~ 25 GeV), enabling ATLAS to remain sensitive to physics at the electroweak scale. In particular, a potential hardware solution for the communication between the upgraded silicon barrel strip detectors and the external processing within this architecture will be described, and discrete event simulations used to demonstrate that this fits within the tight latency constraints.

  15. Alternatives generation and analysis report for immobilized low-level waste interim storage architecture

    Energy Technology Data Exchange (ETDEWEB)

    Burbank, D.A., Westinghouse Hanford

    1996-09-01

    The Immobilized Low-Level Waste Interim Storage subproject will provide storage capacity for immobilized low-level waste product sold to the U.S. Department of Energy by the privatization contractor. This report describes alternative Immobilized Low-Level Waste storage system architectures, evaluation criteria, and evaluation results to support the Immobilized Low-Level Waste storage system architecture selection decision process.

  16. Aquaponic Growbed Water Level Control Using Fog Architecture

    Science.gov (United States)

    Asmi Romli, Muhamad; Daud, Shuhaizar; Raof, Rafikha Aliana A.; Awang Ahmad, Zahari; Mahrom, Norfadilla

    2018-05-01

    Integrated Multi-Trophic Aquaculture (IMTA) is an advanced aquaculture method that combines species with different nutritional needs in a single system. The combination of aquatic life and crops is called aquaponics. Aquatic waste that is normally removed by biofilters in conventional aquaculture is instead absorbed by crops in this practice. Aquaponic systems share a few common components, of which the growbed provides the best filtration function. In the growbed, a siphon acts as the mechanical structure that controls the water fill-and-flush process. Water reaches the growbed from the fish tank at different flow rates depending on the pump specification and height. Flow that is too slow or too fast can cause the siphon to malfunction. Pumps with variable speed exist, but they are costly; the majority of aquaponic practitioners use a single-speed pump and try to match the pump speed to the siphon's operational requirements. To remove this matching requirement, some form of control needs to be introduced. As a preliminary step, this research demonstrates the fill-and-flush concept for multiple pumping speeds. The final aim of this paper is to show how water level management can remove the speed dependency. The siphon is controlled remotely, since wireless data transmission is practical over a large operational area, and a fog architecture is used to transmit sensor data and control commands. The paper shows that water can be retained in the growbed for the suggested duration by stopping the flow once a predefined level is reached.
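
    A minimal control-loop sketch of the fill-and-flush idea, with hypothetical sensor and pump interfaces standing in for the fog-connected hardware (level thresholds and polling interval are assumed values, not the paper's):

        import time

        FILL_LEVEL_CM = 25.0     # predefined level that triggers the flush (assumed)
        FLUSHED_LEVEL_CM = 5.0   # level at which the growbed is considered drained

        def read_level_cm():     # placeholder for the fog-connected level sensor
            raise NotImplementedError

        def set_pump(on: bool):  # placeholder for the remotely controlled pump
            raise NotImplementedError

        def fill_and_flush_cycle(poll_s=1.0):
            set_pump(True)                          # fill phase
            while read_level_cm() < FILL_LEVEL_CM:
                time.sleep(poll_s)
            set_pump(False)                         # stop flow; let siphon flush
            while read_level_cm() > FLUSHED_LEVEL_CM:
                time.sleep(poll_s)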

  17. UTBB FDSOI suitability for IoT applications: Investigations at device, design and architectural levels

    Science.gov (United States)

    Berthier, Florent; Beigne, Edith; Heitzmann, Frédéric; Debicki, Olivier; Christmann, Jean-Frédéric; Valentian, Alexandre; Billoint, Olivier; Amat, Esteve; Morche, Dominique; Chairat, Soundous; Sentieys, Olivier

    2016-11-01

    In this paper, we propose to analyze the suitability of Ultra Thin Body and Box FDSOI technology, and architectural solutions, for IoT applications and more specifically for autonomous Wireless Sensor Nodes (WSNs). As IoT applications are extremely diversified, there is a strong need for flexible solutions at the design and architectural levels, but also at the technological level. Moreover, as most of those systems recover their energy from the environment, they face the challenges of low-voltage supplies and low-leakage operation. We detail in this paper some Ultra Thin Body and Box FDSOI 28 nm characteristics and results demonstrating that this technology could be a perfect option for multidisciplinary IoT devices. Back-biasing capabilities and low-voltage features are investigated, demonstrating efficient high-speed/low-leakage flexibility. In addition, architectural solutions for WSN microcontrollers are also proposed, taking advantage of Ultra Thin Body and Box FDSOI characteristics for full user applicative flexibility. A partitioned architecture between an Always Responsive part, with an asynchronous Wake Up Controller (WUC) managing current WSN tasks, and an On Demand part, with a main processor for application maintenance, is presented. First results of the Always Responsive part implemented in Ultra Thin Body and Box FDSOI 28 nm are also presented.

  18. General Algorithm (High level)

    Indian Academy of Sciences (India)

    General Algorithm (High level). Iteratively. Use Tightness Property to remove points of P1,..,Pi. Use random sampling to get a Random Sample (of enough points) from the next largest cluster, Pi+1. Use the Random Sampling Procedure to approximate ci+1 using the ...

  19. Innovative on board payload optical architecture for high throughput satellites

    Science.gov (United States)

    Baudet, D.; Braux, B.; Prieur, O.; Hughes, R.; Wilkinson, M.; Latunde-Dada, K.; Jahns, J.; Lohmann, U.; Fey, D.; Karafolas, N.

    2017-11-01

    For the next generation of High Throughput (HTP) Telecommunications Satellites, space end users' needs will result in higher link speeds and an increase in the number of channels; up to 512 channels running at 10 Gbit/s. By keeping electrical interconnections based on copper, the constraints in terms of power dissipation, number of electrical wires and signal integrity will become too demanding. The replacement of the electrical links by optical links is the most suitable solution, as it provides high-speed links with low power consumption and no EMC/EMI. But replacing all the electrical links of an On Board Payload (OBP) with optical links is challenging. It is not simply a matter of replacing electrical components with optical ones; rather, the whole concept and architecture have to be rethought to achieve a highly reliable and high-performance optical solution. In this context, this paper will present the concept of an innovative OBP optical architecture. The optical architecture was defined to meet the critical requirements of the application: signal speed, number of channels, space reliability, power dissipation, optical signal crossing and component availability. The resulting architecture is challenging, and the need for new developments is highlighted. But this innovative optically interconnected architecture will substantially outperform standard electrical ones.

  20. Long term evolution and internal architecture of high-energy banner ridges of Mer d'Iroise (Western Brittany, France) : interplay of sea-level, basement morphology, biogenic productivity and hydrodynamics

    Science.gov (United States)

    Le Roy, P., Sr.; Le Dantec, N.; Franzetti, M.; Delacourt, C.; Ehrhold, A.

    2016-12-01

    The recent completion of a coupled seismic and swath bathymetric survey, conducted across the Mer d'Iroise (Atlantic continental shelf, France), provided new data for the study of the long term evolution of deep tidal sand ridges. Three major banner sand ridges composed of biogenic sands were investigated: the Banc du Four, the Haut-Fond d'Ouessant and the Banc d'Ar Men. Seismic interpretation reveals a compound internal architecture of these sand ridges, with a sedimentary core forming the lower units, interpreted to be shoreface deposits, overlain by sandwaves. Sandwave climbing, which combines progradation and accretion, is the major process controlling the growth of the ridges. The elevation of the preserved dune foresets reaches about 20 to 30 m, indicating a combination of giant dunes characterized by numerous steep (up to 20°) clinoforms corresponding to a high-energy depositional environment. All of the radiocarbon ages of the biogenic surficial deposits of the Banc du Four range from 10,036 to 2,748 cal years B.P. and suggest that the ridge has grown during the last sea-level rise. The apparent absence of recent surface deposits could be caused by a change in benthic biogenic productivity or the non-conservation of recent deposits. The multiphase accretion of the ridge is closely linked to the progressive flooding of the coastal promontories and straits that structured the igneous basement. A comparable evolutionary scheme is observed for the Haut-Fond d'Ouessant, where a counterclockwise migration of dunes characterizes the surface of the ridge. In contrast, the Banc d'Ar Men, located above a regular basement, displays a simpler structure with a consistent northwestward migration of steep clinoforms. Therefore, the sand ridges of the Mer d'Iroise should be thought of as a representative example of large-scale, high-energy banner banks controlled by the interaction of sea-level, basement morphology, biogenic productivity, and tidal and wave hydrodynamics.

  1. Development of economically viable, highly integrated, highly modular SEGIS architecture.

    Energy Technology Data Exchange (ETDEWEB)

    Enslin, Johan (Petra Solar, Inc., South Plainfield, NJ); Hamaoui, Ronald (Petra Solar, Inc., South Plainfield, NJ); Gonzalez, Sigifredo; Haddad, Ghaith (Petra Solar, Inc., South Plainfield, NJ); Rustom, Khalid (Petra Solar, Inc., South Plainfield, NJ); Stuby, Rick (Petra Solar, Inc., South Plainfield, NJ); Kuran, Mohammad (Petra Solar, Inc., South Plainfield, NJ); Mark, Evlyn (Petra Solar, Inc., South Plainfield, NJ); Amarin, Ruba (Petra Solar, Inc., South Plainfield, NJ); Alatrash, Hussam (Petra Solar, Inc., South Plainfield, NJ); Bower, Ward Isaac; Kuszmaul, Scott S.; Sena-Henderson, Lisa; David, Carolyn; Akhil, Abbas Ali

    2012-03-01

    Initiated in 2008, the SEGIS initiative is a partnership involving the U.S. DOE, Sandia National Laboratories, private sector companies, electric utilities, and universities. Projects supported under the initiative have focused on the complete-system development of solar technologies, with the dual goal of expanding renewable PV applications and addressing new challenges of connecting large-scale solar installations in higher penetrations to the electric grid. Petra Solar, Inc., a New Jersey-based company, received SEGIS funds to develop solutions to two of these key challenges: integrating increasing quantities of solar resources into the grid without compromising (and likely improving) power quality and reliability, and moving the design from a concept of intelligent system controls to successful commercialization. The resulting state-of-the-art technology now includes a distributed photovoltaic (PV) architecture comprising AC modules that not only feed directly into the electrical grid at distribution levels but are equipped with new functions that improve voltage stability and thus enhance overall grid stability. This integrated PV system technology, known as SunWave, has applications for 'Power on a Pole' and comes with a suite of technical capabilities, including advanced inverter and system controls, micro-inverters (capable of operating at both the 120 V and 240 V levels), a communication system, a network management system, and semiconductor integration. Collectively, these components are poised to reduce total system cost, increase the system's overall value and help mitigate the challenges of solar intermittency. Designed to be strategically located near the point of load, the new SunWave technology is suitable for integration directly into the electrical grid but is also suitable for emerging microgrid applications. SunWave was showcased as part of a SEGIS Demonstration Conference at Pepco Holdings, Inc., on September 29, 2011, and is presently

  2. Architecture of high reliable control systems using complex software

    International Nuclear Information System (INIS)

    Tallec, M.

    1990-01-01

    The problems involved in the use of complex software in control systems that must ensure a very high level of safety are examined. The first part gives a brief description of the PROSPER prototype; PROSPER stands for a high-performance protection system for nuclear reactors. It was installed on a French nuclear power plant at the beginning of 1987 and has been operating continuously since that time. The prototype is realized on a multi-processor system. The processors communicate among themselves using interrupts and protected shared memories. On each processor, one or more protection algorithms are implemented. Those algorithms use data coming directly from the plant and, possibly, data computed by the other protection algorithms. Each processor makes its own acquisitions from the process and sends warning messages if an operating anomaly is detected. All algorithms are activated concurrently in an asynchronous way. The results are presented and the safety-related problems are detailed. The second part concerns the validation of measurements. First, we describe how the sensors' measurements will be used in a protection system. Then, a method based on artificial intelligence techniques (expert systems and neural networks) is proposed. The last part addresses the architecture of systems comprising hardware and software: the different types of redundancy used so far are detailed, and a multi-processor architecture is proposed that uses an operating system able to manage several tasks implemented on different processors, to verify the correct operation of each of those tasks and of the related processors, and to keep the system operating, even in a degraded manner, when a failure has been detected. [fr]

  3. High level nuclear wastes

    International Nuclear Information System (INIS)

    Lopez Perez, B.

    1987-01-01

    The transformations undergone by nuclear fuels during burn-up in power reactors, for burn-up levels of 33,000 MWd/th, are considered. Graphs and data on the variation of radioactivity with cooling time and on the heat power of the irradiated fuel are presented. Likewise, the fuel cycle in light water reactors is presented, and the alternatives for nuclear waste management are discussed. A brief description of the management of spent fuel as a high level nuclear waste is given, explaining the reprocessing and giving data on the fission products and their radioactivities, which must be considered in the vitrification processes. Both alternatives coincide in the final storage of the nuclear waste in deep geological repositories. The countries supporting reprocessing are indicated, and the Spanish programme defined in the Plan Energetico Nacional (PEN) is briefly reviewed. (author) 8 figs., 4 tabs

  4. ALICE High Level Trigger

    CERN Multimedia

    Alt, T

    2013-01-01

    The ALICE High Level Trigger (HLT) is a computing farm designed and built for the real-time, online processing of the raw data produced by the ALICE detectors. Events are fully reconstructed from the raw data, analyzed and compressed. The analysis summary, together with the compressed data and a trigger decision, is sent to the DAQ. In addition, the reconstruction of the events allows for online monitoring of physical observables, and this information is provided to the Data Quality Monitor (DQM). The HLT can process event rates of up to 2 kHz for proton-proton collisions and 200 Hz for central Pb-Pb collisions.

  5. Central system of Interlock of ITER, high integrity architecture

    International Nuclear Information System (INIS)

    Prieto, I.; Martinez, G.; Lopez, C.

    2014-01-01

    The CIS (Central Interlock System), along with the CODAC system and the CSS (Central Safety System), forms the central I&C systems of ITER. The CIS is responsible for implementing the core protection functions (Central Interlock Functions) through the different plant systems, within the overall investment protection strategy for ITER. IBERDROLA provides engineering support to define and develop the CIS control architecture according to the stringent requirements of integrity, availability and response time. For functions with response times on the order of half a second, industrial-grade high-availability PLCs have been selected. However, due to the nature of the machine itself, certain functions must be able to act in under a millisecond, so a solution based on FPGAs (Field Programmable Gate Arrays), capable of meeting the requirements, had to be developed. In this article the CIS architecture is described, as well as the process for the development and validation of the selected platforms. (Author)

  6. Reprogrammable Controller Design From High-Level Specification

    Directory of Open Access Journals (Sweden)

    M. Benmohammed

    2003-10-01

    Existing techniques in high-level synthesis mostly assume a simple controller architecture model in the form of a single FSM. However, in reality more complex controller architectures are often used. On the other hand, in the case of programmable processors, the controller architecture is largely defined by the available control-flow instructions in the instruction set. With the wider acceptance of behavioral synthesis, the application of these methods to the design of programmable controllers is of fundamental importance in embedded system technology. This paper describes an important extension of an existing architectural synthesis system targeting the generation of reprogrammable ASIP architectures. The designer can then generate both styles of architecture, hardwired and programmable, using the same synthesis system, and can quickly evaluate the trade-offs of hardware decisions.

  7. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

    2011-01-01

    Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58% and 75%, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.
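
    To make the interpolation step concrete, the sketch below evaluates an echo signal at a fractional sample delay with a small bank of 4-tap interpolation filters; the coefficients are generic illustrative values, not the filters designed in the paper:

        import numpy as np

        def frac_delay_fir(signal, delay, bank):
            """Evaluate signal at an (integer + fractional) sample delay
            using a 4-tap filter chosen from a bank indexed by the fraction."""
            i = int(np.floor(delay))
            frac = delay - i
            k = int(round(frac * (len(bank) - 1)))   # nearest filter in the bank
            seg = signal[i - 1:i + 3]                # 4 samples around the delay
            return float(np.dot(bank[k], seg))

        # A tiny bank for fractions 0, 0.5 and 1 (assumed); the middle row is the
        # classic 4-tap half-sample interpolator [-1/16, 9/16, 9/16, -1/16].
        bank = np.array([[0.0, 1.0, 0.0, 0.0],
                         [-0.0625, 0.5625, 0.5625, -0.0625],
                         [0.0, 0.0, 1.0, 0.0]])

        sig = np.sin(2 * np.pi * 0.1 * np.arange(32))
        print(frac_delay_fir(sig, 10.5, bank))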

  8. Communication and Memory Architecture Design of Application-Specific High-End Multiprocessors

    Directory of Open Access Journals (Sweden)

    Yahya Jan

    2012-01-01

    This paper is devoted to the design of communication and memory architectures of massively parallel hardware multiprocessors necessary for the implementation of highly demanding applications. We demonstrated that for massively parallel hardware multiprocessors the traditionally used flat communication architectures and multi-port memories do not scale well, and the influence of the memory and communication network on both the throughput and circuit area dominates that of the processors. To resolve these problems and ensure scalability, we proposed to design highly optimized application-specific hierarchical and/or partitioned communication and memory architectures by exploring and exploiting the regularity and hierarchy of the actual data flows of a given application. Furthermore, we proposed data distribution and related data mapping schemes in the shared (global) partitioned memories with the aim of eliminating memory access conflicts, as well as ensuring that our communication design strategies remain applicable. We incorporated these architecture synthesis strategies into our quality-driven model-based multiprocessor design method and related automated architecture exploration framework. Using this framework, we performed a large series of experiments that demonstrate many important features of the synthesized memory and communication architectures. They also demonstrate that our method and related framework are able to efficiently synthesize well-scalable memory and communication architectures even for high-end multiprocessors. Gains as high as 12 times in performance and 25 times in area can be obtained when using hierarchical communication networks instead of flat networks. However, for high parallelism levels only the partitioned approach ensures scalability in performance.

  9. A meta-level architecture for strategic reasoning in naval planning (Extended abstract)

    NARCIS (Netherlands)

    Hoogendoorn, M.; Jonker, C.M.; van Maanen, P.P.; Treur, J.

    2005-01-01

    The management of naval organizations aims at the maximization of mission success by means of monitoring, planning, and strategic reasoning. This paper presents a meta-level architecture for strategic reasoning in naval planning. The architecture is instantiated with decision knowledge acquired from

  10. High-performance, scalable optical network-on-chip architectures

    Science.gov (United States)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and its contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed, along with a method for developing a GWOR of any size. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged, by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes advantage of

  11. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  12. The architecture of the CMS Level-1 Trigger Control and Monitoring System using UML

    International Nuclear Information System (INIS)

    Magrans de Abril, Marc; Ghabrous Larrea, Carlos; Lazaridis, Christos; Da Rocha Melo, Jose L; Hammer, Josef; Hartl, Christian

    2011-01-01

    The architecture of the Compact Muon Solenoid (CMS) Level-1 Trigger Control and Monitoring software system is presented. This system has been installed and commissioned on the trigger online computers and is currently used for data taking. It has been designed to handle the trigger configuration and monitoring during data taking as well as all communications with the main run control of CMS. Furthermore its design has foreseen the provision of the software infrastructure for detailed testing of the trigger system during beam down time. This is a medium-size distributed system that runs over 40 PCs and 200 processes that control about 4000 electronic boards. The architecture of this system is described using the industry-standard Universal Modeling Language (UML). This way the relationships between the different subcomponents of the system become clear and all software upgrades and modifications are simplified. The described architecture has allowed for frequent upgrades that were necessary during the commissioning phase of CMS when the trigger system evolved constantly. As a secondary objective, the paper provides a UML usage example and tries to encourage the standardization of the software documentation of large projects across the LHC and High Energy Physics community.

  13. The architecture of the CMS Level-1 Trigger Control and Monitoring System using UML

    Science.gov (United States)

    Magrans de Abril, Marc; Da Rocha Melo, Jose L.; Ghabrous Larrea, Carlos; Hammer, Josef; Hartl, Christian; Lazaridis, Christos

    2011-12-01

    The architecture of the Compact Muon Solenoid (CMS) Level-1 Trigger Control and Monitoring software system is presented. This system has been installed and commissioned on the trigger online computers and is currently used for data taking. It has been designed to handle the trigger configuration and monitoring during data taking as well as all communications with the main run control of CMS. Furthermore its design has foreseen the provision of the software infrastructure for detailed testing of the trigger system during beam down time. This is a medium-size distributed system that runs over 40 PCs and 200 processes that control about 4000 electronic boards. The architecture of this system is described using the industry-standard Universal Modeling Language (UML). This way the relationships between the different subcomponents of the system become clear and all software upgrades and modifications are simplified. The described architecture has allowed for frequent upgrades that were necessary during the commissioning phase of CMS when the trigger system evolved constantly. As a secondary objective, the paper provides a UML usage example and tries to encourage the standardization of the software documentation of large projects across the LHC and High Energy Physics community.

  14. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    Science.gov (United States)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    High-performance computing is in high demand for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is one technology that can deliver high speed with zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and has three sets of basic gates. Series reciprocal transmission lines are placed between gates to avoid loss of power and to achieve high speed. An analytical model of a bit-level architecture is developed in RQL. The major drawback of Reciprocal Quantum Logic is area: to achieve a proper power supply, splitters are needed, which occupy a large area. Distributed arithmetic performs a vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the bits are rearranged and mixed to form the distributed system. Distributed arithmetic is widely used in convolution and high-performance computational devices.
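
    Since the abstract leans on distributed arithmetic, a small worked sketch may help: it computes a fixed-coefficient inner product bit-serially, with a lookup table in place of multipliers. Coefficients and word length are illustrative assumptions, not values from the paper:

        COEFFS = [3, -5, 7, 2]   # constant vector (illustrative)
        BITS = 8                 # word length of the signed (two's-complement) inputs

        # LUT entry for every combination of one bit taken from each input.
        LUT = [sum(c for c, bit in zip(COEFFS, format(m, f"0{len(COEFFS)}b"))
                   if bit == "1")
               for m in range(2 ** len(COEFFS))]

        def da_dot(xs):
            acc = 0
            for b in range(BITS):
                addr = 0
                for x in xs:                     # gather bit b of every input
                    addr = (addr << 1) | ((x >> b) & 1)
                term = LUT[addr]
                # The MSB of a two's-complement word carries negative weight.
                acc += (-term if b == BITS - 1 else term) << b
            return acc

        xs = [10, 20, -3, 5]
        masked = [x & (2 ** BITS - 1) for x in xs]   # 8-bit two's-complement form
        assert da_dot(masked) == sum(c * x for c, x in zip(COEFFS, xs))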

  15. Advanced laser architectures for high power eyesafe illuminators

    Science.gov (United States)

    Baranova, N.; Pati, B.; Stebbins, K.; Bystryak, I.; Rayno, M.; Ezzo, K.; DePriest, C.

    2018-02-01

    Q-Peak has demonstrated a novel pulsed eyesafe laser architecture operating with >50 mJ pulse energies at Pulse Repetition Frequencies (PRFs) as high as 320 Hz. The design leverages an Optical Parametric Oscillator (OPO) and Optical Parametric Amplifier (OPA) geometry, which provides the unique capability for high power in a comparatively compact package, while also offering the potential for additional eyesafe power scaling. The laser consists of a Commercial Off-the-Shelf (COTS) Q-switched front-end seed laser to produce pulse-widths around 10 ns at 1.06 μm, which is then followed by a pair of Multi-Pass Amplifier (MPA) architectures (comprised of side-pumped, multi-pass Nd:YAG slabs with a compact diode-pump-array imaging system), and finally involving two sequential nonlinear optical conversion architectures for transfer into the eyesafe regime. The initial seed beam is first amplified through the MPA, and then split into parallel optical paths. An OPO provides effective nonlinear conversion on one optical path, while a second MPA further amplifies the 1.06 μm beam for use in pumping an OPA on the second optical path. These paths are then recombined prior to seeding the OPA. Each nonlinear conversion subsystem utilizes Potassium Titanyl Arsenate (KTA) for effective nonlinear conversion with lower risk of optical damage. This laser architecture efficiently produces pulse energies of >50 mJ in the eyesafe band at PRFs as high as 320 Hz, and has been designed to fit within a volume of 4,500 in³ (0.074 m³). We will discuss theoretical and experimental details of the nonlinear optical system for achieving higher eyesafe powers.

  16. High blood cholesterol levels

    Science.gov (United States)

    Cholesterol - high; Lipid disorders; Hyperlipoproteinemia; Hyperlipidemia; Dyslipidemia; Hypercholesterolemia ... There are many types of cholesterol. The ones talked about most are: ... lipoprotein (HDL) cholesterol -- often called "good" cholesterol ...

  17. Progress in a novel architecture for high performance processing

    Science.gov (United States)

    Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin

    2018-04-01

    High performance processing (HPP) is an innovative architecture that targets high performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications like supercomputing, machine learning and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, the first generation of HPP cores, has been successfully taped out in the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low-power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed in the TSMC 16 nm FFC technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).

  18. ECOSUSTAINABLE HIGH-RISE : The Environmentally Conscious Architecture of Skyscraper

    Directory of Open Access Journals (Sweden)

    Jimmy Priatman

    2000-01-01

    The term "green architecture" refers to an evolving architecture which is sensitive to the environment and emerges from environmental awareness of the destruction of air, water, energy and earth. It is characterized by improved energy efficiency, the concept of sustainability, and a holistic approach to the entire building enterprise, in which all environmental factors are regarded as objectives. Although there are many environmentally conscious architectural works today, most building designers prefer to deal primarily with small-scale buildings (low to medium rise), and often only on greenfield, rural or suburban sites. Large-scale, high-rise or tall buildings located in dense urban areas are regarded as avoidable objects that consume a lot of energy, use huge amounts of materials, and produce massive volumes of waste discharged into the environment. These intensive buildings deserve greater attention, and a greater part of our expertise and effort in ecological design, than smaller buildings with fewer problems. The paper discusses "green" dimensions applied to tall/high-rise buildings and the innovative approaches that lead to ecosustainable tall buildings.

  19. Biosensor Architectures for High-Fidelity Reporting of Cellular Signaling

    Science.gov (United States)

    Dushek, Omer; Lellouch, Annemarie C.; Vaux, David J.; Shahrezaei, Vahid

    2014-01-01

    Understanding mechanisms of information processing in cellular signaling networks requires quantitative measurements of protein activities in living cells. Biosensors are molecular probes that have been developed to directly track the activity of specific signaling proteins and their use is revolutionizing our understanding of signal transduction. The use of biosensors relies on the assumption that their activity is linearly proportional to the activity of the signaling protein they have been engineered to track. We use mechanistic mathematical models of common biosensor architectures (single-chain FRET-based biosensors), which include both intramolecular and intermolecular reactions, to study the validity of the linearity assumption. As a result of the classic mechanism of zero-order ultrasensitivity, we find that biosensor activity can be highly nonlinear so that small changes in signaling protein activity can give rise to large changes in biosensor activity and vice versa. This nonlinearity is abolished in architectures that favor the formation of biosensor oligomers, but oligomeric biosensors produce complicated FRET states. Based on this finding, we show that high-fidelity reporting is possible when a single-chain intermolecular biosensor is used that cannot undergo intramolecular reactions and is restricted to forming dimers. We provide phase diagrams that compare various trade-offs, including observer effects, which further highlight the utility of biosensor architectures that favor intermolecular over intramolecular binding. We discuss challenges in calibrating and constructing biosensors and highlight the utility of mathematical models in designing novel probes for cellular signaling. PMID:25099816
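
    The zero-order ultrasensitivity the authors invoke can be reproduced numerically with the classic Goldbeter-Koshland expression for the steady-state modified fraction of a substrate in a push-pull (e.g. kinase/phosphatase) cycle; a minimal sketch with assumed parameter values:

        import math

        def goldbeter_koshland(v1, v2, J1, J2):
            """Steady-state modified fraction for forward rate v1, reverse rate
            v2, and scaled Michaelis constants J1, J2 (classic G-K formula)."""
            B = v2 - v1 + J1 * v2 + J2 * v1
            return 2 * v1 * J2 / (B + math.sqrt(B * B - 4 * (v2 - v1) * v1 * J2))

        # With small Michaelis constants (saturated enzymes, the "zero-order"
        # regime), a ~10% change in kinase activity flips the output almost
        # completely -- the nonlinearity the biosensor models have to contend with.
        for v1 in (0.95, 1.05):
            print(v1, round(goldbeter_koshland(v1, 1.0, 0.01, 0.01), 3))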

  20. High Performance Systolic Array Core Architecture Design for DNA Sequencer

    Directory of Open Access Journals (Sweden)

    Saiful Nurdin Dayana

    2018-01-01

    This paper presents a high performance systolic array (SA) core architecture design for a Deoxyribonucleic Acid (DNA) sequencer. The core implements the affine gap penalty score Smith-Waterman (SW) algorithm. This time-consuming local alignment algorithm guarantees optimal alignment between DNA sequences, but it requires quadratic computation time when performed on standard desktop computers. The use of a linear SA decreases the time complexity from quadratic to linear. In addition, with the exponential growth of DNA databases, the SA architecture is used to overcome the timing issue. In this work, the SW algorithm has been captured using the Verilog Hardware Description Language (HDL) and simulated using the Xilinx ISIM simulator. The proposed design has been implemented in a Xilinx Virtex-6 Field Programmable Gate Array (FPGA), achieving a 90% reduction in core area.
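
    For readers who want the recurrence the systolic cells evaluate, here is a minimal software reference of the affine-gap Smith-Waterman algorithm (Gotoh-style H/E/F matrices); the scoring parameters are illustrative, not those of the paper:

        def smith_waterman_affine(a, b, match=2, mismatch=-1,
                                  gap_open=-3, gap_extend=-1):
            # gap_open is the cost of the first gap position (open + extend folded in)
            n, m = len(a), len(b)
            NEG = float("-inf")
            H = [[0.0] * (m + 1) for _ in range(n + 1)]   # best score ending at (i, j)
            E = [[NEG] * (m + 1) for _ in range(n + 1)]   # alignment ending in a gap in a
            F = [[NEG] * (m + 1) for _ in range(n + 1)]   # alignment ending in a gap in b
            best = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    E[i][j] = max(E[i][j - 1] + gap_extend, H[i][j - 1] + gap_open)
                    F[i][j] = max(F[i - 1][j] + gap_extend, H[i - 1][j] + gap_open)
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
                    best = max(best, H[i][j])
            return best

        print(smith_waterman_affine("GGTTGACTA", "TGTTACGG"))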

  1. High Dynamic Range Cognitive Radio Front Ends: Architecture to Evaluation

    Science.gov (United States)

    Ashok, Arun; Subbiah, Iyappan; Varga, Gabor; Schrey, Moritz; Heinen, Stefan

    2016-07-01

    The advent of TV white space digitization has released frequencies from 470 MHz to 790 MHz for opportunistic use. The secondary user can utilize these so-called TV white spaces in the absence of primary users. The most important challenge for this coexistence is mutual interference. While strong TV stations can completely saturate the receiver of a cognitive radio (CR), the CR's spurious tones can disturb primary users and other white space devices. The aim of this paper is to address the challenges of enabling cognitive radio applications in WLAN and LTE. In this process, architectural considerations for the design of cognitive radio front ends are discussed. With high-IF converters, a faster and more flexible implementation of CR-enabled WLAN and LTE is shown. The effectiveness of the architecture is shown by evaluating the CR front ends for compliance with the 802.11b/g (WLAN) and 3GPP TS 36.101 (LTE) standards.

  2. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    Science.gov (United States)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. Contrasting with other designs with similar functionality, the presented architecture is supported by a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
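
    As a concrete instance of the 2-D transforms such cores compute, the sketch below applies the order-4 HEVC core transform as two separable 1-D matrix passes; the intermediate scaling and rounding stages of the real codec are omitted for clarity:

        import numpy as np

        # Order-4 HEVC core transform matrix (integer approximation of the DCT).
        C4 = np.array([[64,  64,  64,  64],
                       [83,  36, -36, -83],
                       [64, -64, -64,  64],
                       [36, -83,  83, -36]])

        def transform_2d(block):
            # Separable 2-D transform: 1-D transform of columns, then of rows.
            return C4 @ block @ C4.T

        X = np.arange(16).reshape(4, 4)   # toy residual block
        print(transform_2d(X))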

  3. Embedded Real-Time Architecture for Level-Set-Based Active Contours

    Directory of Open Access Journals (Sweden)

    Dejnožková Eva

    2005-01-01

    Methods described by partial differential equations have gained considerable interest because of undoubted advantages such as an easy mathematical description of the underlying physical phenomena, subpixel precision, isotropy, and direct extension to higher dimensions. Though their implementation within the level set framework offers other interesting advantages, their wide industrial deployment on embedded systems is slowed down by their considerable computational cost. This paper exploits the high parallelization potential of the operators from the level set framework and proposes a scalable, asynchronous, multiprocessor platform suitable for system-on-chip solutions. We concentrate on obtaining real-time execution capabilities. The performance is evaluated on a continuous watershed and an object-tracking application based on a simple gradient-based attraction force driving the active contour. The proposed architecture can be realized on commercially available FPGAs. It is built around general-purpose processor cores, and can run code developed with usual tools.

  4. Highly Adjustable Systems: An Architecture for Future Space Observatories

    Science.gov (United States)

    Arenberg, Jonathan; Conti, Alberto; Redding, David; Lawrence, Charles R.; Hachkowski, Roman; Laskin, Robert; Steeves, John

    2017-06-01

    Mission costs for groundbreaking space astronomical observatories are increasing to the point of unsustainability. We are investigating the use of adjustable or correctable systems as a means to reduce development and therefore mission costs. This poster introduces the promise and possibility of realizing a "net zero CTE" system for the general problem of observatory design and introduces the basic systems architecture we are considering. The poster concludes with an overview of our planned study and demonstrations for proving the value and worth of highly adjustable telescopes and systems ahead of the upcoming decadal survey.

  5. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for the in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
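
    To illustrate the per-sample work such an architecture parallelizes, the sketch below performs a 1-D pass of the HEVC SAO edge-offset classification: each sample is compared with its two neighbors along the chosen direction and the offset of its category is added. Offset values are illustrative, not encoder-derived:

        def sign(x):
            return (x > 0) - (x < 0)

        def sao_edge_offset(samples, offsets):
            """1-D edge-offset pass; offsets[cat] for categories 1..4, category 0
            (flat/monotonic) is left unchanged, per the HEVC category map."""
            out = list(samples)
            for i in range(1, len(samples) - 1):
                c, left, right = samples[i], samples[i - 1], samples[i + 1]
                edge_idx = 2 + sign(c - left) + sign(c - right)
                # edge_idx: 0 local min, 1 concave edge, 2 none, 3 convex edge,
                # 4 local max -- remapped to SAO categories below.
                cat = {0: 1, 1: 2, 2: 0, 3: 3, 4: 4}[edge_idx]
                if cat:
                    out[i] = c + offsets[cat]
            return out

        print(sao_edge_offset([10, 14, 11, 11, 18, 13],
                              {1: 2, 2: 1, 3: -1, 4: -2}))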

  6. The high speed interconnect system architecture and operation

    Science.gov (United States)

    Anderson, Steven C.

    The design and operation of a fiber-optic high-speed interconnect system (HSIS) being developed to meet the requirements of future avionics and flight-control hardware with distributed-system architectures are discussed. The HSIS is intended for 100-Mb/s operation of a local-area network with up to 256 stations. It comprises a bus transmission system (passive star couplers and linear media linked by active elements) and network interface units (NIUs). Each NIU is designed to perform the physical, data link, network, and transport functions defined by the ISO OSI Basic Reference Model (1982 and 1983) and incorporates a fiber-optic transceiver, a high-speed protocol based on the SAE AE-9B linear token-passing data bus (1986), and a specialized application interface unit. The operating modes and capabilities of HSIS are described in detail and illustrated with diagrams.

  7. Building and measuring a high performance network architecture

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  8. Architecture for high performance stereoscopic game rendering on Android

    Science.gov (United States)

    Flack, Julien; Sanderson, Hugh; Shetty, Sampath

    2014-03-01

    Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real-time. The architecture includes a method of analyzing 2D games and using rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing the performance in comparison to more traditional render techniques, including depth based image rendering, both in terms of frame rates and impact on battery consumption.

  9. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell; Heinecke, Alexander; Pabst, Hans; Henry, Greg; Parsani, Matteo; Keyes, David E.

    2016-01-01

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek's order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek's most flop-intense methods.

  10. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell

    2016-06-14

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek's order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek's most flop-intense methods.

  11. Firewall Architectures for High-Speed Networks: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Errin W. Fulp

    2007-08-20

    Firewalls are a key component for securing networks that are vital to government agencies and private industry. They enforce a security policy by inspecting and filtering traffic arriving at or departing from a secure network. While performing these critical security operations, firewalls must act transparently to legitimate users, with little or no effect on the perceived network performance (QoS). Packets must be inspected and compared against increasingly complex rule sets and tables, which is a time-consuming process. As a result, current firewall systems can introduce significant delays and are unable to maintain QoS guarantees. Furthermore, firewalls are susceptible to Denial of Service (DoS) attacks that merely overload/saturate the firewall with illegitimate traffic. Current firewall technology only offers a short-term solution that is not scalable; therefore, the objective of this DOE project was to develop new firewall optimization techniques and architectures that meet these important challenges. Firewall optimization concerns decreasing the number of comparisons required per packet, which reduces processing time and delay. This is done by reorganizing policy rules via special sorting techniques that maintain the original policy integrity. This research is important since it applies to current and future firewall systems. Another method for increasing firewall performance is with new firewall designs. The architectures under investigation consist of multiple firewalls that collectively enforce a security policy. Our innovative distributed systems quickly divide traffic across different levels based on perceived threat, allowing traffic to be processed in parallel (beyond current firewall sandwich technology). Traffic deemed safe is transmitted to the secure network, while remaining traffic is forwarded to lower levels for further examination. The result of this divide-and-conquer strategy is lower delays for legitimate traffic, higher throughput
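
    The rule-reordering idea lends itself to a compact sketch: frequently matched rules are promoted toward the front to cut the average comparisons per packet, but a rule is never moved above another rule whose match set it overlaps, preserving first-match semantics. This is a generic illustration of the principle, not the project's actual algorithm:

        def overlaps(r1, r2):
            """Rules as dicts of field -> set of values (same fields assumed);
            they overlap if every field intersects, i.e. a packet could match both."""
            return all(r1[f] & r2[f] for f in r1 if f != "id")

        def reorder(rules, hits):
            rules = list(rules)
            moved = True
            while moved:                  # bubble high-hit rules upward
                moved = False
                for i in range(1, len(rules)):
                    a, b = rules[i - 1], rules[i]
                    if hits[b["id"]] > hits[a["id"]] and not overlaps(a, b):
                        rules[i - 1], rules[i] = b, a
                        moved = True
            return rules

        rules = [{"id": "r1", "src": {1, 2}, "dst": {9}},
                 {"id": "r2", "src": {3}, "dst": {9}},
                 {"id": "r3", "src": {1}, "dst": {9}}]
        hits = {"r1": 5, "r2": 90, "r3": 40}
        # r2 rises above r1 (disjoint match sets); r3 stays below r1 (they overlap).
        print([r["id"] for r in reorder(rules, hits)])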

  12. Genetic architecture of vitamin B12 and folate levels uncovered applying deeply sequenced large datasets

    DEFF Research Database (Denmark)

    Grarup, Niels; Sulem, Patrick; Sandholt, Camilla H

    2013-01-01

    of the underlying biology of human traits and diseases. Here, we used a large Icelandic whole genome sequence dataset combined with Danish exome sequence data to gain insight into the genetic architecture of serum levels of vitamin B12 (B12) and folate. Up to 22.9 million sequence variants were analyzed in combined...... in serum B12 or folate levels do not modify the risk of developing these conditions. Yet, the study demonstrates the value of combining whole genome and exome sequencing approaches to ascertain the genetic and molecular architectures underlying quantitative trait associations....

  13. The CMS High Level Trigger System

    CERN Document Server

    Afaq, A; Bauer, G; Biery, K; Boyer, V; Branson, J; Brett, A; Cano, E; Carboni, A; Cheung, H; Ciganek, M; Cittolin, S; Dagenhart, W; Erhan, S; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutiérrez-Mlot, E; Gutleber, J; Jacobs, C; Kim, J C; Klute, M; Kowalkowski, J; Lipeles, E; Lopez-Perez, Juan Antonio; Maron, G; Meijers, F; Meschi, E; Moser, R; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Rácz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sexton-Kennedy, E; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2007-01-01

    The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.

  14. Contemporary moment of residential architecture at the global level: HOUSING 15

    Directory of Open Access Journals (Sweden)

    Petrović Vladana

    2017-01-01

    Full Text Available That architectural exhibitions are an indispensable and significant part of the history of architecture has been proven by numerous exhibitions dating back to the first decades of the 20th century: the Paris exhibitions (Salon d'Automne), where three manifesto exhibition designs by Le Corbusier were presented, promoting a new system of values of the forthcoming modernist movement; the Berlin exhibitions in the second half of the 20th century (Interbau 1957, IBA 1987), where the Postmodern was promoted; and, in the second decade of the 21st century, the Venice Biennial (La Biennale di Venezia, 2014), whose uniting topic was One Hundred Years of 'Modernity' (Prof. Arch. Darko Marušić, quoted from the HOUSING 15 catalogue). HOUSING 15 is an exhibition created on the initiative of the Department of Residential Building, Faculty of Civil Engineering and Architecture, University of Nis, in order to present modern housing architecture at the global level. The exhibition was shown at BINA 2016 and was followed by a round-table discussion on the topic Contemporary moment of residential architecture at the global level. The idea of the round table was to compare domestic and international experience in this field and to draw attention to how the present electronic age is shaping the development of residential architecture. The specificity of this exhibition, compared to other events of a similar nature, is that in addition to the architectural designs, scientific expert reviews of the selected works, given by the international scientific and artistic committee of the exhibition, are also presented. The paper is a summary of the discussion held at the round table, and it presents the potential problems, answers and conclusions relating to residential architecture today from a professional perspective.

  15. Implementations of a four-level mechanical architecture for fault-tolerant robots

    International Nuclear Information System (INIS)

    Hooper, Richard; Sreevijayan, Dev; Tesar, Delbert; Geisinger, Joseph; Kapoor, Chelan

    1996-01-01

    This paper describes a fault-tolerant mechanical architecture with four levels, devised and implemented in concert with NASA (Tesar, D. and Sreevijayan, D., Four-level fault tolerance in manipulator design for space operations. In First Int. Symp. Measurement and Control in Robotics (ISMCR '90), Houston, Texas, 20-22 June 1990). Subsequent work has clarified and revised the architecture. The four levels proceed from fault tolerance at the actuator level, to fault tolerance via in-parallel chains, to fault tolerance using serial kinematic redundancy, and finally to the fault tolerance that multiple-arm systems provide. This is a subsumptive architecture because each successive layer can incorporate the fault tolerance provided by all layers beneath; for instance, a serially-redundant robot can incorporate dual fault-tolerant actuators. Redundant systems provide the fault tolerance, but the guiding principle of this architecture is that functional redundancies actively increase the performance of the system; redundancies do not simply remain dormant until needed. This paper includes specific examples of hardware and/or software implementations at all four levels.

  16. TS-05: 150 lines of Java with high architectural complexity

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2005-01-01

    In the short time span available in a software architecture course, it is difficult to find a software system that is both interesting from an architectural perspective and so small that it does not overwhelm the students. We present TS-05, which is a bare 150-line Java "toy system" that never...

  17. Modelling of local/global architectures for second level trigger at the LHC experiment

    International Nuclear Information System (INIS)

    Hajduk, Z.; Iwanski, W.; Korecyl, K.; Strong, J.

    1994-01-01

    Different architectures of the second-level triggering system for experiments at the LHC have been simulated. The basic scheme was a local/global system with distributed computing power. As a tool, the authors used the object-oriented MODSIM II language.

  18. High-performance full adder architecture in quantum-dot cellular automata

    Directory of Open Access Journals (Sweden)

    Hamid Rashidi

    2017-06-01

    Full Text Available Quantum-dot cellular automata (QCA) is a new and promising computation paradigm, which can be a viable replacement for complementary metal–oxide–semiconductor technology at the nanoscale. This technology provides a possible solution for improving computation in various applications. Two QCA full adder architectures are presented and evaluated: a new and efficient 1-bit QCA full adder architecture and a 4-bit QCA ripple carry adder (RCA) architecture. The proposed architectures are simulated using the QCADesigner tool, version 2.0.1, and are implemented with the coplanar crossover approach. The simulation results show that the proposed 1-bit QCA full adder and 4-bit QCA RCA architectures utilise 33 and 175 QCA cells, respectively, and that they outperform most results so far in the literature.
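
    Since the record describes the logical composition rather than the cell-level layout, a small behavioural sketch may be useful. The code below implements generic textbook full-adder logic and its 4-bit ripple carry composition; it says nothing about the paper's actual QCA cell design:

    ```python
    # Behavioural model of a 1-bit full adder and a 4-bit ripple carry adder.
    # Purely illustrative: a QCA design realizes these equations in cells,
    # clocking zones and coplanar crossovers, none of which appear here.
    def full_adder(a, b, cin):
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    def ripple_carry_4bit(a, b, cin=0):
        """Add two 4-bit integers, rippling the carry from LSB to MSB."""
        s, c = 0, cin
        for i in range(4):
            bit, c = full_adder((a >> i) & 1, (b >> i) & 1, c)
            s |= bit << i
        return s, c

    assert ripple_carry_4bit(9, 7) == (0, 1)   # 9 + 7 = 16: sum 0000, carry 1
    ```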

  19. High-level-waste immobilization

    International Nuclear Information System (INIS)

    Crandall, J.L.

    1982-01-01

    Analysis of risks, environmental effects, process feasibility, and costs for disposal of immobilized high-level wastes in geologic repositories indicates that the disposal system safety has a low sensitivity to the choice of the waste disposal form.

  20. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independence characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  1. Research of Smart Grid Cyber Architecture and Standards Deployment with High Adaptability for Security Monitoring

    DEFF Research Database (Denmark)

    Hu, Rui; Hu, Weihao; Chen, Zhe

    2015-01-01

    Security monitoring is a critical function for the smart grid. Because the grid relies heavily on communication, cyber security must be guaranteed by the supporting system; otherwise, DR signals and bidding information can easily be forged or intercepted, and customers' privacy and safety may suffer huge losses. Although the OpenADR specifications provide continuous, secure and reliable two-way communication at the application level of the ISO model, OpenADR adopts an open security architecture and restricts itself to no specific or proprietary technologies.... It is therefore important to develop a security monitoring system. This paper discusses the cyber architecture of the smart grid with high adaptability for security monitoring. An adaptable structure with a Demilitarized Zone (DMZ) is proposed. Focusing on this network structure, the rational utilization of standards...

  2. High accuracy amplitude and phase measurements based on a double heterodyne architecture

    International Nuclear Information System (INIS)

    Zhao Danyang; Wang Guangwei; Pan Weimin

    2015-01-01

    In the digital low level RF (LLRF) system of a circular (particle) accelerator, the RF field signal is usually down-converted to a fixed intermediate frequency (IF). The ratio of the IF to the sampling frequency determines the processing required, and differs in various LLRF systems. It is generally desirable to design a universally compatible architecture for different IFs with no change to the sampling frequency and algorithm. A new RF detection method based on a double heterodyne architecture for a wide IF range has been developed, which achieves the high accuracy requirement of modern LLRF. In this paper, the relation between IF and phase error is systematically analyzed for the first time and verified by experiments. The effects of temperature drift over 16 h of IF detection are suppressed by amplitude and phase calibration. (authors)
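
    For readers unfamiliar with digital IF detection, a generic single-stage I/Q sketch is shown below (assumed textbook processing, not the paper's double-heterodyne chain; the frequencies and signal parameters are invented): the sampled IF signal is mixed with a numerical local oscillator and averaged, which yields its amplitude and phase.

    ```python
    # Generic digital I/Q amplitude/phase detection of an IF signal.
    import numpy as np

    fs, f_if = 100e6, 25e6                    # sampling and intermediate freq.
    n = np.arange(1024)
    x = 0.7 * np.cos(2 * np.pi * f_if / fs * n + 0.3)   # A = 0.7, phi = 0.3

    lo = np.exp(-2j * np.pi * f_if / fs * n)  # numerical local oscillator
    iq = np.mean(x * lo)                      # mix, then low-pass by averaging
    amplitude, phase = 2 * np.abs(iq), np.angle(iq)
    print(f"A = {amplitude:.3f}, phi = {phase:.3f} rad")   # ~0.700, ~0.300
    ```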

  3. Parametric Approach to Assessing Performance of High-Lift Device Active Flow Control Architectures

    Directory of Open Access Journals (Sweden)

    Yu Cai

    2017-02-01

    Full Text Available Active Flow Control is at present an area of considerable research, with multiple potential aircraft applications. While the majority of research has focused on the performance of the actuators themselves, a system-level perspective is necessary to assess the viability of proposed solutions. This paper demonstrates such an approach, in which major system components are sized based on system flow and redundancy considerations, with the impacts linked directly to the mission performance of the aircraft. Considering the case of a large twin-aisle aircraft, four distinct active flow control architectures that facilitate the simplification of the high-lift mechanism are investigated using the demonstrated approach. The analysis indicates a very strong influence of system total mass flow requirement on architecture performance, both for a typical mission and also over the entire payload-range envelope of the aircraft.

  4. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by simplifying the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
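
    The algorithm at the heart of such decoders is compact enough to sketch. Below is a textbook hard-decision Viterbi decoder for a toy rate-1/2, 4-state convolutional code with generators (7,5) in octal; it is illustrative only, since a TCM decoder would add signal-set partitioning and non-binary branch metrics:

    ```python
    # Hard-decision Viterbi decoding of a 4-state, rate-1/2 code (7,5 octal).
    G = [0b111, 0b101]                      # generator polynomials

    def encode_step(state, bit):
        reg = (bit << 2) | state            # 3-bit register, newest bit on top
        out = tuple(bin(reg & g).count("1") & 1 for g in G)   # parity of taps
        return out, reg >> 1                # two code bits, next 2-bit state

    def viterbi(received):
        metric, paths = {0: 0}, {0: []}     # start in the all-zero state
        for r in received:
            new_m, new_p = {}, {}
            for s, m in metric.items():
                for bit in (0, 1):
                    out, ns = encode_step(s, bit)
                    d = m + sum(a != b for a, b in zip(out, r))  # Hamming
                    if ns not in new_m or d < new_m[ns]:         # survivor
                        new_m[ns], new_p[ns] = d, paths[s] + [bit]
            metric, paths = new_m, new_p
        return paths[min(metric, key=metric.get)]

    msg = [1, 0, 1, 1, 0, 0]                # message plus two flush zeros
    state, code = 0, []
    for b in msg:
        out, state = encode_step(state, b)
        code.append(out)
    assert viterbi(code) == msg             # exact recovery, clean channel
    ```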

  5. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    Science.gov (United States)

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
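
    As a point of reference for the computation being accelerated, the following is a minimal NumPy sketch of one contrastive-divergence (CD-1) weight update for a small binary RBM (sizes, seed and learning rate are invented, and biases are omitted; this is the arithmetic the paper maps onto FPGA pipelines, not its hardware design):

    ```python
    # One CD-1 weight update for a toy binary RBM (biases omitted).
    import numpy as np

    rng = np.random.default_rng(1)
    n_vis, n_hid, lr = 16, 8, 0.1
    W = 0.01 * rng.standard_normal((n_vis, n_hid))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_delta(v0):
        ph0 = sigmoid(v0 @ W)                         # hidden probabilities
        h0 = (ph0 > rng.random(n_hid)).astype(float)  # sampled hidden states
        v1 = sigmoid(h0 @ W.T)                        # reconstruction
        ph1 = sigmoid(v1 @ W)
        # Both phases are dense matrix products, which is what makes the
        # computation amenable to deeply pipelined hardware engines.
        return lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

    v = rng.integers(0, 2, n_vis).astype(float)       # one binary data vector
    W += cd1_delta(v)
    ```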

  6. Coherent beam combining architectures for high power tapered laser arrays

    Science.gov (United States)

    Schimmel, G.; Janicot, S.; Hanna, M.; Decker, J.; Crump, P.; Erbert, G.; Witte, U.; Traub, M.; Georges, P.; Lucas-Leclin, G.

    2017-02-01

    Coherent beam combining (CBC) aims at increasing the spatial brightness of lasers. It consists of maintaining a constant phase relationship between different emitters, in order to combine them constructively into one single beam. We have investigated the CBC of an array of five individually-addressable high-power tapered laser diodes at λ = 976 nm, in two architectures: the first utilizes the self-organization of the lasers in an interferometric extended cavity, which ensures their mutual coherence; the second relies on the injection of the emitters by a single-frequency laser diode. In both cases, the coherent combining of the phase-locked beams is ensured on the front side of the array by a transmission diffractive grating with 98% efficiency. The passive phase-locking of the laser bar is obtained up to 5 A (per emitter). An optimization algorithm is implemented to find the proper currents in the five ridge sections that ensure the maximum combined power on the front side. Under these conditions we achieve a maximum combined power of 7.5 W. In the active MOPA configuration, we can increase the currents in the tapered sections up to 6 A and get a combined power of 11.5 W, corresponding to a combining efficiency of 76%. The efficiency is limited by the beam quality of the tapered emitters and by fast phase fluctuations between emitters. Still, these results confirm the potential of CBC approaches with tapered lasers to provide a high-power, high-brightness beam, and they compare well with the current state of the art for laser diodes.

  7. Prototype architecture for a VLSI level zero processing system. [Space Station Freedom

    Science.gov (United States)

    Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.

    1989-01-01

    The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Due to a new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, easy maintainability, and increased reliability. Although extensive control functions are implemented in hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with a combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream will be steered into different processing channels in accordance with their virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rates exceeding 150 Mbps will be available from commercial vendors due to advances in disk drive technology.

  8. High Level Radioactive Waste Management

    International Nuclear Information System (INIS)

    1991-01-01

    The proceedings of the second annual international conference on High Level Radioactive Waste Management, held on April 28-May 3, 1991, Las Vegas, Nevada, provide information on the current technical issues related to international high level radioactive waste management activities and how they relate to society as a whole. Besides discussing such technical topics as the best form of the waste, the integrity of storage containers, and the design and construction of a repository, the broader social aspects of these issues are explored in papers on such subjects as conformance to regulations, transportation safety, and public education. By providing this wider perspective of high level radioactive waste management, it becomes apparent that the various disciplines involved in this field are interrelated and that they should work to integrate their waste management activities. Individual records are processed separately for the databases.

  9. High-level Petri Nets

    DEFF Research Database (Denmark)

    High-level Petri nets are now widely used in both theoretical analysis and practical modelling of concurrent systems. The main reason for the success of this class of net models is that they make it possible to obtain much more succinct and manageable descriptions than can be obtained by means... The relevant papers have, however, appeared in various journals and collections. As a result, much of this knowledge is not readily available to people who may be interested in using high-level nets. Within the Petri net community this problem has been discussed many times, and as an outcome this book has been compiled. The book contains reprints of some of the most important papers on the application and theory of high-level Petri nets. In this way it makes the relevant literature more available. It is our hope that the book will be a useful source of information and that, e.g., it can be used in the organization of Petri net courses.

  10. High-Level Radioactive Waste.

    Science.gov (United States)

    Hayden, Howard C.

    1995-01-01

    Presents a method to calculate the amount of high-level radioactive waste by taking into consideration the following factors: the fission process that yields the waste, identification of the waste, the energy required to run a 1-GWe plant for one year, and the uranium mass required to produce that energy. Briefly discusses waste disposal and…

  11. High-level radioactive wastes

    International Nuclear Information System (INIS)

    Grissom, M.C.

    1982-10-01

    This bibliography contains 812 citations on high-level radioactive wastes included in the Department of Energy's Energy Data Base from January 1981 through July 1982. These citations are to research reports, journal articles, books, patents, theses, and conference papers from worldwide sources. Five indexes are provided: Corporate Author, Personal Author, Subject, Contract Number, and Report Number.

  12. RPython high-level synthesis

    Science.gov (United States)

    Cieszewski, Radoslaw; Linczuk, Maciej

    2016-09-01

    The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. A compiler interprets an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translates it to a Hardware Description Language (HDL). This paper presents an RPython-based High-Level Synthesis (HLS) compiler. The compiler takes the configuration parameters and maps an RPython program to VHDL, which can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software as a result of omitting the fetch-decode-execute operations of General Purpose Processors (GPPs) and introducing more parallel computation, which can be exploited by utilizing many resources at the same time. Creating parallel algorithms computed with FPGAs in pure HDL is difficult and time consuming, but implementation time can be greatly reduced with a High-Level Synthesis compiler. This article describes the design methodologies and tools, the implementation, and first results of the VHDL backend created for the RPython compiler.

  13. Efficient Architectures for Low Latency and High Throughput Trading Systems on the JVM

    Directory of Open Access Journals (Sweden)

    Alexandru LIXANDRU

    2013-01-01

    Full Text Available The motivation for our research starts from the common belief that the Java platform is not suitable for implementing ultra-high performance applications. Java is one of the most widely used software development platforms in the world, and it provides the means for rapid development of robust and complex applications that are easy to extend, ensuring short time-to-market of initial deliveries and throughout the lifetime of the system. The Java runtime environment, and especially the Java Virtual Machine, on top of which applications are executed, is the principal source of concern with regard to its suitability in the electronic trading environment, mainly because of its implicit memory management. In this paper, we intend to identify some of the most common measures that can be taken, both at the Java runtime environment level and at the application architecture level, which can help Java applications achieve ultra-high performance. We also propose two efficient architectures for exchange trading systems that allow for ultra-low latencies and high throughput.
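
    One common application-level measure alluded to above can be illustrated generically. The sketch below uses Python for consistency with the other examples in this collection, although the paper targets the JVM, and all names are invented: a preallocated ring buffer of pooled message objects lets the hot path run without allocating, giving the garbage collector nothing to do.

    ```python
    # Object pooling via a preallocated ring buffer (generic illustration).
    class Order:
        __slots__ = ("price", "qty")        # fixed layout, no per-field dict
        def __init__(self):
            self.price, self.qty = 0.0, 0

    class RingBuffer:
        def __init__(self, size):
            self.slots = [Order() for _ in range(size)]  # allocate once
            self.head = 0
        def claim(self):
            slot = self.slots[self.head]    # reuse a pooled object
            self.head = (self.head + 1) % len(self.slots)
            return slot

    ring = RingBuffer(1024)
    o = ring.claim()
    o.price, o.qty = 101.25, 500            # hot path mutates in place,
                                            # performing zero allocations
    ```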

  14. CMS High Level Trigger Timing Measurements

    International Nuclear Information System (INIS)

    Richardson, Clint

    2015-01-01

    The two-level trigger system employed by CMS consists of the Level 1 (L1) Trigger, which is implemented using custom-built electronics, and the High Level Trigger (HLT), a farm of commercial CPUs running a streamlined version of the offline CMS reconstruction software. The operational L1 output rate of 100 kHz, together with the number of CPUs in the HLT farm, imposes a fundamental constraint on the amount of time available for the HLT to process events. Exceeding this limit impacts the experiment's ability to collect data efficiently. Hence, there is a critical need to characterize the performance of the HLT farm as well as the algorithms run prior to start-up in order to ensure optimal data taking. Additional complications arise from the fact that the HLT farm consists of multiple generations of hardware and there can be subtleties in machine performance. We present our methods of measuring the timing performance of the CMS HLT, including the challenges of making such measurements. Results for the performance of various Intel Xeon architectures from 2009-2014 and different data taking scenarios are also presented. (paper)

  15. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    Energy Technology Data Exchange (ETDEWEB)

    N. Lybeck; B. Pham; M. Tawfik; J. B. Coble; R. M. Meyer; P. Ramuhalli; L. J. Bond

    2011-08-01

    There is an extensive body of knowledge and some commercial products available for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance are seeing increasing acceptance and deployment, and these activities provide the technological bases for expanding to add predictive/prognostics capabilities. In looking to deploy prognostics there are three key aspects of systems that are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostics architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostics algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated by using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system. Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and...

  16. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    International Nuclear Information System (INIS)

    Lybeck, N.; Pham, B.; Tawfik, M.; Coble, J.B.; Meyer, R.M.; Ramuhalli, P.; Bond, L.J.

    2011-01-01

    There is an extensive body of knowledge and some commercial products available for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance are seeing increasing acceptance and deployment, and these activities provide the technological bases for expanding to add predictive/prognostics capabilities. In looking to deploy prognostics there are three key aspects of systems that are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostics architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostics algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated by using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system. Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and...

  17. Analysis of facility needs level in architecture studio for students’ studio grades

    Science.gov (United States)

    Lubis, A. S.; Hamid, B.; Pane, I. F.; Marpaung, B. O. Y.

    2018-03-01

    Architects must be able to play an active role and contribute to the realization of a sustainable environment, and architectural education has inherited much of this responsibility. This research used qualitative and quantitative methods. The data were gathered by conducting (a) observation, (b) interviews, (c) documentation, (d) a literature study, and (e) a questionnaire. The gathered data were analyzed qualitatively to find out what equipment is needed in the learning process in the Architecture Studio, USU. Questionnaires and MS Excel were used for the quantitative analysis, and the tabulated quantitative data were correlated with the students' studio grades. The results of the research showed that the equipment with the highest level of need was (1) drawing tables, (2) a dedicated room for each student, (3) an Internet connection, (4) air conditioning, and (5) sufficient lighting.
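
    The correlation step lends itself to a one-function sketch. The toy numbers below are invented for illustration (the study itself tabulated questionnaire scores in MS Excel); the function computes a plain Pearson coefficient between facility-need scores and studio grades:

    ```python
    # Pearson correlation between questionnaire scores and grades (toy data).
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    need_scores = [4.5, 3.8, 4.9, 2.1, 3.0]   # e.g., drawing-table need
    grades = [78, 71, 85, 60, 66]             # matching studio grades
    print(f"r = {pearson(need_scores, grades):.2f}")
    ```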

  18. Particle In Cell Codes on Highly Parallel Architectures

    Science.gov (United States)

    Tableman, Adam

    2014-10-01

    We describe strategies and examples of Particle-In-Cell codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeleton codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.

  19. Architecture and Programming Models for High Performance Intensive Computation

    Science.gov (United States)

    2016-06-29

    ...commands from the data processing center to the sensors is needed. It has been noted that the ubiquity of mobile communication devices offers the... commands from a Processing Facility by way of mobile Relay Stations. The activity of each component of this model other than the Merge module can be... evaluation of the initial system implementation. Gao was also in charge of the development of the Fresh Breeze architecture backend on new many-core computers.

  20. FY1995 study of design methodology and environment of high-performance processor architectures; 1995 nendo koseino processor architecture sekkeiho to sekkei kankyo no kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The aim of our project is to develop high-performance processor architectures for both general-purpose and application-specific purposes. We also plan to develop basic software, such as compilers, and various design aid tools for those architectures. We are particularly interested in performance evaluation at the architecture design phase, design optimization, automatic generation of compilers from processor designs, and architecture design methodologies combined with circuit layout. We have investigated both microprocessor architectures and design methodologies/environments for the processors. Our goal is to establish design technologies for high-performance, low-power, low-cost and highly-reliable systems in the system-on-silicon era. We have proposed the PPRAM architecture for high-performance systems using DRAM and logic mixture technology, the Softcore processor architecture for special-purpose processors in embedded systems, and the Power-Pro architecture for low-power systems. We also developed design methodologies and design environments for the above architectures, as well as a new method for design verification of microprocessors. (NEDO)

  1. High Intensity Laser Power Beaming Architecture for Space and Terrestrial Missions

    Science.gov (United States)

    Nayfeh, Taysir; Fast, Brian; Raible, Daniel; Dinca, Dragos; Tollis, Nick; Jalics, Andrew

    2011-01-01

    High Intensity Laser Power Beaming (HILPB) has been developed as a technique to achieve Wireless Power Transmission (WPT) for both space and terrestrial applications. In this paper, the system architecture and hardware results for a terrestrial application of HILPB are presented. These results demonstrate continuous conversion of high intensity optical energy at near-IR wavelengths directly to electrical energy at output power levels as high as 6.24 W from a single-cell receiver with a 0.8 cm² aperture. These results are scalable, and may be realized by implementing receiver arraying and utilizing higher power source lasers. This type of system would enable long-range optical refueling of electric platforms, such as MUAVs, airships and robotic exploration missions, and would provide power to spacecraft platforms which may utilize it to drive electric means of propulsion.

  2. Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver Level Compliant System

    Directory of Open Access Journals (Sweden)

    Taoying Huang

    2009-01-01

    Full Text Available Lymphomas are the fifth most common cancer in the United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data.

  4. Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver level compliant System

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J.; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W.; Flowers, Christopher R.

    2009-01-01

    Lymphomas are the fifth most common cancer in the United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data. PMID:19492074

  5. Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R

    2009-04-03

    Lymphomas are the fifth most common cancer in the United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.

  6. Thread-level parallelization and optimization of NWChem for the Intel MIC architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); de Jong, Wibe [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-01-01

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65× better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6× better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.

  7. Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Williams, Samuel; Jong, Wibe de; Oliker, Leonid

    2014-10-10

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65× better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6× better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.

  8. High-Level Management of Communication Schedules in HPF-like Languages

    National Research Council Canada - National Science Library

    Benkner, Siegfried

    1997-01-01

    ..., providing the users with a high-level language interface for programming scalable parallel architectures and delegating to the compiler the task of producing an explicitly parallel message-passing program...

  9. Transforming the existing building stock to high performed energy efficient and experienced architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    The project Sustainable Renovation examines the challenge of the current and future architectural renovation of Danish suburbs which were designed in the period from 1945 to 1973. The research project takes its starting point in the perspectives of energy optimization and the fact that the building... from architectural heritage to energy efficiency and from architectural quality to sustainability. The first, second and third renovations are discussed from financial and sustainability viewpoints, as are the role of housing in relation to the public energy supply system and the relation between the levels of renovation...

  10. The Architecture of the CMS Level-1 Trigger Control and Monitoring System

    CERN Document Server

    Magrans de Abril, Marc; Hammer, Josef; Hartl, Christian; Xie, Zhen

    2011-01-01

    The architecture of the Level-1 Trigger Control and Monitoring system for the CMS experiment is presented. This system has been installed and commissioned on the trigger online computers and is currently used for data taking at the LHC. It is a medium-size distributed system that runs over 40 PCs and 200 processes that control about 4000 electronic boards. It has been designed to handle the trigger configuration and monitoring during data taking as well as all communications with the main run control of CMS. Furthermore, its design provides the software infrastructure for detailed testing of the trigger system during beam downtime.

  11. Immobilized high-level waste interim storage alternatives generation and analysis and decision report

    International Nuclear Information System (INIS)

    CALMUS, R.B.

    1999-01-01

    This report presents a study of alternative system architectures to provide onsite interim storage for the immobilized high-level waste produced by the Tank Waste Remediation System (TWRS) privatization vendor. It examines the contract and program changes that have occurred and evaluates their impacts on the baseline immobilized high-level waste (IHLW) interim storage strategy. In addition, this report documents the recommended initial interim storage architecture and implementation path forward.

  12. Blaze-DEMGPU: Modular high performance DEM framework for the GPU architecture

    Directory of Open Access Journals (Sweden)

    Nicolin Govender

    2016-01-01

    Full Text Available Blaze-DEMGPU is a modular GPU-based discrete element method (DEM) framework that supports polyhedral-shaped particles. Its high performance is attributed to the lightweight, Single Instruction Multiple Data (SIMD) execution model that the GPU architecture offers. Blaze-DEMGPU offers suitable algorithms to conduct DEM simulations on the GPU, and these algorithms can be extended and modified. Since a large number of scientific simulations are particle based, many of the algorithms and strategies for GPU implementation present in Blaze-DEMGPU can be applied to other fields. Blaze-DEMGPU will make it easier for new researchers to use high performance GPU computing as well as stimulate wider GPU research efforts by the DEM community.
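
    To indicate the kind of per-pair work a GPU thread performs in such a framework, here is a minimal CPU-side sketch (spheres with a linear spring-dashpot normal force; Blaze-DEMGPU itself handles polyhedra, and all constants here are invented):

    ```python
    # Sphere-sphere DEM contact: linear spring-dashpot normal force.
    import numpy as np

    k_n, gamma_n = 1e5, 5.0                  # stiffness, normal damping

    def contact_force(x_i, x_j, v_i, v_j, r_i, r_j):
        d = x_j - x_i
        dist = np.linalg.norm(d)
        overlap = r_i + r_j - dist
        if overlap <= 0.0:
            return np.zeros(3)               # no contact
        n = d / dist                         # unit normal, i -> j
        v_n = np.dot(v_j - v_i, n)           # normal relative velocity
        f = -k_n * overlap + gamma_n * v_n   # scalar force on i along n
        return f * n                         # negative => repulsion on i

    f = contact_force(np.zeros(3), np.array([0.0, 0.0, 0.18]),
                      np.zeros(3), np.zeros(3), 0.1, 0.1)
    print(f)                                 # pushes particle i away from j
    ```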

  13. Architecture of a Level 1 Track Trigger for the CMS Experiment

    CERN Document Server

    Heintz, Ulrich

    2010-01-01

    The luminosity goal for the Super-LHC is 10³⁵ cm⁻² s⁻¹. At this luminosity the number of proton-proton interactions in each beam crossing will be in the hundreds. This will stress many components of the CMS detector. One system that has to be upgraded is the trigger system. To keep the rate at which the level 1 trigger fires manageable, information from the tracker has to be integrated into the level 1 trigger. Current design proposals foresee tracking detectors that perform on-detector filtering to reject hits from low-momentum particles. In order to build a trigger system, the filtered hit data from different layers and sectors of the tracker will have to be transmitted off the detector and brought together in a logic processor that generates trigger tracks within the time window allowed by the level 1 trigger latency. I will describe a possible architecture for the off-detector logic that accomplishes this goal.

  14. Auxins differentially regulate root system architecture and cell cycle protein levels in maize seedlings.

    Science.gov (United States)

    Martínez-de la Cruz, Enrique; García-Ramírez, Elpidio; Vázquez-Ramos, Jorge M; Reyes de la Cruz, Homero; López-Bucio, José

    2015-03-15

    Maize (Zea mays) root system architecture has a complex organization, with adventitious and lateral roots determining its overall absorptive capacity. To generate basic information about the earlier stages of root development, we compared the post-embryonic growth of maize seedlings germinated in water-embedded cotton beds with that of plants obtained from embryonic axes cultivated in liquid medium. In addition, the effect of four different auxins, namely indole-3-acetic acid (IAA), 1-naphthaleneacetic acid (NAA), indole-3-butyric acid (IBA) and 2,4-dichlorophenoxyacetic acid (2,4-D) on root architecture and levels of the heat shock protein HSP101 and the cell cycle proteins CKS1, CYCA1 and CDKA1 were analyzed. Our data show that during the first days after germination, maize seedlings develop several root types with a simultaneous and/or continuous growth. The post-embryonic root development started with the formation of the primary root (PR) and seminal scutellar roots (SSR) and then continued with the formation of adventitious crown roots (CR), brace roots (BR) and lateral roots (LR). Auxins affected root architecture in a dose-response fashion; whereas NAA and IBA mostly stimulated crown root formation, 2,4-D showed a strong repressing effect on growth. The levels of HSP101, CKS1, CYCA1 and CDKA in root and leaf tissues were differentially affected by auxins and interestingly, HSP101 registered an auxin-inducible and root specific expression pattern. Taken together, our results show the timing of early branching patterns of maize and indicate that auxins regulate root development likely through modulation of the HSP101 and cell cycle proteins. Copyright © 2014 Elsevier GmbH. All rights reserved.

  15. Performance Evaluation at the Hardware Architecture Level and the Operating System Kernel Design Level.

    Science.gov (United States)

    1977-12-01

    ...program utilizing kernel semaphores for synchronization. The Hydra kernel instructions were sampled at random using the hardware monitor. The changes in... Evaluation at the operating system kernel design level is complicated by the fact that each operating system kernel has its own set of primitive functions, and comparisons across different operating systems are not possible...

  16. Removing high-level contaminants

    International Nuclear Information System (INIS)

    Wallace, Paula

    2013-01-01

    Full text: Using biomimicry, an Australian cleantech innovation making inroads into China's industrial sector offers multiple benefits to miners and processors in Australia. Stephen Shelley, the executive chairman of Creative Water Technology (CWT), was on hand at a recent trade show to explain how his Melbourne company has developed world-class techniques in zero liquid discharge and fractional crystallization of minerals to apply to a wide range of water treatment and recycling applications. "Most existing technologies operate with high energy distillation, filters or biological processing. CWT's appliance uses a low temperature, thermal distillation process known as adiabatic recovery to desalinate, dewater and/or recycle highly saline and highly contaminated waste water," said Shelley. The technology has been specifically designed to handle the high levels of contaminant that alternative technologies struggle to process, with proven water quality results for feed water samples with TDS levels over 300,000 ppm converted to clean water with less than 20 ppm. Comparatively, reverse osmosis struggles to process contaminant levels over 70,000 ppm effectively. "CWT is able to reclaim up to 97% clean usable water and up to 100% of the contaminants contained in the feed water," said Shelley, adding that soluble and insoluble contaminants are separately extracted and dried for sale or re-use. In industrial applications CWT has successfully processed feed water with contaminant levels over 650,000 mg/L without the use of chemicals. "The technology would be suitable for companies in oil exploration and production, mining, smelting, biofuels, textiles and the agricultural and food production sectors," said Shelley. When compared to a conventional desalination plant, the CWT system is able to capture the value in the brine that most plants discard, not only from the salt but the additional water it contains. "If you recover those two commodities... then you...

  17. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  18. High level white noise generator

    International Nuclear Information System (INIS)

    Borkowski, C.J.; Blalock, T.V.

    1979-01-01

    A wide band, stable, random noise source with a high and well-defined output power spectral density is provided which may be used for accurate calibration of Johnson Noise Power Thermometers (JNPT) and other applications requiring a stable, wide band, well-defined noise power spectral density. The noise source is based on the fact that the open-circuit thermal noise voltage of a feedback resistor, connecting the output to the input of a special inverting amplifier, is available at the amplifier output from an equivalent low output impedance caused by the feedback mechanism. The noise power spectral density level at the noise source output is equivalent to the density of the open-circuit thermal noise of a 100 ohm resistor at a temperature of approximately 64,000 Kelvins. The noise source has an output power spectral density that is flat to within 0.1% (0.0043 dB) in the frequency range from 1 kHz to 100 kHz, which brackets typical passbands of the signal-processing channels of JNPTs. Two embodiments, one of higher accuracy that is suitable for use as a standards instrument and another that is particularly adapted for ambient temperature operation, are illustrated in this application

  19. High level white noise generator

    Science.gov (United States)

    Borkowski, Casimer J.; Blalock, Theron V.

    1979-01-01

    A wide band, stable, random noise source with a high and well-defined output power spectral density is provided which may be used for accurate calibration of Johnson Noise Power Thermometers (JNPT) and other applications requiring a stable, wide band, well-defined noise power spectral density. The noise source is based on the fact that the open-circuit thermal noise voltage of a feedback resistor, connecting the output to the input of a special inverting amplifier, is available at the amplifier output from an equivalent low output impedance caused by the feedback mechanism. The noise power spectral density level at the noise source output is equivalent to the density of the open-circuit thermal noise of a 100 ohm resistor at a temperature of approximately 64,000 Kelvins. The noise source has an output power spectral density that is flat to within 0.1% (0.0043 dB) in the frequency range from 1 kHz to 100 kHz, which brackets typical passbands of the signal-processing channels of JNPTs. Two embodiments, one of higher accuracy that is suitable for use as a standards instrument and another that is particularly adapted for ambient temperature operation, are illustrated in this application.

  20. Hybrid POMDP-BDI Agent Architecture with Online Stochastic Planning and Desires with Changing Intensity Levels

    CSIR Research Space (South Africa)

    Rens, GB

    2015-01-01

    Full Text Available Partially observable Markov decision processes (POMDPs) and the belief-desire-intention (BDI) framework have several complementary strengths. The authors propose an agent architecture that combines these two previously unconnected approaches, which...

  1. Optimizing High Level Waste Disposal

    International Nuclear Information System (INIS)

    Dirk Gombert

    2005-01-01

    If society is ever to reap the potential benefits of nuclear energy, technologists must close the fuel cycle completely. A closed cycle equates to a continued supply of fuel and safe reactors, but also reliable and comprehensive closure of waste issues. High level waste (HLW) disposal in borosilicate glass (BSG) is based on 1970s-era evaluations. This host matrix is very adaptable to sequestering a wide variety of radionuclides found in raffinates from spent fuel reprocessing. However, it is now known that the current system is far from optimal for disposal of the diverse HLW streams, and proven alternatives are available to reduce costs by billions of dollars. The basis for HLW disposal should be reassessed to consider the extensive waste form and process technology research and development efforts which have been conducted by the United States Department of Energy (USDOE), international agencies and the private sector. Matching the waste form to the waste chemistry and using currently available technology could increase the waste content in waste forms to 50% or more and double processing rates. Optimization of the HLW disposal system would accelerate HLW disposition and increase repository capacity. This does not necessarily require developing new waste forms; the emphasis should be on qualifying existing matrices to demonstrate protection equal to or better than the baseline glass performance. Nor does this proposed effort necessarily require developing new technology concepts. The emphasis is on demonstrating existing technology that is clearly better (reliability, productivity, cost) than current technology, and justifying its use in future facilities or retrofitted facilities. Higher waste processing and disposal efficiency can be realized by performing the engineering analyses and trade-studies necessary to select the most efficient methods for processing the full spectrum of wastes across the nuclear complex. This paper will describe technologies being

  2. High Dynamic Range adaptive ΔΣ-based Focal Plane Array architecture

    KAUST Repository

    Yao, Shun

    2012-10-16

    In this paper, an Adaptive Delta-Sigma based architecture for High Dynamic Range (HDR) Focal Plane Arrays is presented. The noise shaping effect of the Delta-Sigma modulation in the low end, and the distortion noise induced in the high end of Photo-diode current were analyzed in detail. The proposed architecture can extend the DR for about 20N log2 dB at the high end of Photo-diode current with an N bit Up-Down counter. At the low end, it can compensate for the larger readout noise by employing Extended Counting. The Adaptive Delta-Sigma architecture employing a 4-bit Up-Down counter achieved about 160dB in the DR, with a Peak SNR (PSNR) of 80dB at the high end. Compared to the other HDR architectures, the Adaptive Delta-Sigma based architecture provides the widest DR with the best SNR performance in the extended range.
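
    As a sanity check added here (not part of the record), the quoted extension of about 20N log2 dB at the high end follows from the standard dynamic-range definition, assuming the N-bit up-down counter scales the effective full-scale photocurrent by 2^N:

      \Delta\mathrm{DR} = 20\log_{10}\!\left(2^{N}\right) = 20\,N\log_{10}2 \approx 6.02\,N\ \mathrm{dB}

    For the 4-bit counter quoted above this is roughly 24 dB of extension on top of the base modulator's range.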

  3. High-rise architecture in Ufa, Russia, based on crystallography canons

    Science.gov (United States)

    Narimanovich Sabitov, Ildar; Radikovna Kudasheva, Dilara; Yaroslavovich Vdovin, Denis

    2018-03-01

    The article considers the fundamental steps in the formation of stylistic tendencies in high-rise architecture, based on the studies of C. Willis and M. A. Korotich. Crystallographic shaping is singled out as a direction on the basis of M. A. Korotich's classification. This direction is examined in detail, and the main aspects of high-rise architectural form-making based on the forming principles of natural polycrystals are identified. The article describes the transformation of crystal forms into an architectural composition, analyzes constructive systems within the framework of the CTBUH (Council on Tall Buildings and Urban Habitat) classification, and picks out one of its types as the most suitable for use in crystal-like buildings. The last stage of the research applies the theoretical principles in an experimental project for a high-rise building in Ufa, with a description of the aspects of its contextual placement.

  4. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    Directory of Open Access Journals (Sweden)

    Chiao-Ling Lo

    2016-08-01

    Full Text Available Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view on potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally-selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169) with the least in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits.

  5. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    Science.gov (United States)

    Lo, Chiao-Ling; Lossie, Amy C; Liang, Tiebing; Liu, Yunlong; Xuei, Xiaoling; Lumeng, Lawrence; Zhou, Feng C; Muir, William M

    2016-08-01

    Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view on potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally-selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169) with the least in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits.

  6. High Dynamic Range adaptive ΔΣ-based Focal Plane Array architecture

    KAUST Repository

    Yao, Shun; Kavusi, Sam; Salama, Khaled N.

    2012-01-01

    In this paper, an Adaptive Delta-Sigma based architecture for High Dynamic Range (HDR) Focal Plane Arrays is presented. The noise shaping effect of the Delta-Sigma modulation in the low end, and the distortion noise induced in the high end of Photo

  7. A high level implementation and performance evaluation of level-I asynchronous cache on FPGA

    Directory of Open Access Journals (Sweden)

    Mansi Jhamb

    2017-07-01

    Full Text Available To bridge the ever-increasing performance gap between the processor and the main memory in a cost-effective manner, novel cache designs and implementations are indispensable. The cache is responsible for a major part (approx. 50%) of the energy consumption of processors. This paper presents a high level implementation of a micropipelined asynchronous architecture of an L1 cache. Because each cache memory implementation is a time-consuming and error-prone process, a synthesizable and configurable model proves to be of immense help, as it aids in generating a range of caches in a reproducible and quick fashion. The micropipelined cache, implemented using C-Elements, acts as a distributed message-passing system. The RTL cache model implemented in this paper, comprising data and instruction caches, has a wide array of configurable parameters. In addition to timing robustness, our implementation has high average cache throughput and low latency. The implemented architecture comprises two direct-mapped, write-through caches for data and instruction. The architecture is implemented in a Field Programmable Gate Array (FPGA) chip using Very High Speed Integrated Circuit Hardware Description Language (VHSIC HDL) along with advanced synthesis and place-and-route tools.
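
    The record does not define its C-Element building block; as a rough behavioural illustration (a Python sketch, not the authors' VHDL model), a Muller C-element drives its output to the inputs' common value and otherwise holds state, which is what makes it usable as a handshake element in micropipelines:

      # Behavioural model of a Muller C-element (illustrative only).
      class CElement:
          def __init__(self, init=0):
              self.out = init                  # retained state

          def step(self, a, b):
              if a == b:                       # inputs agree: follow them
                  self.out = a
              return self.out                  # inputs disagree: hold

      c = CElement()
      assert c.step(0, 1) == 0                 # disagree -> hold initial 0
      assert c.step(1, 1) == 1                 # agree    -> switch to 1
      assert c.step(1, 0) == 1                 # disagree -> hold 1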

  8. The development of an open architecture control system for CBN high speed grinding

    OpenAIRE

    Silva, E. Jannone da; Biffi, M.; Oliveira, J. F. G. de

    2004-01-01

    The aim of this project is the development of an open architecture control (OAC) system to be applied to the high speed grinding process using CBN tools. Among other features, the system will allow a new monitoring and control strategy, through the adoption of an open architecture CNC combined with multiple sensors, a PC and third-party software. The OAC system will be implemented in a high speed CBN grinding machine, which is being developed in a partnership between the University of São Paul...

  9. DReAM: Demand Response Architecture for Multi-level District Heating and Cooling Networks

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Saptarshi; Chandan, Vikas; Arya, Vijay; Kar, Koushik

    2017-05-19

    In this paper, we exploit the inherent hierarchy of heat exchangers in District Heating and Cooling (DHC) networks and propose DReAM, a novel Demand Response (DR) architecture for multi-level DHC networks. DReAM serves to economize system operation while still respecting the comfort requirements of individual consumers. Contrary to many present-day DR schemes that work at a consumer-level granularity, DReAM works at a level of hierarchy above buildings, i.e. substations that supply heat to a group of buildings. This improves the overall DR scalability and reduces the computational complexity. In the first step of the proposed approach, mathematical models of individual substations and their downstream networks are abstracted into appropriately constructed low-complexity structural forms. In the second step, this abstracted information is employed by the utility to perform DR optimization that determines the optimal heat inflow to individual substations rather than buildings, in order to achieve the targeted objectives across the network. We validate the proposed DReAM framework through experimental results under different scenarios on a test network.

  10. Lambda-Based Data Processing Architecture for Two-Level Load Forecasting in Residential Buildings

    Directory of Open Access Journals (Sweden)

    Gde Dharma Nugraha

    2018-03-01

    Full Text Available Building energy management systems (BEMS) have been intensively used to manage the electricity consumption of residential buildings more efficiently. However, the dynamic behavior of the occupants introduces uncertainty problems that affect the performance of the BEMS. To address this uncertainty problem, the BEMS may implement load forecasting as one of its modules. Load forecasting utilizes historical load data to compute model predictions for a specific time in the future. Recently, smart meters have been introduced to collect electricity consumption data. Smart meters not only capture aggregated data, but also individual data that is collected more frequently, close to real time. Processing both smart meter data types for load forecasting can enhance the performance of the BEMS when confronted with uncertainty problems. The collected smart meter data can be processed using a batch approach for short-term load forecasting, while the real-time smart meter data can be processed for very short-term load forecasting, which adjusts the short-term load forecast to the dynamic behavior of the occupants. This approach requires different data processing techniques for aggregated and individual smart meter data. In this paper, we propose a Lambda-based data processing architecture to process the different types of smart meter data and implement a two-level load forecasting approach, which combines short-term and very short-term load forecasting techniques on top of our proposed data processing architecture. The proposed approach is expected to enhance the BEMS to address the uncertainty problem and to process data in less time. Our experiment showed that the proposed approach improved the accuracy by 7% compared to a typical BEMS with only one load forecasting technique, and had the lowest computation time when processing the smart meter data.
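
    The two-level combination described above can be pictured independently of the paper's forecasting models; a toy sketch, assuming a naive batch day-ahead profile and a real-time correction from the latest reading (all function names and the alpha parameter are hypothetical):

      import numpy as np

      def short_term_forecast(history_daily):
          """Batch layer: day-ahead profile = mean of past daily profiles."""
          return np.mean(history_daily, axis=0)        # shape (24,), hourly kW

      def very_short_term_adjust(forecast, recent_actual, hour, alpha=0.5):
          """Speed layer: shift the remaining hours by the latest error."""
          adjusted = forecast.copy()
          adjusted[hour:] += alpha * (recent_actual - forecast[hour])
          return adjusted

      history = np.random.rand(30, 24) * 2.0           # 30 days of hourly loads
      base = short_term_forecast(history)
      print(very_short_term_adjust(base, recent_actual=1.8, hour=9)[9:12])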

  11. Modular Ultra-High Power Solar Array Architecture, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Deployable Space Systems, Inc. (DSS) will focus the proposed SBIR program on the development of a new highly-modularized and extremely-scalable solar array that...

  12. Modular Ultra-High Power Solar Array Architecture, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Deployable Space Systems (DSS) will focus the proposed Phase 2 SBIR program on the hardware-based development and TRL advance of a highly-modularized and...

  13. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    Science.gov (United States)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly recognized in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme algorithm is an alternative DWT implementation that has a lower computational complexity and reduced resource requirements. In the JPEG2000 standard two lifting scheme based filter banks are introduced: the 5/3 and the 9/7. In this paper a high throughput, two channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirements for each channel. The architecture was implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single channel modules. The high throughput and reduced resource requirements make this architecture the proper choice for real time applications such as Digital Cinema.
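
    For reference, the 5/3 filter bank named above has a compact lifting form; a minimal one-dimensional sketch of the reversible forward transform (even-length input assumed; the edge clamping below coincides with the standard's symmetric extension in that case):

      import numpy as np

      def dwt53_forward(x):
          """One level of the JPEG2000 5/3 lifting DWT on a 1-D signal."""
          x = np.asarray(x, dtype=np.int64)
          even, odd = x[0::2].copy(), x[1::2].copy()
          right = np.append(even[1:], even[-1])        # clamp right edge
          d = odd - ((even + right) >> 1)              # predict: high-pass
          left = np.insert(d[:-1], 0, d[0])            # clamp left edge
          s = even + ((left + d + 2) >> 2)             # update: low-pass
          return s, d

      s, d = dwt53_forward([10, 12, 14, 20, 30, 28, 26, 24])
      print(s, d)                                      # low- and high-band ints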

  14. A high throughput architecture for a low complexity soft-output demapping algorithm

    Science.gov (United States)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with the hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture based on the best low complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
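
    The record does not name the algorithm finally selected, so the sketch below shows only the common max-log approximation such demappers start from, with QPSK standing in for the paper's 16-QAM case (constellation, bit labels and the noise scaling convention are assumptions):

      import numpy as np

      def maxlog_llr(y, constellation, bits, noise_var):
          """Max-log soft demapper: one LLR per bit of received symbol y."""
          d2 = np.abs(y - constellation) ** 2          # squared distances
          llrs = []
          for b in range(bits.shape[1]):
              d0 = d2[bits[:, b] == 0].min()           # best bit=0 hypothesis
              d1 = d2[bits[:, b] == 1].min()           # best bit=1 hypothesis
              llrs.append((d1 - d0) / noise_var)
          return np.array(llrs)

      points = np.array([1+1j, -1+1j, 1-1j, -1-1j]) / np.sqrt(2)
      labels = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
      print(maxlog_llr(0.9 + 0.8j, points, labels, noise_var=0.1))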

  15. DSP Architecture Design Essentials

    CERN Document Server

    Marković, Dejan

    2012-01-01

    In DSP Architecture Design Essentials, authors Dejan Marković and Robert W. Brodersen cover a key subject for the successful realization of DSP algorithms for communications, multimedia, and healthcare applications. The book addresses the need for DSP architecture design that maps advanced DSP algorithms to hardware in the most power- and area-efficient way. The key feature of this text is a design methodology based on a high-level design model that leads to hardware implementation with minimum power and area. The methodology includes algorithm-level considerations such as automated word-length reduction and intrinsic data properties that can be leveraged to reduce hardware complexity. From a high-level data-flow graph model, an architecture exploration methodology based on linear programming is used to create an array of architectural solutions tailored to the underlying hardware technology. The book is supplemented with online material: bibliography, design examples, CAD tutorials and custom software.

  16. Structural architecture supports functional organization in the human aging brain at a regionwise and network level.

    Science.gov (United States)

    Zimmermann, Joelle; Ritter, Petra; Shen, Kelly; Rothmeier, Simon; Schirner, Michael; McIntosh, Anthony R

    2016-07-01

    Functional interactions in the brain are constrained by the underlying anatomical architecture, and structural and functional networks share network features such as modularity. Accordingly, age-related changes of structural connectivity (SC) may be paralleled by changes in functional connectivity (FC). We provide a detailed qualitative and quantitative characterization of the SC-FC coupling in human aging as inferred from resting-state blood oxygen-level dependent functional magnetic resonance imaging and diffusion-weighted imaging in a sample of 47 adults with an age range of 18-82. We revealed that SC and FC decrease with age across most parts of the brain and there is a distinct age-dependency of regionwise SC-FC coupling and network-level SC-FC relations. A specific pattern of SC-FC coupling predicts age more reliably than does regionwise SC or FC alone (r = 0.73, 95% CI = [0.7093, 0.8522]). Hence, our data propose that regionwise SC-FC coupling can be used to characterize brain changes in aging. Hum Brain Mapp 37:2645-2661, 2016. © 2016 Wiley Periodicals, Inc.
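
    One common way to define the regionwise coupling used above is to correlate each region's structural and functional connectivity profiles; the sketch below assumes that definition and toy random matrices, not the authors' exact pipeline:

      import numpy as np

      def regionwise_coupling(SC, FC):
          """Per-region correlation of SC and FC connectivity profiles."""
          n = SC.shape[0]
          coupling = np.empty(n)
          for i in range(n):
              mask = np.arange(n) != i                 # drop self-connection
              coupling[i] = np.corrcoef(SC[i, mask], FC[i, mask])[0, 1]
          return coupling

      rng = np.random.default_rng(0)
      SC = rng.random((90, 90)); SC = (SC + SC.T) / 2  # toy symmetric matrices
      FC = 0.6 * SC + 0.4 * rng.random((90, 90))
      print(regionwise_coupling(SC, (FC + FC.T) / 2)[:5])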

  17. Optimizing a High Energy Physics (HEP) Toolkit on Heterogeneous Architectures

    CERN Document Server

    Lindal, Yngve Sneen; Jarp, Sverre

    2011-01-01

    A desired trend within high energy physics is to increase particle accelerator luminosities, leading to production of more collision data and higher probabilities of finding interesting physics results. A central data analysis technique used to determine whether results are interesting or not is the maximum likelihood method, and the corresponding evaluation of the negative log-likelihood, which can be computationally expensive. As the amount of data grows, it is important to benefit from the parallelism in modern computers. This, in essence, means exploiting vector registers and all available cores on CPUs, as well as utilizing co-processors such as GPUs. This thesis describes the work done to optimize and parallelize a prototype of a central data analysis tool within the high energy physics community. The work consists of optimizations for multicore processors, GPUs, as well as a mechanism to balance the load between both CPUs and GPUs with the aim to fully exploit the power of modern commodity computers. W...
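
    The kernel being optimized is essentially a large data-parallel reduction; a minimal vectorized sketch for an i.i.d. Gaussian model (a stand-in for the likelihood functions the thesis targets, not its actual code):

      import numpy as np

      def negative_log_likelihood(data, mu, sigma):
          """NLL of i.i.d. Gaussian data: one big reduction, which is why it
          maps well onto vector units, multicore CPUs and GPUs."""
          z = (data - mu) / sigma
          return np.sum(0.5 * z**2 + np.log(sigma * np.sqrt(2 * np.pi)))

      data = np.random.normal(1.0, 2.0, size=1_000_000)
      print(negative_log_likelihood(data, mu=1.0, sigma=2.0))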

  18. A State-Based Modeling Approach for Efficient Performance Evaluation of Embedded System Architectures at Transaction Level

    Directory of Open Access Journals (Sweden)

    Anthony Barreteau

    2012-01-01

    Full Text Available Abstract models are necessary to assist system architects in the evaluation process of hardware/software architectures and to cope with the still increasing complexity of embedded systems. Efficient methods are required to create reliable models of system architectures and to allow early performance evaluation and fast exploration of the design space. In this paper, we present a specific transaction level modeling approach for performance evaluation of hardware/software architectures. This approach relies on a generic execution model that exhibits light modeling effort. Created models are used to evaluate by simulation the expected processing and memory resources for various architectures. The proposed execution model relies on a specific computation method defined to improve the simulation speed of transaction level models. The benefits of the proposed approach are highlighted through two case studies. The first case study is a didactic example illustrating the modeling approach. In this example, a simulation speed-up by a factor of 7.62 is achieved by using the proposed computation method. The second case study concerns the analysis of a communication receiver supporting part of the physical layer of the LTE protocol. In this case study, architecture exploration is conducted in order to improve the allocation of processing functions.

  19. Enhanced high-frequency microwave absorption of Fe3O4 architectures based on porous nanoflake

    DEFF Research Database (Denmark)

    Wang, Xiaoliang; Liu, Yanguo; Han, Hongyan

    2017-01-01

    Hierarchical Fe3O4 architectures assembled with porous nanoplates (p-Fe3O4) were synthesized. Due to the strong shape anisotropy of the nanoplates, the p-Fe3O4 exhibits increased microwave resonance towards high frequency range. The improved microwave absorption properties of the p-Fe3O4, includi...

  20. Enhanced high-frequency microwave absorption of Fe3O4 architectures based on porous nanoflake

    DEFF Research Database (Denmark)

    Wang, Xiaoliang; Liu, Yanguo; Han, Hongyan

    2017-01-01

    Hierarchical Fe3O4 architectures assembled with porous nanoplates (p-Fe3O4) were synthesized. Due to the strong shape anisotropy of the nanoplates, the p-Fe3O4 exhibits increased microwave resonance towards high frequency range. The improved microwave absorption properties of the p-Fe3O4, including...

  1. A design control structure for architectural firms in a highly complex and uncertain situation

    NARCIS (Netherlands)

    Schijlen, J.T.H.A.M.; Otter, den A.F.H.J.; Pels, H.J.

    2011-01-01

    A large architectural firm in a highly complex and uncertain production situation asked for an improvement of its existing “production control” system for design projects. To that end, a nine-month on-site research and design project was defined. The production control in the organization was based

  2. Research on high availability architecture of SQL and NoSQL

    Science.gov (United States)

    Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao

    2017-03-01

    With the advent of the era of big data, the amount and importance of data have increased dramatically. SQL databases continue to develop in performance and scalability, but more and more companies tend to adopt NoSQL databases, because NoSQL offers a simpler data model and stronger scaling capacity than SQL. Almost all database designers, for both SQL and NoSQL databases, aim to improve performance and ensure availability through a reasonable architecture that can reduce the effects of software failures and hardware failures, so that they can provide better experiences to their customers. In this paper, I mainly discuss the architectures of MySQL, MongoDB, and Redis, which are highly available and have been deployed in practical application environments, and design a hybrid architecture.

  3. How do architecture patterns and tactics interact? A model and annotation

    NARCIS (Netherlands)

    Harrison, Neil B.; Avgeriou, Paris

    2010-01-01

    Software architecture designers inevitably work with both architecture patterns and tactics. Architecture patterns describe the high-level structure and behavior of software systems as the solution to multiple system requirements, whereas tactics are design decisions that improve individual quality

  4. Connecting Architecture and Implementation

    Science.gov (United States)

    Buchgeher, Georg; Weinreich, Rainer

    Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.

  5. Architecture of a highly modular lighting simulation system

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk will discuss the challenges in designing a highly modular, parallel, heterogeneous rendering system, and their solutions. It will review how different lighting simulation algorithms can be combined to work together in a unified framework. We will discuss how the system can be instrumented to collect data about the algorithms' runtime performance. The talk includes an overview of how the collected data can be visualised in the computational domain of the lighting algorithms and used for visual debugging and analysis. About the speaker: Hristo Lesev has been working in the software industry for the last ten years. He has taken part in delivering a number of desktop and mobile applications. Computer graphics programming is Hristo's main passion, and he has experience writing extensions for 3D software like 3DS Max, Maya, Blender, Sketchup, and V-Ray. Since 2006 Hristo has taught Photorealistic Ray Tracing in the Faculty of Mathematics and Informatics at the Paisii Hilendarski...

  6. Maritime Domain Awareness Architecture Management Hub Strategy

    National Research Council Canada - National Science Library

    2008-01-01

    This document provides an initial high level strategy for carrying out the responsibilities of the national Maritime Domain Awareness Architecture Management Hub to deliver a standards based service...

  7. Multiple Word-Length High-Level Synthesis

    Directory of Open Access Journals (Sweden)

    Coussy Philippe

    2008-01-01

    Full Text Available Digital signal processing (DSP) applications are nowadays widely used and their complexity is ever growing. The design of dedicated hardware accelerators is thus still needed in system-on-chip and embedded systems. Realistic hardware implementation requires first converting the floating-point data of the initial specification into arbitrary-length (finite-precision) data while keeping an acceptable computation accuracy. Next, an optimized hardware architecture has to be designed. Considering a uniform bit-width specification allows the use of a traditional automated design flow; however, it leads to an oversized design. On the other hand, considering a non-uniform bit-width specification yields a smaller circuit but requires complex design tasks. In this paper, we propose an approach that inputs a C/C++ specification. The design flow, based on high-level synthesis (HLS) techniques, automatically generates a potentially pipelined RTL architecture described in VHDL. Both bit-accurate integer and fixed-point data types can be used in the input specification. The generated architecture uses components (operator, register, etc.) that have different widths. The design constraints are the clock period and the throughput of the application. The proposed approach considers data word-length information in all the synthesis steps by using dedicated algorithms. We show in this paper the effectiveness of the proposed approach through several design experiments in the DSP domain.
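
    The float-to-finite-precision step both records describe can be pictured with a toy fixed-point quantizer; word-length pairs like the one below are what a width-aware flow then propagates into operator and register widths (the parameters are illustrative):

      import numpy as np

      def to_fixed_point(x, int_bits, frac_bits):
          """Quantize to signed fixed-point with saturation and rounding."""
          scale = 2 ** frac_bits
          hi = 2 ** (int_bits + frac_bits - 1) - 1     # max signed code
          codes = np.clip(np.round(np.asarray(x) * scale), -(hi + 1), hi)
          return codes / scale

      signal = np.sin(np.linspace(0, np.pi, 8))
      print(to_fixed_point(signal, int_bits=2, frac_bits=5))   # 7-bit datapath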

  8. Multiple Word-Length High-Level Synthesis

    Directory of Open Access Journals (Sweden)

    Dominique Heller

    2008-09-01

    Full Text Available Digital signal processing (DSP) applications are nowadays widely used and their complexity is ever growing. The design of dedicated hardware accelerators is thus still needed in system-on-chip and embedded systems. Realistic hardware implementation requires first converting the floating-point data of the initial specification into arbitrary-length (finite-precision) data while keeping an acceptable computation accuracy. Next, an optimized hardware architecture has to be designed. Considering a uniform bit-width specification allows the use of a traditional automated design flow; however, it leads to an oversized design. On the other hand, considering a non-uniform bit-width specification yields a smaller circuit but requires complex design tasks. In this paper, we propose an approach that inputs a C/C++ specification. The design flow, based on high-level synthesis (HLS) techniques, automatically generates a potentially pipelined RTL architecture described in VHDL. Both bit-accurate integer and fixed-point data types can be used in the input specification. The generated architecture uses components (operator, register, etc.) that have different widths. The design constraints are the clock period and the throughput of the application. The proposed approach considers data word-length information in all the synthesis steps by using dedicated algorithms. We show in this paper the effectiveness of the proposed approach through several design experiments in the DSP domain.

  9. Architecture and control of a high current ion implanter system

    International Nuclear Information System (INIS)

    Bayer, E.H.; Paul, L.F.; Kranik, J.R.

    1979-01-01

    The design of an ion implant system for use in production requires that special attention be given to areas of design which normally are not emphasized on research or development type ion implanters. Manually operated, local controls are replaced by remote controls, automatic sequencing, and digital displays. For ease of maintenance and replication the individual components are designed as simply as possible and are contained in modules of separate identities, joined only by the beam line and electrical interconnections. A production environment also imposes requirements for the control of contamination and maintainability of clean room integrity. For that reason the major portion of the hardware is separated from the clean operator area and is housed in a maintenance core area. The controls of a production system should also be such that relatively unskilled technicians are able to operate the system with optimum repeatability and minimum operator intervention. An extensive interlock system is required. Most important, for use in production the ion implant system has to have a relatively high rate of throughput. Since the rate of throughput at a given dose is a function of beam current, pumpdown time and wafer handling capacity, design of components affecting these parameters has been optimized. Details of the system are given. (U.K.)

  10. Building Design Guidelines of Interior Architecture for Bio safety Levels of Biology Laboratories

    International Nuclear Information System (INIS)

    ElDib, A.A.

    2014-01-01

    This paper discusses the pivotal role of interior architecture as one of the scientific disciplines that completes the architectural sciences and on which the realization and development of facilities containing scientific research laboratories rely, in terms of planning and design, particularly facilities containing biological laboratories using radioactive materials. It further discusses the application of materials and raw materials appropriate to each laboratory discipline and the nature of its work, as well as the design techniques and requirements of interior architecture for a research laboratory for electronic circuits and their applications, including the making of its prototypes

  11. Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Bawej, Tomasz; et al.

    2014-01-01

    TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running on a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards, and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software attempts to wrap the low-level socket library to ease higher-level programming with an API based on an asynchronous event driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
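
    The CMS software itself is C++ and not reproduced in this record; as a generic illustration of the affinity idea on Linux, a receiving thread can be restricted to the cores local to the NIC's NUMA node so that socket buffers and the consuming code share local memory (the core IDs are hypothetical):

      import os

      def pin_to_numa_cores(cores):
          """Pin the calling process/thread to the given cores (Linux-only)."""
          os.sched_setaffinity(0, cores)               # 0 = the caller

      pin_to_numa_cores({0, 2, 4, 6})                  # e.g. cores of socket 0
      print("now restricted to cores:", sorted(os.sched_getaffinity(0)))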

  12. Mixed-Signal Architectures for High-Efficiency and Low-Distortion Digital Audio Processing and Power Amplification

    Directory of Open Access Journals (Sweden)

    Pierangelo Terreni

    2010-01-01

    Full Text Available The paper addresses the algorithmic and architectural design of digital input power audio amplifiers. A modelling platform, based on a meet-in-the-middle approach between top-down and bottom-up design strategies, allows a fast but still accurate exploration of the mixed-signal design space. Different amplifier architectures are configured and compared to find optimal trade-offs among different cost functions: low distortion, high efficiency, low circuit complexity and low sensitivity to parameter changes. A novel amplifier architecture is derived; its prototype implements digital processing IP macrocells (oversampler, interpolating filter, PWM cross-point deriver, noise shaper, multilevel PWM modulator, dead time compensator) on a single low-complexity FPGA, while off-chip components are used only for the power output stage (LC filter and power MOS bridge); no heatsink is required. The resulting digital input amplifier features a power efficiency higher than 90% and a total harmonic distortion down to 0.13% at power levels of tens of Watts. Discussions towards the full-silicon integration of the mixed-signal amplifier in embedded devices, using BCD technology and targeting power levels of a few Watts, are also reported.

  13. Architectural education and its role in teaching of art education in the second level of elementary schools

    OpenAIRE

    PRAŽANOVÁ, Markéta

    2011-01-01

    The goal of this work was to find reasons for including education in the field of architecture and environmental culture in teaching systems, mainly at the second level of elementary schools. I tried to apply these reasons to the topics of architecture training in art education lessons. The research covered nearly 250 pupils of the 8th and 9th classes of elementary schools in large and small towns and, last but not least, included discussions with the teachers of art education at e...

  14. High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures

    KAUST Repository

    Ltaief, Hatem

    2013-04-01

    This article presents a new high-performance bidiagonal reduction (BRD) for homogeneous multicore architectures. This article is an extension of the high-performance tridiagonal reduction implemented by the same authors [Luszczek et al., IPDPS 2011] to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, which is one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this article comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory; (2) a two-stage reduction approach that allows to cast most of the computation during the first stage (reduction to band form) into calls to Level 3 BLAS and reduces the memory traffic during the second stage (reduction from band to bidiagonal form) by using high-performance kernels optimized for cache reuse; (3) a data dependence translation layer that maps the general algorithm with column-major data layout into the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures that the data dependencies are not violated. A detailed analysis is provided to understand the critical impact of the tile size on the total execution time, which also corresponds to the matrix bandwidth size after the reduction of the first stage. The performance results show a significant improvement over currently established alternatives. The new high-performance BRD achieves up to a 30-fold speedup on a 16-core Intel Xeon machine with a 12000×12000 matrix size against the state-of-the-art open source and commercial numerical software packages, namely LAPACK, compiled with optimized and multithreaded BLAS from MKL as well as Intel MKL version 10.2. © 2013 ACM.
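
    What the tiled two-stage algorithm computes can be stated in a dozen lines: a reference one-stage Householder bidiagonalization (the paper's contribution is how to compute this efficiently, not what is computed; the dense sketch below is standard textbook material, not the authors' code):

      import numpy as np

      def householder(x):
          """Unit v such that (I - 2 v v^T) x = -sign(x0)*||x||*e1."""
          v = x.astype(float).copy()
          v[0] += np.copysign(np.linalg.norm(x), x[0])
          n = np.linalg.norm(v)
          return v / n if n > 0 else v

      def bidiagonalize(A):
          """Reduce A (m >= n) to upper bidiagonal form, on a copy."""
          B = A.astype(float).copy()
          m, n = B.shape
          for k in range(n):
              v = householder(B[k:, k])                # zero below diagonal
              B[k:, k:] -= 2.0 * np.outer(v, v @ B[k:, k:])
              if k < n - 2:
                  w = householder(B[k, k + 1:])        # zero right of superdiag
                  B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ w, w)
          return B

      A = np.random.rand(8, 5)
      svals = np.linalg.svd(bidiagonalize(A), compute_uv=False)
      print(np.allclose(svals, np.linalg.svd(A, compute_uv=False)))  # True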

  15. A high-level power model for MPSoC on FPGA

    NARCIS (Netherlands)

    Piscitelli, R.; Pimentel, A.D.

    2011-01-01

    This paper presents a framework for high-level power estimation of multiprocessor systems-on-chip (MPSoC) architectures on FPGA. The technique is based on abstract execution profiles, called event signatures, and it operates at a higher level of abstraction than, e.g., commonly-used instruction-set

  16. FPGA based compute nodes for high level triggering in PANDA

    International Nuclear Information System (INIS)

    Kuehn, W; Gilardi, C; Kirschner, D; Lang, J; Lange, S; Liu, M; Perez, T; Yang, S; Schmitt, L; Jin, D; Li, L; Liu, Z; Lu, Y; Wang, Q; Wei, S; Xu, H; Zhao, D; Korcyl, K; Otwinowski, J T; Salabura, P

    2008-01-01

    PANDA is a new universal detector for antiproton physics at the HESR facility at FAIR/GSI. The PANDA data acquisition system has to handle interaction rates of the order of 10^7/s and data rates of several 100 Gb/s. FPGA based compute nodes with multi-Gb/s bandwidth capability using the ATCA architecture are designed to handle tasks such as event building, feature extraction and high level trigger processing. Data connectivity is provided via optical links as well as multiple Gb Ethernet ports. The boards will support trigger algorithms such as pattern recognition for RICH detectors, EM shower analysis, fast tracking algorithms and global event characterization. Besides VHDL, high level C-like hardware description languages will be considered to implement the firmware

  17. QSPIN: A High Level Java API for Quantum Computing Experimentation

    Science.gov (United States)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling are provided to demonstrate current capabilities.
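
    QSPIN's own API is not documented in this record, so the sketch below only illustrates the QUBO form such solvers minimize, x^T Q x over binary x, by exhaustive search on a toy instance (real solvers anneal or use the heuristics listed above):

      import itertools
      import numpy as np

      def qubo_ground_state(Q):
          """Brute-force minimization of x^T Q x over binary vectors x."""
          n = Q.shape[0]
          best_x, best_e = None, np.inf
          for bits in itertools.product((0, 1), repeat=n):
              x = np.array(bits)
              e = x @ Q @ x
              if e < best_e:
                  best_x, best_e = x, e
          return best_x, best_e

      Q = np.array([[-1.0,  2.0,  0.0],     # toy upper-triangular QUBO
                    [ 0.0, -1.0,  2.0],
                    [ 0.0,  0.0, -1.0]])
      print(qubo_ground_state(Q))           # -> (array([1, 0, 1]), -2.0)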

  18. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Science.gov (United States)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed in various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
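
    The filtering idea reduces to predicate-based subscriptions evaluated close to the event sources; a generic sketch (class and field names are hypothetical, not the IRI system's API):

      class EventFilter:
          """Forward only the events some subscriber registered for."""
          def __init__(self):
              self.subs = []                           # (predicate, callback)

          def subscribe(self, predicate, callback):
              self.subs.append((predicate, callback))

          def publish(self, event):
              for predicate, callback in self.subs:
                  if predicate(event):                 # drop traffic early
                      callback(event)

      bus = EventFilter()
      bus.subscribe(lambda e: e["severity"] >= 3, print)
      bus.publish({"src": "node7", "severity": 1})     # filtered out
      bus.publish({"src": "node7", "severity": 4})     # delivered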

  19. High performance integer arithmetic circuit design on FPGA architecture, implementation and design automation

    CERN Document Server

    Palchaudhuri, Ayan

    2016-01-01

    This book describes the optimized implementations of several arithmetic datapath, controlpath and pseudorandom sequence generator circuits for realization of high performance arithmetic circuits targeted towards a specific family of the high-end Field Programmable Gate Arrays (FPGAs). It explores regular, modular, cascadable, and bit-sliced architectures of these circuits, by directly instantiating the target FPGA-specific primitives in the HDL. Every proposed architecture is justified with detailed mathematical analyses. Simultaneously, constrained placement of the circuit building blocks is performed, by placing the logically related hardware primitives in close proximity to one another by supplying relevant placement constraints in the Xilinx proprietary “User Constraints File”. The book covers the implementation of a GUI-based CAD tool named FlexiCore integrated with the Xilinx Integrated Software Environment (ISE) for design automation of platform-specific high-performance arithmetic circuits from us...

  20. ZnO@TiO2 Architectures for a High Efficiency Dye-Sensitized Solar Cell

    International Nuclear Information System (INIS)

    Lei, Jianfei; Liu, Shuli; Du, Kai; Lv, Shijie; Liu, Chaojie; Zhao, Lingzhi

    2015-01-01

    Graphical Abstract: A fast and improved electrochemical process was reported to fabricate ZnO@TiO2 heterogeneous architectures with enhanced power conversion efficiency (η = 2.16%). This paper focuses on achieving high dye loading via binding noncorrosive TiO2 nanocones to the outermost layer, while retaining the excellent electron transport behavior of the ZnO-based internal layer. -- Highlights: • Nanoconic TiO2 particles are loaded on the surface of aligned ZnO NWs successfully by a liquid phase deposition method. • ZnO@TiO2 architectures exhibit high efficiency of the DSSCs. -- Abstract: Instead of the spin coating step, an improved electrochemical process is reported in this paper to prepare ZnO seeded substrates and ZnO nanowires (ZnO NWs). Vertically aligned ZnO NWs are deposited electrochemically on the ZnO seeded substrates directly, forming backbones for loading nanoconic TiO2 particles, and hence ZnO@TiO2 heterogeneous architectures are obtained. When used as photoanode materials of the dye-sensitized solar cells (DSSCs), ZnO@TiO2 architectures exhibit enhanced power conversion efficiency (PCE) of the DSSCs. Results of the solar cell testing show that the addition of TiO2 shells to the ZnO NWs significantly increases the short circuit current (from 2.6 to 4.7 mA cm-2), open circuit voltage (from 0.53 V to 0.77 V) and fill factor (from 0.30 to 0.59). The PCE jumped from 0.4% for bare ZnO NWs to 2.16% for ZnO@TiO2 architectures under 100 mW cm-2 of AM 1.5G illumination
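
    The reported efficiency is consistent with the standard definition of power conversion efficiency; a quick check with the quoted figures (added here as a sanity check, not part of the record):

      \eta = \frac{J_{sc}\,V_{oc}\,FF}{P_{in}}
           = \frac{4.7\ \mathrm{mA\,cm^{-2}} \times 0.77\ \mathrm{V} \times 0.59}{100\ \mathrm{mW\,cm^{-2}}}
           \approx 2.1\%

    which matches the reported 2.16% to within rounding of the quoted inputs.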

  1. Non-Planar Nanotube and Wavy Architecture Based Ultra-High Performance Field Effect Transistors

    KAUST Repository

    Hanna, Amir

    2016-11-01

    This dissertation presents a unique concept for a device architecture named the nanotube (NT) architecture, which is capable of higher drive current compared to the Gate-All-Around Nanowire architecture when applied to heterostructure Tunnel Field Effect Transistors. Through the use of inner/outer core-shell gates, the heterostructure NT TFET leverages a physically larger tunneling area, thus achieving higher drive current (ION) and saving real estate by eliminating the arraying requirement. We discuss the physics of p-type (Silicon/Indium Arsenide) and n-type (Silicon/Germanium heterostructure) based TFETs. Numerical TCAD simulations have shown that NT TFETs have 5x and 1.6x higher normalized ION when compared to the GAA NW TFET for p- and n-type TFETs, respectively. This is due to the availability of a larger tunneling junction cross sectional area and lower Shockley-Reed-Hall recombination, while achieving sub-60 mV/dec performance for more than 5 orders of magnitude of drain current, thus enabling the scaling down of Vdd to 0.5 V. This dissertation also introduces a novel thin-film-transistor architecture named the Wavy Channel (WC) architecture, which allows for extending device width by integrating vertical fin-like substrate corrugations, giving rise to up to 50% larger device width without occupying extra chip area. The novel architecture shows 2x higher output drive current per unit chip area when compared to the conventional planar architecture. The current increase is attributed to both the extra device width and a 50% enhancement in field effect mobility due to electrostatic gating effects. Digital circuits are fabricated to demonstrate the potential of integrating WC TFT based circuits. WC inverters have shown 2× the peak-to-peak output voltage for the same input, and ~2× the operation frequency of the planar inverters for the same peak-to-peak output voltage. WC NAND circuits have shown 2× higher peak-to-peak output voltage, and 3× lower high-to-low propagation

  2. Dynamic Weather Routes Architecture Overview

    Science.gov (United States)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    Dynamic Weather Routes Architecture Overview presents the high-level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  3. The FAIR timing master: a discussion of performance requirements and architectures for a high-precision timing system

    International Nuclear Information System (INIS)

    Kreider, M.

    2012-01-01

    Production chains in a particle accelerator are complex structures with many inter-dependencies and multiple paths to consider. This ranges from system initialization and synchronization of numerous machines to interlock handling and appropriate contingency measures like beam dump scenarios. The FAIR facility will employ White-Rabbit, a time based system which delivers an instruction and a corresponding execution time to a machine. In order to meet the deadlines in any given production chain, instructions need to be sent out ahead of time. For this purpose, code execution and message delivery times need to be known in advance. The FAIR Timing Master needs to be reliably capable of satisfying these timing requirements as well as being fault tolerant. Event sequences of recorded production chains indicate that low reaction times to internal and external events and fast, parallel execution are required. This suggests a slim architecture, especially devised for this purpose. Using the thread model of an OS or other high level programs on a generic CPU would be counterproductive when trying to achieve deterministic processing times. This paper deals with the analysis of said requirements as well as a comparison of known processor and virtual machine architectures and the possibilities of parallelization in programmable hardware. In addition, existing proposals at GSI will be checked against these findings. The final goal will be to determine the best instruction set for modeling any given production chain and devising a suitable architecture to execute these models. (authors)

  4. Modelization of three-dimensional bone micro-architecture using Markov random fields with a multi-level clique system

    International Nuclear Information System (INIS)

    Lamotte, T.; Dinten, J.M.; Peyrin, F.

    2004-01-01

    Imaging trabecular bone micro-architecture in vivo non-invasively is still a challenging issue due to the complexity and small size of the structure. Thus, having a realistic 3D model of bone micro-architecture could be useful in image segmentation or image reconstruction. The goal of this work was to develop a 3D model of trabecular bone micro-architecture, which can be seen as a problem of texture synthesis. We investigated a statistical model based on 3D Markov Random Fields (MRFs). By the Hammersley-Clifford theorem, MRFs may equivalently be defined by an energy function on some set of cliques. In order to model 3D binary bone texture images (bone/background), we first used a particular well-known subclass of MRFs: the Ising model. The local energy function at some voxel depends on the closest neighbors of the voxel and on some parameters which control the shape and the proportion of bone. However, simulations yielded textures organized as connected clusters which, even when the parameters were varied, did not approach the complexity of bone micro-architecture. Then, we introduced a second level of cliques taking into account neighbors located at some distance d from the site s, and a new set of cliques allowing control of the plate thickness and spacing. The 3D bone texture images generated using the proposed model were analyzed using the usual bone-architecture quantification tools in order to relate the parameters of the MRF model to the characteristic parameters of bone micro-architecture (trabecular spacing, trabecular thickness, number of trabeculae...). (authors)
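
    The first-level Ising model the authors start from can be sampled with a few lines of Gibbs updates; the sketch below covers only the 6-neighbour clique system, not the paper's multi-level extension (beta and h are illustrative parameters):

      import numpy as np

      def gibbs_ising3d(shape=(8, 8, 8), beta=0.8, h=0.0, sweeps=20, seed=0):
          """Gibbs sampling of a 3-D Ising MRF; +1 = bone, -1 = background."""
          rng = np.random.default_rng(seed)
          s = rng.choice([-1, 1], size=shape)
          offsets = np.vstack((np.eye(3, dtype=int), -np.eye(3, dtype=int)))
          for _ in range(sweeps):
              for idx in np.ndindex(shape):
                  nb = sum(s[tuple((np.array(idx) + d) % shape)]
                           for d in offsets)           # periodic neighbours
                  p_up = 1.0 / (1.0 + np.exp(-2.0 * (beta * nb + h)))
                  s[idx] = 1 if rng.random() < p_up else -1
          return s

      texture = gibbs_ising3d()
      print("bone fraction:", (texture == 1).mean())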

  5. System architecture for high speed reconstruction in time-of-flight positron tomography

    International Nuclear Information System (INIS)

    Campagnolo, R.E.; Bouvier, A.; Chabanas, L.; Robert, C.

    1985-06-01

    A new generation of Time Of Flight (TOF) positron tomographs with high resolution and high count-rate capabilities is under development in our group. After a brief review of the data acquisition process and image reconstruction in a TOF PET camera, we present the data acquisition system, which achieves a data transfer rate of 0.8 million events per second in list mode, or more if necessary. We describe the reconstruction process, based on a five-stage pipeline architecture using custom-built processors. The expected performance with this architecture is a reconstruction time of six seconds per 256x256-pixel image of one million events; this time could be reduced to four seconds. We conclude with the future developments of the system

  6. A high-throughput readout architecture based on PCI-Express Gen3 and DirectGMA technology

    International Nuclear Information System (INIS)

    Rota, L.; Vogelgesang, M.; Perez, L.E. Ardila; Caselle, M.; Chilingaryan, S.; Dritschler, T.; Zilio, N.; Kopmann, A.; Balzer, M.; Weber, M.

    2016-01-01

    Modern physics experiments produce multi-GB/s data rates. Fast data links and high-performance computing stages are required for continuous data acquisition and processing. Because of their intrinsic parallelism and computational power, GPUs emerged as an ideal solution to process this data in high-performance computing applications. In this paper we present a high-throughput platform based on direct FPGA-GPU communication. The architecture consists of a Direct Memory Access (DMA) engine compatible with the Xilinx PCI-Express core, a Linux driver for register access, and high-level software to manage direct memory transfers using AMD's DirectGMA technology. Measurements with a Gen3 x8 link show a throughput of 6.4 GB/s for transfers to GPU memory and 6.6 GB/s to system memory. We also assess the possibility of using the architecture in low-latency systems: preliminary measurements show a round-trip latency as low as 1 μs for data transfers to system memory, while the additional latency introduced by OpenCL scheduling is the current limitation for GPU-based systems. Our implementation is suitable for real-time DAQ system applications ranging from photon science and medical imaging to High Energy Physics (HEP) systems
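
    For context, the quoted figures sit close to the practical ceiling of the link; a back-of-envelope check (our own arithmetic, not from the paper):

    ```python
    # PCIe Gen3 runs at 8 GT/s per lane with 128b/130b encoding, so an x8
    # link tops out near 7.88 GB/s of raw payload bandwidth.
    lanes = 8
    raw_rate = 8e9                      # transfers per second per lane
    encoding = 128 / 130                # 128b/130b line-code efficiency
    theoretical = lanes * raw_rate * encoding / 8 / 1e9   # GB/s
    print(f"theoretical link limit: {theoretical:.2f} GB/s")        # ~7.88
    print(f"GPU-memory path efficiency: {6.4 / theoretical:.0%}")   # ~81%
    print(f"system-memory path efficiency: {6.6 / theoretical:.0%}")# ~84%
    ```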

  7. Architectural and compiler techniques for energy reduction in high-performance microprocessors

    Science.gov (United States)

    Bellas, Nikolaos

    1999-11-01

    The microprocessor industry has started viewing power, along with area and performance, as a decisive design factor in today's microprocessors. The increasing cost of packaging and cooling systems poses stringent requirements on the maximum allowable power dissipation. Most of the research in recent years has focused on the circuit, gate, and register-transfer (RT) levels of the design. In this research, we focus on the software running on a microprocessor and we view the program as a power consumer. Our work concentrates on the role of the compiler in the construction of "power-efficient" code, and especially its interaction with the hardware so that unnecessary processor activity is saved. We propose techniques that use extra hardware features and compiler-driven code transformations that specifically target activity reduction in certain parts of the CPU which are known to be large power and energy consumers. Design for low power/energy at this level of abstraction entails larger energy gains than in the lower stages of the design hierarchy, in which the design team has already made the most important design commitments. The role of the compiler in generating code which exploits the processor organization is also fundamental in energy minimization. Hence, we propose a hardware/software co-design paradigm, and we show what code transformations are necessary by the compiler so that "wasted" power in a modern microprocessor can be trimmed. More specifically, we propose a technique that uses an additional mini cache located between the instruction cache (I-Cache) and the CPU core; the mini cache buffers instructions that are nested within loops and are continuously fetched from the I-Cache. This mechanism can create very substantial energy savings, since the I-Cache unit is one of the main power consumers in most of today's high-performance microprocessors. Results are reported for the SPEC95 benchmarks in the R-4400 processor which implements the MIPS2 instruction set
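
    The mini-cache idea can be illustrated with a toy fetch model; the buffer size, FIFO replacement, and counters below are our assumptions, not the paper's exact design.

    ```python
    from collections import OrderedDict

    # Toy model (our illustration): a small buffer between the I-Cache and the
    # core captures loop instructions so repeated fetches hit the low-power
    # buffer instead of the larger, power-hungry I-Cache.

    class LoopBuffer:
        def __init__(self, capacity=32):
            self.capacity = capacity
            self.lines = OrderedDict()          # address -> instruction
            self.hits = self.misses = 0

        def fetch(self, addr):
            if addr in self.lines:
                self.hits += 1                  # served by mini cache: cheap
            else:
                self.misses += 1                # served by I-Cache: expensive
                self.lines[addr] = f"insn@{addr}"
                if len(self.lines) > self.capacity:
                    self.lines.popitem(last=False)   # FIFO eviction

    buf = LoopBuffer()
    for _ in range(1000):                       # a hot 16-instruction loop
        for addr in range(0, 64, 4):
            buf.fetch(addr)
    print(buf.hits, buf.misses)                 # nearly all fetches avoid the I-Cache
    ```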

  8. Other-than-high-level waste

    International Nuclear Information System (INIS)

    Bray, G.R.

    1976-01-01

    The main emphasis of work on partitioning transuranic elements from waste has been on high-level liquid waste. But there are ''other-than-high-level wastes'' generated by the back end of the nuclear fuel cycle that are both large in volume and contaminated with significant quantities of transuranic elements. The combined volume of these other wastes is approximately 50 times that of the solidified high-level waste. These other wastes also contain up to 75% of the transuranic elements associated with waste generated by the back end of the fuel cycle. Therefore, any detailed evaluation of partitioning as a viable waste management option must address both high-level wastes and ''other-than-high-level wastes.''

  9. Architecture design of the application software for the low-level RF control system of the free-electron laser at Hamburg

    International Nuclear Information System (INIS)

    Geng, Z.; Ayvazyan, V.; Simrock, S.

    2012-01-01

    The superconducting linear accelerator of the Free-Electron Laser at Hamburg (FLASH) provides high-performance electron beams to the lasing system to generate synchrotron radiation for various users. The Low-Level RF (LLRF) system is used to maintain beam stability by stabilizing the RF field in the superconducting cavities with feedback and feed-forward algorithms. The LLRF applications are sets of software to perform RF system model identification, control parameter optimization, and exception detection and handling, so as to improve the precision, robustness and operability of the LLRF system. In order to implement the LLRF applications in hardware with multiple distributed processors, an optimized architecture of the software is required for good understandability, maintainability and extendibility. This paper presents the design of the LLRF application software architecture based on the software engineering approach for FLASH. (authors)

  10. L1Track: A fast Level 1 track trigger for the ATLAS high luminosity upgrade

    International Nuclear Information System (INIS)

    Cerri, Alessandro

    2016-01-01

    With the planned high-luminosity upgrade of the LHC (HL-LHC), the ATLAS detector will see its collision rate increase by approximately a factor of 5 with respect to the current LHC operation. The earliest hardware-based ATLAS trigger stage (“Level 1”) will have to provide a higher rejection factor in a more difficult environment: a new improved Level 1 trigger architecture is under study, which includes the possibility of extracting, with low latency and high accuracy, tracking information in time for the decision-taking process. In this context, the feasibility of potential approaches aimed at providing low-latency, high-quality tracking at Level 1 is discussed. - Highlights: • The HL-LHC requires highly performing event selection. • ATLAS is studying the implementation of tracking at the very first trigger level. • Low latency and high quality seem achievable with dedicated hardware and an adequate detector readout architecture.

  11. High Performance Motion-Planner Architecture for Hardware-In-the-Loop System Based on Position-Based-Admittance-Control

    Directory of Open Access Journals (Sweden)

    Francesco La Mura

    2018-02-01

    This article focuses on a Hardware-In-the-Loop application developed from the advanced energy field project LIFES50+. The aim is to replicate, inside a wind tunnel test facility, the combined effect of aerodynamic and hydrodynamic loads on a floating wind turbine model for offshore energy production, using a force-controlled robotic device emulating the floating substructure's behaviour. In addition to well-known real-time Hardware-In-the-Loop (HIL) issues, the application presented has stringent safety requirements on the HIL equipment and difficult-to-predict operating conditions, so extra computational effort has to be spent running specific safety algorithms while achieving the desired performance. To meet the project requirements, a high-performance software architecture based on Position-Based-Admittance-Control (PBAC) is presented, combining low-level motion interpolation techniques, efficient motion planning based on buffer management and time-base control, and advanced high-level safety algorithms, implemented in a rapid real-time control architecture.
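
    A minimal sketch of the position-based admittance control named above: a virtual mass-spring-damper turns the measured force error into a position command for an inner position loop. The gains, time step, and explicit-Euler discretization are our illustrative choices, not the project's tuned values.

    ```python
    # One explicit-Euler update of the virtual dynamics M*a + D*v + K*x = f_err;
    # x_new is the position command handed to the inner position controller.
    def admittance_step(x, v, f_err, M=10.0, D=50.0, K=200.0, dt=0.001):
        a = (f_err - D * v - K * x) / M
        v_new = v + a * dt
        x_new = x + v_new * dt
        return x_new, v_new

    x = v = 0.0
    for _ in range(1000):       # constant 10 N force error applied for 1 s
        x, v = admittance_step(x, v, f_err=10.0)
    print(f"commanded displacement: {x:.4f} m")  # settles toward f/K = 0.05 m
    ```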

  12. Design of a Load-Balancing Architecture For Parallel Firewalls

    National Research Council Canada - National Science Library

    Joyner, William

    1999-01-01

    .... This thesis proposes a load-balancing firewall architecture to meet the Navy's needs. It first conducts an architectural analysis of the problem and then presents a high-level system design as a solution...

  13. A scalable-low cost architecture for high gain beamforming antennas

    KAUST Repository

    Bakr, Omar

    2010-10-01

    Many state-of-the-art wireless systems, such as long distance mesh networks and high bandwidth networks using mm-wave frequencies, require high gain antennas to overcome adverse channel conditions. These networks could be greatly aided by adaptive beamforming antenna arrays, which can significantly simplify the installation and maintenance costs (e.g., by enabling automatic beam alignment). However, building large, low cost beamforming arrays is very complicated. In this paper, we examine the main challenges presented by large arrays, starting from electromagnetic and antenna design and proceeding to the signal processing and algorithms domain. We propose 3-dimensional antenna structures and hybrid RF/digital radio architectures that can significantly reduce the complexity and improve the power efficiency of adaptive array systems. We also present signal processing techniques based on adaptive filtering methods that enhance the robustness of these architectures. Finally, we present computationally efficient vector quantization techniques that significantly improve the interference cancellation capabilities of analog beamforming architectures. © 2010 IEEE.

  14. A scalable-low cost architecture for high gain beamforming antennas

    KAUST Repository

    Bakr, Omar; Johnson, Mark; Jungdong Park,; Adabi, Ehsan; Jones, Kevin; Niknejad, Ali

    2010-01-01

    Many state-of-the-art wireless systems, such as long distance mesh networks and high bandwidth networks using mm-wave frequencies, require high gain antennas to overcome adverse channel conditions. These networks could be greatly aided by adaptive beamforming antenna arrays, which can significantly simplify the installation and maintenance costs (e.g., by enabling automatic beam alignment). However, building large, low cost beamforming arrays is very complicated. In this paper, we examine the main challenges presented by large arrays, starting from electromagnetic and antenna design and proceeding to the signal processing and algorithms domain. We propose 3-dimensional antenna structures and hybrid RF/digital radio architectures that can significantly reduce the complexity and improve the power efficiency of adaptive array systems. We also present signal processing techniques based on adaptive filtering methods that enhance the robustness of these architectures. Finally, we present computationally efficient vector quantization techniques that significantly improve the interference cancellation capabilities of analog beamforming architectures. © 2010 IEEE.

  15. Control Architecture for Intentional Island Operation in Distribution Network with High Penetration of Distributed Generation

    DEFF Research Database (Denmark)

    Chen, Yu

    The possibility of utilizing distributed generation (DG) units to maintain the security of the power supply under emergency situations has been of great interest for study. One proposal is intentional island operation. This PhD project is intended to develop a control architecture for island operation in distribution systems with a high amount of DGs. As part of the NextGen project, it focuses on system modeling and simulation regarding the control architecture and recommends the development of a communication and information exchange system based on IEC 61850. The thesis starts with the background of the project; subsequently, the feasibility of applying an Artificial Neural Network (ANN) to ICA is studied, in order to improve the computation efficiency of ISR calculation. Finally, the integration of ICA into Dynamic Security Assessment (DSA), the ICA implementation, and the further development of ICA are discussed.

  16. Implementation of high-speed–low-power adaptive finite impulse response filter with novel architecture

    Directory of Open Access Journals (Sweden)

    Manish Jaiswal

    2015-03-01

    An energy-efficient, high-speed adaptive finite impulse response filter with a novel architecture is developed. Synthesis results for the novel architecture on different complementary metal–oxide semiconductor (CMOS) families are presented. Analysis is performed using Artix-7, Spartan-6 and Virtex-4 for the most popular adaptive least-mean-square filter for different orders such as N = 8, 16, 32. The presented work is done using MATLAB (2013b) and Xilinx (14.2). From the synthesis results, it can be found that CMOS (28 nm) achieves the lowest power and critical path delay compared to the others, and thus proves its efficiency in terms of energy. Different parameters are considered, such as look-up tables and input–output blocks, along with their optimised results.
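
    For reference, the adaptive least-mean-square update such filters implement is the standard LMS recursion; a minimal NumPy sketch for the N = 8 case follows (the test signal, unknown system, and step size are invented for illustration).

    ```python
    import numpy as np

    # Standard LMS recursion: filter, compare against the desired response,
    # and nudge the weights along the gradient of the squared error.
    def lms(x, d, N=8, mu=0.01):
        w = np.zeros(N)                     # adaptive filter weights
        e = np.zeros(len(x))                # error signal
        for n in range(N - 1, len(x)):
            u = x[n - N + 1:n + 1][::-1]    # most recent N samples, newest first
            y = w @ u                       # filter output
            e[n] = d[n] - y                 # error against desired response
            w += mu * e[n] * u              # weight update
        return w, e

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    h = np.array([0.5, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])  # unknown system
    d = np.convolve(x, h)[:len(x)]          # desired signal: x filtered by h
    w, _ = lms(x, d)
    print(np.round(w, 2))                   # weights converge toward h
    ```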

  17. SIGWX Charts - High Level Significant Weather

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — High level significant weather (SIGWX) forecasts are provided for the en-route portion of international flights. NOAA's National Weather Service Aviation Center...

  18. Advances in quantum control of three-level superconducting circuit architectures

    Energy Technology Data Exchange (ETDEWEB)

    Falci, G.; Paladino, E. [Dipartimento di Fisica e Astronomia, Universita di Catania (Italy); CNR-IMM UOS Universita (MATIS), Consiglio Nazionale delle Ricerche, Catania (Italy); INFN, Sezione di Catania (Italy); Di Stefano, P.G. [Dipartimento di Fisica e Astronomia, Universita di Catania (Italy); Centre for Theoretical Atomic, Molecular and Optical Physics, School of Mathematics and Physics, Queen's University Belfast (United Kingdom); Ridolfo, A.; D'Arrigo, A. [Dipartimento di Fisica e Astronomia, Universita di Catania (Italy); Paraoanu, G.S. [Low Temperature Laboratory, Department of Applied Physics, Aalto University School of Science (Finland)

    2017-06-15

    Advanced control in the Lambda (Λ) scheme of a solid-state architecture of artificial atoms and quantized modes would allow the translation to the solid-state realm of a whole class of phenomena from quantum optics, thus exploiting new physics emerging in larger integrated quantum networks and for stronger couplings. However, controlling solid-state devices has constraints coming from selection rules, due to symmetries which on the other hand yield protection from decoherence, and from design issues, for instance that coupling to microwave cavities is not directly switchable. We present two new schemes for the Λ-STIRAP control problem with the constraint of one or two classical driving fields being always on. We show how these protocols are converted to apply to circuit-QED architectures. We finally illustrate an application to coherent spectroscopy of the so-called ultrastrong atom-cavity coupling regime. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  19. Making sense of mobile health data: an open architecture to improve individual- and population-level health.

    Science.gov (United States)

    Chen, Connie; Haddad, David; Selsky, Joshua; Hoffman, Julia E; Kravitz, Richard L; Estrin, Deborah E; Sim, Ida

    2012-08-09

    Mobile phones and devices, with their constant presence, data connectivity, and multiple intrinsic sensors, can support around-the-clock chronic disease prevention and management that is integrated with daily life. These mobile health (mHealth) devices can produce tremendous amounts of location-rich, real-time, high-frequency data. Unfortunately, these data are often full of bias, noise, variability, and gaps. Robust tools and techniques have not yet been developed to make mHealth data more meaningful to patients and clinicians. To be most useful, health data should be sharable across multiple mHealth applications and connected to electronic health records. The lack of data sharing and dearth of tools and techniques for making sense of health data are critical bottlenecks limiting the impact of mHealth to improve health outcomes. We describe Open mHealth, a nonprofit organization that is building an open software architecture to address these data sharing and "sense-making" bottlenecks. Our architecture consists of open source software modules with well-defined interfaces using a minimal set of common metadata. An initial set of modules, called InfoVis, has been developed for data analysis and visualization. A second set of modules, our Personal Evidence Architecture, will support scientific inferences from mHealth data. These Personal Evidence Architecture modules will include standardized, validated clinical measures to support novel evaluation methods, such as n-of-1 studies. All of Open mHealth's modules are designed to be reusable across multiple applications, disease conditions, and user populations to maximize impact and flexibility. We are also building an open community of developers and health innovators, modeled after the open approach taken in the initial growth of the Internet, to foster meaningful cross-disciplinary collaboration around new tools and techniques. An open mHealth community and architecture will catalyze increased mHealth efficiency

  20. Recovering method for high level radioactive material

    International Nuclear Information System (INIS)

    Fukui, Toshiki

    1998-01-01

    Offgas filters, such as those of nuclear fuel reprocessing facilities and waste control facilities, are burnt, the burnt ash is melted by heating, and the molten ash is then brought into contact with a molten metal having a low boiling point to transfer the high-level radioactive materials in the molten ash to the molten metal. Then, only the molten metal is evaporated and removed by drying, and the residual high-level radioactive materials are recovered. In this method, the high-level radioactive materials in the molten ash transfer to the molten metal and are separated owing to the difference in distribution ratio between the molten ash and the molten metal. Subsequently, the molten metal to which the high-level radioactive materials have transferred is heated to a temperature higher than its boiling point, so that only the molten metal evaporates and is removed by drying, and the residual high-level radioactive materials are recovered easily. The molten ash from which the high-level radioactive material has been removed can then be discarded as ordinary industrial waste. (T.M.)

  1. Wavy channel thin film transistor architecture for area efficient, high performance and low power displays

    KAUST Repository

    Hanna, Amir

    2013-12-23

    We demonstrate a new thin film transistor (TFT) architecture that allows expansion of the device width using continuous fin features - termed wavy channel (WC) architecture. This architecture allows expansion of transistor width in a direction perpendicular to the substrate, thus not consuming extra chip area and achieving area efficiency. The devices show a 13% increase in device width, resulting in up to a 2.5× increase in 'ON' current for the WCTFT when compared to planar devices consuming the same chip area, while using atomic layer deposition based zinc oxide (ZnO) as the channel material. The WCTFT devices also maintain a similar 'OFF' current value, ~100 pA, when compared to planar devices, thus not compromising power consumption for performance, as usually happens with larger-width devices. This work offers an interesting opportunity to use WCTFTs as backplane circuitry for large-area, high-resolution display applications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Real-time TPC analysis with the ALICE High-Level Trigger

    International Nuclear Information System (INIS)

    Lindenstruth, V.; Loizides, C.; Roehrich, D.; Skaali, B.; Steinbeck, T.; Stock, R.; Tilsner, H.; Ullaland, K.; Vestboe, A.; Vik, T.

    2004-01-01

    The ALICE High-Level Trigger processes data online, either to select interesting (sub-)events or to compress data efficiently by modeling techniques. Focusing on the main data source, the Time Projection Chamber, the architecture of the system and the current state of the tracking and compression methods are outlined

  3. A high-level power model for MPSoC on FPGA

    NARCIS (Netherlands)

    Piscitelli, R.; Pimentel, A.D.

    2012-01-01

    This paper presents a framework for high-level power estimation of multiprocessor system-on-chip (MPSoC) architectures on FPGA. The technique is based on abstract execution profiles, called event signatures. As a result, it is capable of achieving good evaluation performance, thereby making the approach well suited to early design-space exploration

  4. Disposal of high level and intermediate level radioactive wastes

    International Nuclear Information System (INIS)

    Flowers, R.H.

    1991-01-01

    The waste products from the nuclear industry are relatively small in volume. Apart from a few minor gaseous and liquid waste streams, containing readily dispersible elements of low radiotoxicity, all these products are processed into stable solid packages for disposal in underground repositories. Because the volumes are small, and because radioactive wastes are latecomers on the industrial scene, a whole new industry with a worldwide technological infrastructure has grown up alongside the nuclear power industry to carry out waste processing and disposal to very high standards. Some of the technical approaches used, and the regulatory controls which have been developed, will undoubtedly find application in the future to the management of non-radioactive toxic wastes. In the repository design outlined, even high-level radioactive wastes and spent fuels would be contained without significant radiation dose rates to the public. Water-pathway dose rates are likely to be lowest for vitrified high-level wastes, with spent PWR fuel and intermediate-level wastes being somewhat higher. (author)

  5. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    Science.gov (United States)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work we present a novel convolution computing architecture based on metal-oxide resistive random access memory (RRAM) to process image data stored in RRAM arrays. The proposed image-storage architecture achieves better speed and device-consumption efficiency than the previous kernel-storage architecture. We further improve the architecture for high-accuracy, low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel-storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a speed boost of more than 67 times; 3) 71.4% energy saving.
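
    The kernel-to-conductance mapping can be sketched as follows; this is an idealized software model of our own (binary weights become LRS/HRS conductances and each output is a Kirchhoff current sum), with all values illustrative rather than taken from the paper.

    ```python
    import numpy as np

    # Idealized model: kernel pixels map to binary RRAM conductances, inputs
    # are applied as read voltages, and each output current implements one
    # multiply-accumulate of the convolution.
    G_LRS, G_HRS = 1e-4, 1e-7          # low/high resistance state conductances (S)

    def rram_conv2d(image, kernel, v_read=0.2):
        kh, kw = kernel.shape
        g = np.where(kernel > 0, G_LRS, G_HRS)     # binary weight -> conductance
        out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + kh, j:j + kw]
                out[i, j] = np.sum(patch * v_read * g)   # Kirchhoff current sum
        return out

    img = (np.random.rand(28, 28) > 0.5).astype(float)
    kernel = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
    print(rram_conv2d(img, kernel).shape)   # (26, 26) feature map of currents
    ```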

  6. Current high-level waste solidification technology

    International Nuclear Information System (INIS)

    Bonner, W.F.; Ross, W.A.

    1976-01-01

    Technology has been developed in the U.S. and abroad for solidification of high-level waste from nuclear power production. Several processes have been demonstrated with actual radioactive waste and are now being prepared for use in the commercial nuclear industry. Conversion of the waste to a glass form is favored because of its high degree of nondispersibility and safety

  7. High-Level Application Framework for LCLS

    Energy Technology Data Exchange (ETDEWEB)

    Chu, P; Chevtsov, S.; Fairley, D.; Larrieu, C.; Rock, J.; Rogind, D.; White, G.; Zalazny, M.; /SLAC

    2008-04-22

    A framework for high level accelerator application software is being developed for the Linac Coherent Light Source (LCLS). The framework is based on plug-in technology developed by an open source project, Eclipse. Many existing functionalities provided by Eclipse are available to high-level applications written within this framework. The framework also contains static data storage configuration and dynamic data connectivity. Because the framework is Eclipse-based, it is highly compatible with any other Eclipse plug-ins. The entire infrastructure of the software framework will be presented. Planned applications and plug-ins based on the framework are also presented.

  8. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    Science.gov (United States)

    2015-06-01

    The report describes confabulation-based real-time anomaly detection for wide-area surveillance using a heterogeneous high-performance computing architecture built from processors including graphics processor units (GPUs) and Intel Xeon Phi processors. Experimental results showed significant speedups, which can enable real-time anomaly detection for wide-area surveillance.

  9. Nest-like LiFePO4/C architectures for high performance lithium ion batteries

    International Nuclear Information System (INIS)

    Deng Honggui; Jin Shuangling; Zhan Liang; Qiao Wenming; Ling Licheng

    2012-01-01

    Highlights: ► Nest-like LiFePO₄/C architectures (nest-like LPCs) were synthesized by a solvothermal method. ► The microstructures of nest-like LPCs, constructed from many nanosheets, are very stable. ► The unique structure gives the nest-like LPC electrode high rate performance. ► The reversible capacity of the nest-like LPC electrode is as high as 120 mAh g⁻¹ at 10 C. - Abstract: A novel kind of micro-sized nest-like LiFePO₄/C architecture was synthesized by a solvothermal method using an inexpensive and stable Fe³⁺ salt as the iron source and ethylene glycol as the medium. A layer of carbon could be coated directly on the surface of the LiFePO₄ crystals, and the unique nest-like structure gives the cathode material high reversible capacity, excellent cycling stability and high rate performance. The reversible capacity is maintained at 159 mAh g⁻¹ at 0.1 C and 120 mAh g⁻¹ at 10 C.

  10. A high efficiency readout architecture for a large matrix of pixels

    International Nuclear Information System (INIS)

    Gabrielli, A; Giorgi, F; Villa, M

    2010-01-01

    In this work we present a fast readout architecture for silicon pixel matrix sensors that has been designed to sustain very high rates, above 1 MHz/mm² for matrices greater than 80k pixels. This logic can be implemented within MAPS (Monolithic Active Pixel Sensors), a kind of high-resolution sensor that integrates the sensor matrix and the CMOS readout logic on the same bulk, but it can also be exploited with other technologies. The proposed architecture is based on three main concepts. First, the readout of the hits is performed by activating one column at a time; all the fired pixels on the active column are read, sparsified and reset in parallel in one clock cycle. This implies the use of global signals across the sensor matrix. The consequent reduction of metal interconnections improves the active area while maintaining a high granularity (down to a pixel pitch of 40 μm). Secondly, the activation for readout takes place only for those columns overlapping with a fired area, thus reducing the sweeping time of the whole matrix and the pixel dead-time. Third, the sparsification (x-y address labeling of the hits) is performed at a lower granularity than single pixels, by addressing vertical zones of 8 pixels each. The fine-grain Y resolution is achieved by appending the zone pattern to the zone address of a hit. We then show the benefits of this technique in the presence of clusters. We describe this architecture from a schematic point of view, then present the efficiency results obtained by VHDL simulations.
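
    A small behavioural model of the column-at-a-time sweep with 8-pixel zone sparsification may help fix ideas; this is our sketch in Python rather than the VHDL used by the authors, with all names invented.

    ```python
    import numpy as np

    # Behavioural sketch: fired columns are scanned one per clock; hits within
    # a column are grouped into 8-pixel vertical zones, each reported as
    # (column, zone address, 8-bit hit pattern), then the column is reset.
    def read_matrix(fired):                     # fired: rows x cols boolean array
        rows, cols = fired.shape
        events, clocks = [], 0
        for c in np.flatnonzero(fired.any(axis=0)):   # only columns with hits
            clocks += 1                               # one clock per active column
            col = fired[:, c]
            for z in range(rows // 8):
                pattern = col[8 * z:8 * (z + 1)]
                if pattern.any():                     # zone-level sparsification
                    events.append((c, z, pattern.astype(int).tolist()))
            fired[:, c] = False                       # pixels reset after readout
        return events, clocks

    m = np.zeros((32, 32), dtype=bool)
    m[3, 5] = m[4, 5] = m[20, 17] = True              # a small cluster + one hit
    ev, clocks = read_matrix(m)
    print(ev, "in", clocks, "clock cycles")
    ```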

  11. High-Resolution Spore Coat Architecture and Assembly of Bacillus Spores

    Energy Technology Data Exchange (ETDEWEB)

    Malkin, A J; Elhadj, S; Plomp, M

    2011-03-14

    Elucidating the molecular architecture of bacterial and cellular surfaces and its structural dynamics is essential to understanding mechanisms of pathogenesis, immune response, physicochemical interactions and environmental resistance, and provides the means for identifying spore formulation and processing attributes. I will discuss the application of in vitro atomic force microscopy (AFM) for studies of high-resolution coat architecture and assembly of several Bacillus spore species. We have demonstrated that bacterial spore coat structures are determined phylogenetically and by the growth medium. We have proposed that the strikingly different species-dependent coat structures of bacterial spore species are a consequence of sporulation-media-dependent nucleation and crystallization mechanisms that regulate the assembly of the outer spore coat. Spore coat layers were found to exhibit screw dislocations and two-dimensional nuclei typically observed on inorganic and macromolecular crystals. This presents the first case of non-mineral crystal growth patterns being revealed for a biological organism, which provides an unexpected example of nature exploiting fundamental materials science mechanisms for the morphogenetic control of biological ultrastructures. We have discovered and validated distinctive, formulation-specific high-resolution structural spore coat and dimensional signatures of B. anthracis spores (Sterne strain) grown in different formulation conditions. We further demonstrated that measurement of the dimensional characteristics of B. anthracis spores provides formulation classification and sample matching with high sensitivity and specificity. I will present data on the development of an AFM-based immunolabeling technique for the proteomic mapping of macromolecular structures on B. anthracis surfaces. These studies demonstrate that AFM can probe microbial surface architecture, environmental dynamics and the life cycle of bacterial and cellular systems at near

  12. A high efficiency readout architecture for a large matrix of pixels.

    Science.gov (United States)

    Gabrielli, A.; Giorgi, F.; Villa, M.

    2010-07-01

    In this work we present a fast readout architecture for silicon pixel matrix sensors that has been designed to sustain very high rates, above 1 MHz/mm² for matrices greater than 80k pixels. This logic can be implemented within MAPS (Monolithic Active Pixel Sensors), a kind of high-resolution sensor that integrates the sensor matrix and the CMOS readout logic on the same bulk, but it can also be exploited with other technologies. The proposed architecture is based on three main concepts. First, the readout of the hits is performed by activating one column at a time; all the fired pixels on the active column are read, sparsified and reset in parallel in one clock cycle. This implies the use of global signals across the sensor matrix. The consequent reduction of metal interconnections improves the active area while maintaining a high granularity (down to a pixel pitch of 40 μm). Secondly, the activation for readout takes place only for those columns overlapping with a fired area, thus reducing the sweeping time of the whole matrix and the pixel dead-time. Third, the sparsification (x-y address labeling of the hits) is performed at a lower granularity than single pixels, by addressing vertical zones of 8 pixels each. The fine-grain Y resolution is achieved by appending the zone pattern to the zone address of a hit. We then show the benefits of this technique in the presence of clusters. We describe this architecture from a schematic point of view, then present the efficiency results obtained by VHDL simulations.

  13. Power-efficient computer architectures recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  14. High-performance computing on the Intel Xeon Phi how to fully exploit MIC architectures

    CERN Document Server

    Wang, Endong; Shen, Bo; Zhang, Guangyong; Lu, Xiaowei; Wu, Qing; Wang, Yajuan

    2014-01-01

    The aim of this book is to explain to high-performance computing (HPC) developers how to utilize the Intel® Xeon Phi™ series products efficiently. To that end, it introduces some computing grammar, programming technology and optimization methods for using many-integrated-core (MIC) platforms and also offers tips and tricks for actual use, based on the authors' first-hand optimization experience.The material is organized in three sections. The first section, "Basics of MIC", introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment

  15. Two-dimensional systolic-array architecture for pixel-level vision tasks

    Science.gov (United States)

    Vijverberg, Julien A.; de With, Peter H. N.

    2010-05-01

    This paper presents ongoing work on the design of a two-dimensional (2D) systolic array for image processing. This component is designed to operate on a multi-processor system-on-chip. In contrast with other 2D systolic-array architectures and many other hardware accelerators, we investigate the applicability of executing multiple tasks in a time-interleaved fashion on the Systolic Array (SA). This leads to a lower external memory bandwidth and better load balancing of the tasks on the different processing tiles. To enable the interleaving of tasks, we add a shadow-state register for fast task switching. To reduce the number of accesses to the external memory, we propose to share the communication assist between consecutive tasks. A preliminary, non-functional version of the SA has been synthesized for an XV4S25 FPGA device and yields a maximum clock frequency of 150 MHz requiring 1,447 slices and 5 memory blocks. Mapping tasks from video content-analysis applications from literature on the SA yields reductions in the execution time of 1-2 orders of magnitude compared to the software implementation. We conclude that the choice for an SA architecture is useful, but a scaled version of the SA featuring less logic with fewer processing and pipeline stages yielding a lower clock frequency, would be sufficient for a video analysis system-on-chip.

  16. A hardware architecture for real-time shadow removal in high-contrast video

    Science.gov (United States)

    Verdugo, Pablo; Pezoa, Jorge E.; Figueroa, Miguel

    2017-09-01

    Broadcasting an outdoor sports event at daytime is a challenging task due to the high contrast that exists between areas in the shadow and light conditions within the same scene. Commercial cameras typically do not handle the high dynamic range of such scenes in a proper manner, resulting in broadcast streams with very little shadow detail. We propose a hardware architecture for real-time shadow removal in high-resolution video, which reduces the shadow effect and simultaneously improves shadow details. The algorithm operates only on the shadow portions of each video frame, thus improving the results and producing more realistic images than algorithms that operate on the entire frame, such as simplified Retinex and histogram shifting. The architecture receives an input in the RGB color space, transforms it into the YIQ space, and uses color information from both spaces to produce a mask of the shadow areas present in the image. The mask is then filtered using a connected components algorithm to eliminate false positives and negatives. The hardware uses pixel information at the edges of the mask to estimate the illumination ratio between light and shadow in the image, which is then used to correct the shadow area. Our prototype implementation simultaneously processes up to 7 video streams of 1920×1080 pixels at 60 frames per second on a Xilinx Kintex-7 XC7K325T FPGA.
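
    The mask-generation stage can be sketched as follows; the RGB-to-YIQ matrix is the standard NTSC transform, while the thresholds and the exact decision rule are our assumptions rather than the authors' (their pipeline then cleans this mask with a connected-components filter, as described above).

    ```python
    import numpy as np

    # Sketch of the shadow-mask stage only: convert RGB to YIQ and flag pixels
    # that are dark in luminance (Y) but not strongly chromatic (I, Q), a
    # common heuristic for cast shadows. Thresholds are illustrative.
    RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                        [0.596, -0.274, -0.322],
                        [0.211, -0.523,  0.312]])

    def shadow_mask(rgb, y_thresh=0.35, chroma_thresh=0.15):
        yiq = rgb @ RGB2YIQ.T                   # rgb in [0,1], shape (H, W, 3)
        y, i, q = yiq[..., 0], yiq[..., 1], yiq[..., 2]
        chroma = np.hypot(i, q)
        return (y < y_thresh) & (chroma < chroma_thresh)

    frame = np.random.rand(1080, 1920, 3)       # stand-in for a video frame
    mask = shadow_mask(frame)
    print(mask.mean())                          # fraction of pixels flagged
    ```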

  17. High-Precision Phenotyping of Grape Bunch Architecture Using Fast 3D Sensor and Automation

    Directory of Open Access Journals (Sweden)

    Florian Rist

    2018-03-01

    Wine growers prefer cultivars with looser bunch architecture because of the decreased risk of bunch rot. As a consequence, grapevine breeders have to select seedlings and new cultivars with regard to appropriate bunch traits. Bunch architecture is a mosaic of different single traits, which makes phenotyping labor-intensive and time-consuming. In the present study, a fast and high-precision phenotyping pipeline was developed. The optical sensor Artec Spider 3D scanner (Artec 3D, L-1466, Luxembourg) was used to generate dense 3D point clouds of grapevine bunches under lab conditions, and an automated analysis software called 3D-Bunch-Tool was developed to extract different single 3D bunch traits, i.e., the number of berries, berry diameter, single berry volume, total volume of berries, convex hull volume of grapes, bunch width and bunch length. The method was validated on whole bunches of different grapevine cultivars and phenotypically variable breeding material. Reliable phenotypic data were obtained which show highly significant correlations (up to r² = 0.95 for berry number) compared to ground truth data. Moreover, it was shown that the Artec Spider can be used directly in the field, where the achieved data show comparable precision with regard to the lab application. This non-invasive and non-contact field application facilitates the first high-precision phenotyping pipeline based on 3D bunch traits in large plant sets.

  18. High-Precision Phenotyping of Grape Bunch Architecture Using Fast 3D Sensor and Automation.

    Science.gov (United States)

    Rist, Florian; Herzog, Katja; Mack, Jenny; Richter, Robert; Steinhage, Volker; Töpfer, Reinhard

    2018-03-02

    Wine growers prefer cultivars with looser bunch architecture because of the decreased risk of bunch rot. As a consequence, grapevine breeders have to select seedlings and new cultivars with regard to appropriate bunch traits. Bunch architecture is a mosaic of different single traits, which makes phenotyping labor-intensive and time-consuming. In the present study, a fast and high-precision phenotyping pipeline was developed. The optical sensor Artec Spider 3D scanner (Artec 3D, L-1466, Luxembourg) was used to generate dense 3D point clouds of grapevine bunches under lab conditions and an automated analysis software called 3D-Bunch-Tool was developed to extract different single 3D bunch traits, i.e., the number of berries, berry diameter, single berry volume, total volume of berries, convex hull volume of grapes, bunch width and bunch length. The method was validated on whole bunches of different grapevine cultivars and phenotypically variable breeding material. Reliable phenotypic data were obtained which show highly significant correlations (up to r² = 0.95 for berry number) compared to ground truth data. Moreover, it was shown that the Artec Spider can be used directly in the field where achieved data show comparable precision with regard to the lab application. This non-invasive and non-contact field application facilitates the first high-precision phenotyping pipeline based on 3D bunch traits in large plant sets.

  19. Fuzzy logic controller architecture for water level control in nuclear power plant steam generator using ANFIS training method

    International Nuclear Information System (INIS)

    Vosoughi, Naser; Ekrami, AmirHasan; Naseri, Zahra

    2003-01-01

    Since suitable control of the water level can greatly enhance the operation of a power station, a fuzzy logic controller is applied to control the steam generator water level in a pressurized water reactor. The method does not require a detailed mathematical model of the object to be controlled. It is shown that two inputs, a single output and the least number of rules (9 rules) suffice for the controller, and the ANFIS training method is employed to model the functions in the controlled system. Using the ANFIS training method, the initial membership functions are trained and appropriate functions are generated to control the water level inside the steam generator while using the stated rules. The proposed architecture can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data. This fuzzy logic controller is applied to steam generator level control in computer simulations. The simulation results confirm the excellent performance of this control architecture in comparison with a well-tuned PID controller. (author)
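
    To fix ideas, here is a toy two-input, single-output, nine-rule controller of the kind described; this is a zeroth-order Sugeno sketch with hand-set membership functions and rule outputs of our own invention, not the trained ANFIS of the paper.

    ```python
    import numpy as np

    # Toy 9-rule controller: inputs are level error and its rate, each with
    # three triangular membership functions (negative, zero, positive); every
    # rule maps to a constant output, combined by weighted averaging.
    def tri(x, center, width=1.0):
        return max(0.0, 1.0 - abs(x - center) / width)

    CENTERS = (-1.0, 0.0, 1.0)                  # negative, zero, positive

    RULE_OUT = np.array([[-1.0, -0.7, 0.0],     # rows: error N/Z/P
                         [-0.7,  0.0, 0.7],     # cols: error-rate N/Z/P
                         [ 0.0,  0.7, 1.0]])

    def fuzzy_control(err, derr):
        err, derr = np.clip([err, derr], -1.0, 1.0)
        mu_e = np.array([tri(err, c) for c in CENTERS])
        mu_d = np.array([tri(derr, c) for c in CENTERS])
        w = np.outer(mu_e, mu_d)                # firing strengths of 9 rules
        return float(np.sum(w * RULE_OUT) / np.sum(w))

    print(fuzzy_control(0.4, -0.1))             # feedwater command in [-1, 1]
    ```

    ANFIS training would replace the hand-set membership parameters and rule outputs above with values fitted to input-output data.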

  20. A high-speed DAQ framework for future high-level trigger and event building clusters

    International Nuclear Information System (INIS)

    Caselle, M.; Perez, L.E. Ardila; Balzer, M.; Dritschler, T.; Kopmann, A.; Mohr, H.; Rota, L.; Vogelgesang, M.; Weber, M.

    2017-01-01

    Modern data acquisition and trigger systems require a throughput of several GB/s and latencies of the order of microseconds. To satisfy such requirements, a heterogeneous readout system based on FPGA readout cards and GPU-based computing nodes coupled by InfiniBand has been developed. The incoming data from the back-end electronics are delivered directly into the internal memory of the GPUs through dedicated peer-to-peer PCIe communication. High-performance DMA engines have been developed for direct communication between FPGAs and GPUs using 'DirectGMA (AMD)' and 'GPUDirect (NVIDIA)' technologies. The proposed infrastructure is a candidate for future generations of event-building clusters, high-level trigger filter farms and low-level trigger systems. In this paper the heterogeneous FPGA-GPU architecture is presented and its performance discussed.

  1. Non-Planar Nanotube and Wavy Architecture Based Ultra-High Performance Field Effect Transistors

    KAUST Repository

    Hanna, Amir

    2016-01-01

    This dissertation also introduces a novel thin-film-transistor architecture, named the Wavy Channel (WC) architecture, which allows device width to be extended by integrating vertical fin-like substrate corrugations, giving extra width without consuming additional chip area.

  2. The management of high-level radioactive wastes

    International Nuclear Information System (INIS)

    Lennemann, Wm.L.

    1979-01-01

    The definition of high-level radioactive wastes is given. The following aspects of high-level radioactive wastes' management are discussed: fuel reprocessing and high-level waste; storage of high-level liquid waste; solidification of high-level waste; interim storage of solidified high-level waste; disposal of high-level waste; disposal of irradiated fuel elements as a waste

  3. Design and Verification of Digital Architecture of 65K Pixel Readout Chip for High-Energy Physics

    CERN Document Server

    Poikela, Tuomas; Paakkulainen, J

    2010-01-01

    The feasibility of designing and implementing a front-end ASIC for the upgrade of the VELO detector of the LHCb experiment at CERN, using IBM's 130 nm standard CMOS process and a standard cell library, is studied in this thesis. The proposed architecture is designed to cope with high data rates and continuous data taking. It operates without any external trigger, recording every hit signal the ASIC receives from a sensor chip and then transmitting the information to the next level of electronics, for example to FPGAs. This thesis focuses on the design, implementation and functional verification of the digital electronics of the active pixel area. The area requirements are dictated by the pixel geometry (55 μm × 55 μm), the power requirements (20 W/module) by the restricted cooling capabilities of the module consisting of 10 chips, and the output bandwidth requirements by the data rate (< 10 Gbit/s) produced by a particle flux passing through the chip. The design work was carried out using transaction...

  4. Integration of highly probabilistic sources into optical quantum architectures: perpetual quantum computation

    International Nuclear Information System (INIS)

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2011-01-01

    In this paper, we introduce a design for an optical topological cluster state computer constructed exclusively from a single quantum component. Unlike previous efforts, we eliminate the need for on-demand, high-fidelity photon sources and detectors and replace them with the same device utilized to create photon/photon entanglement. This introduces highly probabilistic elements into the optical architecture while maintaining complete specificity of the structure and operation for a large-scale computer. Photons in this system are continually recycled back into the preparation network, allowing an arbitrarily deep three-dimensional cluster to be prepared using a comparatively small number of photonic qubits and, consequently, the elimination of high-frequency, deterministic photon sources.

  5. High-level radioactive wastes. Supplement 1

    International Nuclear Information System (INIS)

    McLaren, L.H.

    1984-09-01

    This bibliography contains information on high-level radioactive wastes included in the Department of Energy's Energy Data Base from August 1982 through December 1983. These citations are to research reports, journal articles, books, patents, theses, and conference papers from worldwide sources. Five indexes, each preceded by a brief description, are provided: Corporate Author, Personal Author, Subject, Contract Number, and Report Number. 1452 citations

  6. PAIRWISE BLENDING OF HIGH LEVEL WASTE

    International Nuclear Information System (INIS)

    CERTA, P.J.

    2006-01-01

    The primary objective of this study is to demonstrate a mission scenario that uses pairwise and incidental blending of high level waste (HLW) to reduce the total mass of HLW glass. Secondary objectives include understanding how recent refinements to the tank waste inventory and solubility assumptions affect the mass of HLW glass and how logistical constraints may affect the efficacy of HLW blending

  7. Materials for high-level waste containment

    International Nuclear Information System (INIS)

    Marsh, G.P.

    1982-01-01

    The function of the high-level radioactive waste container in storage and of a container/overpack combination in disposal is considered. The consequent properties required from potential fabrication materials are discussed. The strategy adopted in selecting containment materials and the experimental programme underway to evaluate them are described. (U.K.)

  8. Innovative architecture design for high performance organic and hybrid multi-junction solar cells

    Science.gov (United States)

    Li, Ning; Spyropoulos, George D.; Brabec, Christoph J.

    2017-08-01

    The multi-junction concept is especially attractive to the photovoltaic (PV) research community owing to its potential to overcome the Shockley-Queisser limit of single-junction solar cells. Tremendous research interest is now focused on the development of high-performance absorbers and novel device architectures for emerging PV technologies, such as organic and perovskite PVs. It has been predicted that the multi-junction concept is able to push the organic and perovskite PV technologies toward the 20% and 30% efficiency benchmarks, respectively, indicating a bright future for commercialization of the emerging PV technologies. In this contribution, we demonstrate innovative architecture design for solution-processed, highly functional organic and hybrid multi-junction solar cells. A simple but elegant approach to fabricating organic and hybrid multi-junction solar cells will be introduced. By laminating single organic/hybrid solar cells together through an intermediate layer, the manufacturing cost and complexity of large-scale multi-junction solar cells can be significantly reduced. This smart approach to balancing the photocurrents as well as the open-circuit voltages in multi-junction solar cells will be demonstrated and discussed in detail.

  9. Space-Filling Supercapacitor Carpets: Highly scalable fractal architecture for energy storage

    Science.gov (United States)

    Tiliakos, Athanasios; Trefilov, Alexandra M. I.; Tanasǎ, Eugenia; Balan, Adriana; Stamatin, Ioan

    2018-04-01

    Revamping ground-breaking ideas from fractal geometry, we propose an alternative micro-supercapacitor configuration realized by laser-induced graphene (LIG) foams produced via laser pyrolysis of inexpensive commercial polymers. The Space-Filling Supercapacitor Carpet (SFSC) architecture introduces the concept of nested electrodes based on the pre-fractal Peano space-filling curve, arranged in a symmetrical equilateral setup that incorporates multiple parallel capacitor cells sharing common electrodes for maximum efficiency and optimal length-to-area distribution. We elucidate the theoretical foundations of the SFSC architecture, and we introduce innovations (high-resolution vector-mode printing) in the LIG method that allow for the realization of flexible and scalable devices based on low iterations of the Peano algorithm. SFSCs exhibit distributed capacitance properties, leading to capacitance, energy, and power ratings proportional to the number of nested electrodes (up to 4.3 mF, 0.4 μWh, and 0.2 mW for the largest tested model of low iteration using aqueous electrolytes), with competitively high energy and power densities. This can pave the way for full scalability in energy storage, reaching beyond the scale of micro-supercapacitors toward larger and more demanding applications.

  10. Molecular-level architectural design using benzothiadiazole-based polymers for photovoltaic applications.

    Science.gov (United States)

    Viswanathan, Vinila N; Rao, Arun D; Pandey, Upendra K; Kesavan, Arul Varman; Ramamurthy, Praveen C

    2017-01-01

    A series of low-band-gap, planar conjugated polymers, P1 (PFDTBT), P2 (PFDTDFBT) and P3 (PFDTTBT), based on fluorene and benzothiadiazole, was synthesized. The effect of fluorine substitution and fused aromatic spacers on the optoelectronic and photovoltaic performance was studied. The polymer derived from dithienylated benzothiadiazole and fluorene, P1, exhibited a highest occupied molecular orbital (HOMO) energy level at -5.48 eV. Density functional theory (DFT) studies as well as experimental measurements suggested that upon substitution of the acceptor with fluorine, both the HOMO and lowest unoccupied molecular orbital (LUMO) energy levels of the resulting polymer, P2, were lowered, leading to a higher open-circuit voltage and short-circuit current, with an overall improvement of more than 110% for the photovoltaic devices. Moreover, a decrease in the torsion angle between the units was also observed for the fluorinated polymer P2 due to the enhanced electrostatic interaction between the fluorine substituents and sulfur atoms, leading to a high hole mobility. The use of a fused π-bridge in polymer P3 to enhance the planarity as compared to the P1 backbone was also studied. This enhanced planarity led to the highest observed mobility among the three reported polymers as well as to an improvement in the device efficiency by more than 40% for P3.

  11. Molecular-level architectural design using benzothiadiazole-based polymers for photovoltaic applications

    Science.gov (United States)

    Viswanathan, Vinila N; Rao, Arun D; Pandey, Upendra K; Kesavan, Arul Varman

    2017-01-01

    A series of low band gap, planar conjugated polymers, P1 (PFDTBT), P2 (PFDTDFBT) and P3 (PFDTTBT), based on fluorene and benzothiadiazole, was synthesized. The effect of fluorine substitution and fused aromatic spacers on the optoelectronic and photovoltaic performance was studied. The polymer, derived from dithienylated benzothiodiazole and fluorene, P1, exhibited a highest occupied molecular orbital (HOMO) energy level at −5.48 eV. Density functional theory (DFT) studies as well as experimental measurements suggested that upon substitution of the acceptor with fluorine, both the HOMO and lowest unoccupied molecular orbital (LUMO) energy levels of the resulting polymer, P2, were lowered, leading to a higher open circuit voltage and short circuit current with an overall improvement of more than 110% for the photovoltaic devices. Moreover, a decrease in the torsion angle between the units was also observed for the fluorinated polymer P2 due to the enhanced electrostatic interaction between the fluorine substituents and sulfur atoms, leading to a high hole mobility. The use of a fused π-bridge in polymer P3 for the enhancement of the planarity as compared to the P1 backbone was also studied. This enhanced planarity led to the highest observed mobility among the reported three polymers as well as to an improvement in the device efficiency by more than 40% for P3. PMID:28546844

  12. Full integrated system of real-time monitoring based on distributed architecture for the high temperature engineering test reactor (HTTR)

    International Nuclear Information System (INIS)

    Subekti, Muhammad; Ohno, Tomio; Kudo, Kazuhiko; Takamatsu, Kuniyoshi; Nabeshima, Kunihiko

    2005-01-01

    A new monitoring system scheme based on a distributed architecture for the High Temperature Engineering Test Reactor (HTTR) is proposed to assure consistency of the real-time processing of the expanded system. Distributing the monitoring tasks across client PCs as an alternative architecture maximizes the throughput and capabilities of the system even if the monitoring tasks suffer a shortage of bandwidth. The prototype of the on-line monitoring system has been developed successfully and will be tested at the actual HTTR site. (author)

  13. A non-destructive crossbar architecture of multi-level memory-based resistor

    Science.gov (United States)

    Sahebkarkhorasani, Seyedmorteza

    Nowadays, researchers are trying to shrink the memory cell in order to increase the capacity of the memory system and reduce hardware costs. In recent years, there has been a revolution in electronics, using fundamentals of physics to build new memories for computer applications in order to increase capacity and decrease power consumption. Increasing the capacity of the memory causes a growth in chip area. From 1971 to 2012, the semiconductor manufacturing process improved from 6 μm to 22 nm. In May 2008, S. Williams stated that "it is time to stop shrinking". In his paper, he declared that the process of shrinking the memory element has recently become very slow and it is time to use another alternative in order to create memory elements [9]. In this project, we present a new design of a memory array using the newly introduced element named the Memristor [3]. The Memristor is a two-terminal passive electrical element that relates charge and magnetic flux to each other. The device had remained largely unexplored since 1971, when it was postulated by Chua and introduced as the fourth fundamental passive element alongside the capacitor, inductor and resistor [3]. The Memristor has a dynamic resistance and can retain its previous value even after the power supply is disconnected. Due to this interesting behavior, the Memristor can be a good replacement for all of the Non-Volatile Memories (NVMs) in the near future. Combining this newly introduced element with the nanowire crossbar architecture yields a structure called the Memristor crossbar. Some frameworks utilizing Memristor crossbar arrays have recently been introduced in the literature, but there are many challenges in implementing the Memristor crossbar array due to fabrication and device limitations. In this work, we propose a simple design of a Memristor crossbar array architecture which uses input feedback in order to preserve its data after each read operation.

  14. A proposed scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; Vanberg, R.

    1990-01-01

    A new era of high-energy physics research is beginning requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power, well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a proposed new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of Gigabytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the proposed Scalable Parallel Open Architecture data acquisition system are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build a prototype of the proposed data acquisition system architecture is given in the paper. The major component of the system, a self-routing parallel event builder, is described in detail
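
    The self-routing idea is that the destination of every event fragment is computed locally from the event number, so the fabric needs no central event manager. Below is a minimal Python sketch of that routing pattern; the names (route, N_SOURCES, N_BUILDERS) and the simple modulo routing function are invented stand-ins for the hardware fabric.

```python
# Illustrative sketch of a self-routing parallel event builder: each data
# source stamps its fragments with an event number, and a fragment's
# destination is a pure function of that number alone.

from collections import defaultdict

N_SOURCES, N_BUILDERS = 8, 4

def route(event_id):
    """Destination derived from the event number (self-routing)."""
    return event_id % N_BUILDERS

builders = [defaultdict(list) for _ in range(N_BUILDERS)]

for event_id in range(100):                 # a burst of events
    for source in range(N_SOURCES):         # each source emits one fragment
        fragment = (source, f"data-{event_id}-{source}")
        builders[route(event_id)][event_id].append(fragment)

# every builder node ends up holding complete events, built in parallel
for node in builders:
    assert all(len(frags) == N_SOURCES for frags in node.values())
```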

  15. Low Power Design with High-Level Power Estimation and Power-Aware Synthesis

    CERN Document Server

    Ahuja, Sumit; Shukla, Sandeep Kumar

    2012-01-01

    Low-power ASIC/FPGA based designs are important due to the need for extended battery life, reduced form factor, and lower packaging and cooling costs for electronic devices. These products require fast turnaround time because of the increasing demand for handheld electronic devices such as cell-phones, PDAs and high performance machines for data centers. To achieve short time to market, design flows must facilitate a much shortened time-to-product requirement. High-level modeling, architectural exploration and direct synthesis of design from high level description enable this design process. This book presents novel research techniques, algorithms, methodologies and experimental results for high level power estimation and power aware high-level synthesis. Readers will learn to apply such techniques to enable design flows resulting in shorter time to market and successful low power ASIC/FPGA design. Integrates power estimation and reduction for high level synthesis, with low-power, high-level design; Shows spec...

  16. High spin levels in 151Ho

    International Nuclear Information System (INIS)

    Gizon, J.; Gizon, A.; Andre, S.; Genevey, J.; Jastrzebski, J.; Kossakowski, R.; Moszinski, M.; Preibisz, Z.

    1981-02-01

    We report here on the first study of the level structure of 151Ho. High spin levels in 151Ho have been populated in the 141Pr + 16O and 144Sm + 12C reactions. The level structure has been established up to 6.6 MeV energy and the spins and parities determined up to 49/2−. Most of the proposed level configurations can be explained by the coupling of h11/2 protons to f7/2 and/or h9/2 neutrons. An isomer with a 14 ± 3 ns half-life and a delayed gamma multiplicity equal to 17 ± 2 has been found. Its spin is larger than 57/2 ℏ.

  17. SEMICONDUCTOR INTEGRATED CIRCUITS: A high performance 90 nm CMOS SAR ADC with hybrid architecture

    Science.gov (United States)

    Xingyuan, Tong; Jianming, Chen; Zhangming, Zhu; Yintang, Yang

    2010-01-01

    A 10-bit 2.5 MS/s SAR A/D converter is presented. In the circuit design, an R-C hybrid architecture D/A converter, pseudo-differential comparison architecture and low power voltage level shifters are utilized. Design challenges and considerations are also discussed. In the layout design, each unit resistor is sided by dummies for good matching performance, and the capacitors are routed with a common-central symmetry method to reduce the nonlinearity error. This proposed converter is implemented based on a 90 nm CMOS logic process. With a 3.3 V analog supply and a 1.0 V digital supply, the differential and integral nonlinearity are measured to be less than 0.36 LSB and 0.69 LSB respectively. With an input frequency of 1.2 MHz at a 2.5 MS/s sampling rate, the SFDR and ENOB are measured to be 72.86 dB and 9.43 bits respectively, and the power dissipation is measured to be 6.62 mW including the output drivers. This SAR A/D converter occupies an area of 238 × 214 μm². The design results of this converter show that it is suitable for multi-supply embedded SoC applications.
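
    The successive-approximation register at the heart of such a converter performs a binary search against its internal D/A converter, resolving one bit per cycle. Below is a minimal sketch of that loop, with an ideal DAC and comparator standing in for the paper's R-C hybrid DAC and pseudo-differential comparator; the function name and constants are illustrative.

```python
# SAR conversion loop: binary search from MSB to LSB against an ideal DAC.

def sar_convert(vin, vref=3.3, bits=10):
    """Return the digital code for vin using successive approximation."""
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)            # tentatively set this bit
        vdac = vref * trial / (1 << bits)    # ideal DAC output for the trial
        if vin >= vdac:                      # comparator decision
            code = trial                     # keep the bit, else drop it
    return code

assert sar_convert(1.65) == 512  # mid-scale input -> MSB-only code
```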

  18. A high performance 90 nm CMOS SAR ADC with hybrid architecture

    International Nuclear Information System (INIS)

    Tong Xingyuan; Zhu Zhangming; Yang Yintang; Chen Jianming

    2010-01-01

    A 10-bit 2.5 MS/s SAR A/D converter is presented. In the circuit design, an R-C hybrid architecture D/A converter, pseudo-differential comparison architecture and low power voltage level shifters are utilized. Design challenges and considerations are also discussed. In the layout design, each unit resistor is sided by dummies for good matching performance, and the capacitors are routed with a common-central symmetry method to reduce the nonlinearity error. This proposed converter is implemented based on a 90 nm CMOS logic process. With a 3.3 V analog supply and a 1.0 V digital supply, the differential and integral nonlinearity are measured to be less than 0.36 LSB and 0.69 LSB respectively. With an input frequency of 1.2 MHz at a 2.5 MS/s sampling rate, the SFDR and ENOB are measured to be 72.86 dB and 9.43 bits respectively, and the power dissipation is measured to be 6.62 mW including the output drivers. This SAR A/D converter occupies an area of 238 × 214 μm². The design results of this converter show that it is suitable for multi-supply embedded SoC applications. (semiconductor integrated circuits)

  19. Disposal of high-level radioactive waste

    International Nuclear Information System (INIS)

    Glasby, G.P.

    1977-01-01

    Although controversy surrounding the possible introduction of nuclear power into New Zealand has raised many points including radiation hazards, reactor safety, capital costs, sources of uranium and earthquake risks on the one hand versus energy conservation and alternative sources of energy on the other, one problem remains paramount and is of global significance - the storage and dumping of the high-level radioactive wastes of the reactor core. The generation of abundant supplies of energy now in return for the storage of these long-lived highly radioactive wastes has been dubbed the so-called Faustian bargain. This article discusses the growth of the nuclear industry and its implications to high-level waste disposal particularly in the deep-sea bed. (auth.)

  20. High Performance Flexible Pseudocapacitor based on Nano-architectured Spinel Nickel Cobaltite Anchored Multiwall Carbon Nanotubes

    International Nuclear Information System (INIS)

    Shakir, Imran

    2014-01-01

    Highlights: • Two-step fabrication method for nano-architectured spinel nickel cobaltite (NiCo2O4) anchored MWCNTs composite. • High performance flexible energy-storage devices. • The NiCo2O4 anchored MWCNTs exhibit a capacitance of 2032 F g−1, which is 1.62 times greater than pristine NiCo2O4 at 1 A g−1. - Abstract: We demonstrate a facile two-step fabrication method for nano-architectured spinel nickel cobaltite (NiCo2O4) anchored multiwall carbon nanotube (MWCNT) based electrodes for high performance flexible energy-storage devices. As an electrode material for flexible supercapacitors, the NiCo2O4 anchored MWCNTs exhibit a high specific capacitance of 2032 F g−1, which is nearly 1.62 times greater than that of pristine NiCo2O4 nanoflakes at 1 A g−1. The synthesized NiCo2O4 anchored MWCNTs composite shows excellent rate performance (83.96% capacity retention at 30 A g−1) and stability, with coulombic efficiency over 96% after 5,000 cycles when fully charged/discharged at 1 A g−1. Furthermore, NiCo2O4 anchored MWCNTs achieve a maximum energy density of 48.32 Wh kg−1 at a power density of 480 W kg−1, which is 60% higher than the pristine NiCo2O4 electrode and significantly outperforms NiCo2O4-based electrode materials currently used in state-of-the-art supercapacitors throughout the literature. This superior rate performance and high capacity offered by NiCo2O4 anchored MWCNTs is mainly due to enhanced electronic and ionic conductivity, which provides a short diffusion path for ions and easy access of electrolyte to the nickel cobaltite redox centers, in addition to the high conductivity of the MWCNTs.

  1. Python based high-level synthesis compiler

    Science.gov (United States)

    Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard

    2014-11-01

    This paper presents a Python-based high-level synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute cycle of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation and first results of the Python-based compiler.
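
    As a toy illustration of the Python-to-VHDL mapping (not the compiler described in the paper), the sketch below walks the AST of a combinational Python function and emits a VHDL entity; the function names and the restriction to single-return bitwise expressions are simplifying assumptions made here.

```python
# Toy HLS idea: translate a bitwise Python function into a VHDL entity
# by walking its abstract syntax tree.

import ast, inspect

OPS = {ast.BitAnd: "and", ast.BitOr: "or", ast.BitXor: "xor"}

def emit(node):
    if isinstance(node, ast.BinOp):
        return f"({emit(node.left)} {OPS[type(node.op)]} {emit(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    raise NotImplementedError(ast.dump(node))

def to_vhdl(func):
    fdef = ast.parse(inspect.getsource(func)).body[0]
    args = [a.arg for a in fdef.args.args]
    expr = emit(fdef.body[0].value)          # assumes a single return stmt
    ports = ";\n        ".join(f"{a} : in std_logic" for a in args)
    return (f"entity {fdef.name} is\n    port({ports};\n"
            f"        y : out std_logic);\nend;\n"
            f"architecture rtl of {fdef.name} is\nbegin\n"
            f"    y <= {expr};\nend rtl;")

def majority(a, b, c):
    return (a & b) | (b & c) | (a & c)

print(to_vhdl(majority))   # prints a synthesizable combinational entity
```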

  2. The CMS High-Level Trigger

    International Nuclear Information System (INIS)

    Covarelli, R.

    2009-01-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the 'High-Level Trigger' (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, τ leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.
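
    A quick check of the quoted figures shows the rejection power the HLT must deliver; the two rates below come from the abstract, and the script itself is merely a worked example.

```python
# HLT rate-reduction arithmetic from the figures quoted above.
l1_rate_hz = 100_000   # maximum Level-1 accept rate
hlt_rate_hz = 150      # HLT output rate to permanent storage
print(f"required rejection factor: ~{l1_rate_hz / hlt_rate_hz:.0f}")  # ~667
```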

  3. The CMS High-Level Trigger

    CERN Document Server

    Covarelli, Roberto

    2009-01-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the "High-Level Trigger" (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, tau leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  4. The CMS High-Level Trigger

    Science.gov (United States)

    Covarelli, R.

    2009-12-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the "High-Level Trigger" (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, τ leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  5. High-level waste processing and disposal

    International Nuclear Information System (INIS)

    Crandall, J.L.; Krause, H.; Sombret, C.; Uematsu, K.

    1984-01-01

    The national high-level waste disposal plans for France, the Federal Republic of Germany, Japan, and the United States are covered. Three conclusions are reached. The first conclusion is that an excellent technology already exists for high-level waste disposal. With appropriate packaging, spent fuel seems to be an acceptable waste form. Borosilicate glass reprocessing waste forms are well understood, in production in France, and scheduled for production in the next few years in a number of other countries. For final disposal, a number of candidate geological repository sites have been identified and several demonstration sites opened. The second conclusion is that adequate financing and a legal basis for waste disposal are in place in most countries. Costs of high-level waste disposal will probably add about 5 to 10% to the costs of nuclear electric power. The third conclusion is less optimistic. Political problems remain formidable in highly conservative regulations, in qualifying a final disposal site, and in securing acceptable transport routes

  6. Frontier: High Performance Database Access Using Standard Web Components in a Scalable Multi-Tier Architecture

    International Nuclear Information System (INIS)

    Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.

    2004-01-01

    A high performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF Experiment at Fermilab is establishing processing centers around the world imposing a high demand on their database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid Proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan
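
    The tiered read path can be pictured as nested caches that fall through to the database only on a miss. Below is a conceptual Python sketch; the Tier and Database classes and the query string are hypothetical stand-ins for the Squid proxies and the Tomcat/DB tier.

```python
# Conceptual sketch of a multi-tier cached read path: client-side proxy,
# site proxy, then the database. Names are illustrative only.

class Tier:
    def __init__(self, name, backend):
        self.name, self.backend, self.cache = name, backend, {}

    def get(self, query):
        if query not in self.cache:          # cache miss: go one tier deeper
            self.cache[query] = self.backend.get(query)
        return self.cache[query]

class Database:
    def get(self, query):
        print(f"DB hit for {query!r}")       # the expensive path
        return f"<xml>{query}</xml>"         # XML datagram payload

site_proxy = Tier("site-squid", Database())
client_proxy = Tier("client-squid", site_proxy)

client_proxy.get("calib/run/1234")   # one DB hit
client_proxy.get("calib/run/1234")   # served from cache, DB untouched
```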

  7. Ultra-High Density Holographic Memory Module with Solid-State Architecture

    Science.gov (United States)

    Markov, Vladimir B.

    2000-01-01

    NASA's terrestrial, space, and deep-space missions require technology that allows storing, retrieving, and processing a large volume of information. Holographic memory offers high-density data storage with parallel access and high throughput. Several methods exist for data multiplexing based on the fundamental principles of volume hologram selectivity. We recently demonstrated that a spatial (amplitude-phase) encoding of the reference wave (SERW) looks promising as a way to increase the storage density. The SERW hologram offers a method other than traditional methods of selectivity, such as spatial de-correlation between recorded and reconstruction fields. In this report we present the experimental results of the SERW-hologram memory module with solid-state architecture, which is of particular interest for space operations.

  8. Penning traps with unitary architecture for storage of highly charged ions

    International Nuclear Information System (INIS)

    Tan, Joseph N.; Guise, Nicholas D.; Brewer, Samuel M.

    2012-01-01

    Penning traps are made extremely compact by embedding rare-earth permanent magnets in the electrode structure. Axially-oriented NdFeB magnets are used in unitary architectures that couple the electric and magnetic components into an integrated structure. We have constructed a two-magnet Penning trap with radial access to enable the use of laser or atomic beams, as well as the collection of light. An experimental apparatus equipped with ion optics is installed at the NIST electron beam ion trap (EBIT) facility, constrained to fit within 1 meter at the end of a horizontal beamline for transporting highly charged ions. Highly charged ions of neon and argon, extracted with initial energies up to 4000 eV per unit charge, are captured and stored to study the confinement properties of a one-magnet trap and a two-magnet trap. Design considerations and some test results are discussed.

  9. Penning traps with unitary architecture for storage of highly charged ions.

    Science.gov (United States)

    Tan, Joseph N; Brewer, Samuel M; Guise, Nicholas D

    2012-02-01

    Penning traps are made extremely compact by embedding rare-earth permanent magnets in the electrode structure. Axially-oriented NdFeB magnets are used in unitary architectures that couple the electric and magnetic components into an integrated structure. We have constructed a two-magnet Penning trap with radial access to enable the use of laser or atomic beams, as well as the collection of light. An experimental apparatus equipped with ion optics is installed at the NIST electron beam ion trap (EBIT) facility, constrained to fit within 1 meter at the end of a horizontal beamline for transporting highly charged ions. Highly charged ions of neon and argon, extracted with initial energies up to 4000 eV per unit charge, are captured and stored to study the confinement properties of a one-magnet trap and a two-magnet trap. Design considerations and some test results are discussed.

  10. The CMS High Level Trigger System: Experience and Future Development

    CERN Document Server

    Bauer, Gerry; Bowen, Matthew; Branson, James G; Bukowiec, Sebastian; Cittolin, Sergio; Coarasa, J A; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Flossdorf, Alexander; Gigi, Dominique; Glege, Frank; Gomez-Reino, R; Hartl, Christian; Hegeman, Jeroen; Holzner, André; Y L Hwong; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, R K; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Shpakov, Dennis; Simon, M; Spataru, A C; Sumorok, Konstanty

    2012-01-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, are discussed.

  11. DUACS: Toward High Resolution Sea Level Products

    Science.gov (United States)

    Faugere, Y.; Gerald, D.; Ubelmann, C.; Claire, D.; Pujol, M. I.; Antoine, D.; Desjonqueres, J. D.; Picot, N.

    2016-12-01

    The DUACS system produces, as part of the CNES/SALP project and the Copernicus Marine Environment and Monitoring Service, high quality multimission altimetry Sea Level products for oceanographic applications, climate forecasting centers, and the geophysics and biology communities. These products consist of directly usable and easy to manipulate Level 3 (along-track cross-calibrated SLA) and Level 4 products (multiple sensors merged as maps or time series) and are available in global and regional versions (Mediterranean Sea, Arctic, European Shelves…). The quality of the products is today limited by the altimeter technology "Low Resolution Mode" (LRM) and the lack of available observations. The launch of two new satellites in 2016, Jason-3 and Sentinel-3A, opens new perspectives. Using the global Synthetic Aperture Radar mode (SARM) coverage of S3A and optimizing the LRM altimeter processing (retracking, editing, ...) will allow us to fully exploit the fine-scale content of the altimetric missions. Thanks to this increase in real-time altimetry observations we will also be able to improve Level-4 products by combining these new Level-3 products with new mapping methodologies, such as dynamic interpolation. Finally, these improvements will benefit downstream products: geostrophic currents, Lagrangian products, eddy atlases… Overcoming all these challenges will provide major upgrades of Sea Level products to better fulfill user needs.

  12. Highly efficient phosphorescent blue and white organic light-emitting devices with simplified architectures

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chih-Hao, E-mail: chc@saturn.yzu.edu.tw [Department of Photonics Engineering, Yuan Ze University, Chung-Li, Taiwan 32003 (China); Ding, Yong-Shung; Hsieh, Po-Wei; Chang, Chien-Ping; Lin, Wei-Chieh [Department of Photonics Engineering, Yuan Ze University, Chung-Li, Taiwan 32003 (China); Chang, Hsin-Hua, E-mail: hhua3@mail.vnu.edu.tw [Department of Electro-Optical Engineering, Vanung University, Chung-Li, Taiwan 32061 (China)

    2011-09-01

    Blue phosphorescent organic light-emitting devices (PhOLEDs) with quantum efficiency close to the theoretical maximum were achieved by utilizing a double-layer architecture. Two wide-triplet-gap materials, 1,3-bis(9-carbazolyl)benzene and 1,3,5-tri[(3-pyridyl)-phen-3-yl]benzene, were employed in the emitting and electron-transport layers respectively. The opposite carrier-transport characteristics of these two materials were leveraged to define the exciton formation zone and thus increase the probability of recombination. The efficiency at practical luminance (100 cd/m²) was as high as 20.8%, 47.7 cd/A and 31.2 lm/W, respectively. Furthermore, based on the design concept of this simplified architecture, efficient warmish-white PhOLEDs were developed. Such two-component white organic light-emitting devices exhibited rather stable colors over a wide brightness range and yielded electroluminescence efficiencies of 15.3%, 33.3 cd/A, and 22.7 lm/W in the forward directions.

  13. Highly Conductive 3D Segregated Graphene Architecture in Polypropylene Composite with Efficient EMI Shielding

    Directory of Open Access Journals (Sweden)

    Fakhr E. Alam

    2017-12-01

    Full Text Available The extensive use of electronic equipment in modern life causes potential electromagnetic pollution harmful to human health. Therefore, it is of great significance to enhance the electrical conductivity of polymers, which are widely used in electronic components, to screen out electromagnetic waves. The fabrication of graphene/polymer composites has attracted much attention in recent years due to the excellent electrical properties of graphene. However, the uniform distribution of graphene nanoplatelets (GNPs) in a non-polar polymer matrix like polypropylene (PP) still remains a challenge, resulting in the limited improvement of electrical conductivity of PP-based composites achieved to date. Here, we propose a single-step approach to prepare GNPs/PP composites embedded with a segregated architecture of GNPs by coating PP particles with GNPs, followed by hot-pressing. As a result, the electrical conductivity of 10 wt % GNPs-loaded composites reaches 10.86 S·cm−1, which is ≈7 times higher than that of the composites made by the melt-blending process. Accordingly, a high electromagnetic interference shielding effectiveness (EMI SE) of 19.3 dB can be achieved. Our method is green, low-cost, and scalable to develop 3D GNPs architecture in a polymer matrix, providing a versatile composite material suitable for use in electronics, aerospace, and automotive industries.

  14. Highly efficient phosphorescent blue and white organic light-emitting devices with simplified architectures

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Ding, Yong-Shung; Hsieh, Po-Wei; Chang, Chien-Ping; Lin, Wei-Chieh; Chang, Hsin-Hua

    2011-01-01

    Blue phosphorescent organic light-emitting devices (PhOLEDs) with quantum efficiency close to the theoretical maximum were achieved by utilizing a double-layer architecture. Two wide-triplet-gap materials, 1,3-bis(9-carbazolyl)benzene and 1,3,5-tri[(3-pyridyl)-phen-3-yl]benzene, were employed in the emitting and electron-transport layers respectively. The opposite carrier-transport characteristics of these two materials were leveraged to define the exciton formation zone and thus increase the probability of recombination. The efficiency at practical luminance (100 cd/m²) was as high as 20.8%, 47.7 cd/A and 31.2 lm/W, respectively. Furthermore, based on the design concept of this simplified architecture, efficient warmish-white PhOLEDs were developed. Such two-component white organic light-emitting devices exhibited rather stable colors over a wide brightness range and yielded electroluminescence efficiencies of 15.3%, 33.3 cd/A, and 22.7 lm/W in the forward directions.

  15. Unprecedented high-resolution view of bacterial operon architecture revealed by RNA sequencing.

    Science.gov (United States)

    Conway, Tyrrell; Creecy, James P; Maddox, Scott M; Grissom, Joe E; Conkle, Trevor L; Shadid, Tyler M; Teramoto, Jun; San Miguel, Phillip; Shimada, Tomohiro; Ishihama, Akira; Mori, Hirotada; Wanner, Barry L

    2014-07-08

    We analyzed the transcriptome of Escherichia coli K-12 by strand-specific RNA sequencing at single-nucleotide resolution during steady-state (logarithmic-phase) growth and upon entry into stationary phase in glucose minimal medium. To generate high-resolution transcriptome maps, we developed an organizational schema which showed that in practice only three features are required to define operon architecture: the promoter, terminator, and deep RNA sequence read coverage. We precisely annotated 2,122 promoters and 1,774 terminators, defining 1,510 operons with an average of 1.98 genes per operon. Our analyses revealed an unprecedented view of E. coli operon architecture. A large proportion (36%) of operons are complex with internal promoters or terminators that generate multiple transcription units. For 43% of operons, we observed differential expression of polycistronic genes, despite being in the same operons, indicating that E. coli operon architecture allows fine-tuning of gene expression. We found that 276 of 370 convergent operons terminate inefficiently, generating complementary 3' transcript ends which overlap on average by 286 nucleotides, and 136 of 388 divergent operons have promoters arranged such that their 5' ends overlap on average by 168 nucleotides. We found 89 antisense transcripts of 397-nucleotide average length, 7 unannotated transcripts within intergenic regions, and 18 sense transcripts that completely overlap operons on the opposite strand. Of 519 overlapping transcripts, 75% correspond to sequences that are highly conserved in E. coli (>50 genomes). Our data extend recent studies showing unexpected transcriptome complexity in several bacteria and suggest that antisense RNA regulation is widespread. Importance: We precisely mapped the 5' and 3' ends of RNA transcripts across the E. coli K-12 genome by using a single-nucleotide analytical approach. Our resulting high-resolution transcriptome maps show that ca. one-third of E. coli operons are complex, with internal promoters or terminators that generate multiple transcription units.
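
    The three-feature schema lends itself to a very small operon-calling routine: pair each promoter with the next terminator downstream and require contiguous read coverage in between. The sketch below is an invented toy, not the authors' pipeline; the positions, coverage threshold and all names are assumptions.

```python
# Toy operon caller from the three features named above: promoters,
# terminators, and per-base read coverage.

def call_operons(promoters, terminators, coverage, min_cov=10):
    """promoters/terminators: sorted genome positions; coverage: per-base."""
    operons = []
    for p in promoters:
        t = next((t for t in terminators if t > p), None)  # next terminator
        if t is None:
            continue
        if min(coverage[p:t]) >= min_cov:   # transcribed end to end
            operons.append((p, t))
    return operons

cov = [0] * 20 + [25] * 60 + [0] * 20       # one transcribed block
print(call_operons(promoters=[20], terminators=[15, 80], coverage=cov))
# -> [(20, 80)]
```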

  16. Assessment of studies and researches on warehousing - High-level and intermediate-level-long-lived radioactive wastes - December 2012

    International Nuclear Information System (INIS)

    2013-01-01

    This large report first presents the approach adopted for the study and research on the warehousing of high-level and intermediate-level-long-lived radioactive wastes. It outlines how reversible storage and warehousing are complementary, discusses the lessons learned from researches performed by the CEA on long duration warehousing, presents the framework of studies and researches performed since 2006, and presents the scientific and technical content of studies and researches (warehousing need analysis, search for technical options providing complementarity with storage, extension or creation of warehousing installations). The second part addresses high-level and intermediate-level-long-lived radioactive waste parcels, indicating their origins and quantities. The third part proposes an analysis of warehousing capacities: existing capacities, French industrial experience in waste parcel warehousing, foreign experience in waste warehousing. The fourth part addresses reversible storage in deep geological formation: storage safety functions, storage reversibility, storage parcels, storage architecture, draft chronology. The fifth part proposes an inventory of warehousing needs in terms of additional capacities for both types of waste (high-level and intermediate-level-long-lived), and discusses warehousing functionalities and safety objectives. The sixth and seventh parts propose a detailed overview of design options for warehousing installations, respectively for high-level and for intermediate-level-long-lived waste parcels: main technical issues, feasibility studies of different concepts or architecture shapes, results of previous studies and introduction to studies performed since 2011, possible evolutions of the HA1, HA2 and MAVL concepts. The eighth chapter reports a phenomenological analysis of warehousing and the optimisation of material selection and construction arrangements. The last part discusses the application of these researches to the extension of existing warehousing installations.

  17. Cermets for high level waste containment

    International Nuclear Information System (INIS)

    Aaron, W.S.; Quinby, T.C.; Kobisk, E.H.

    1978-01-01

    Cermet materials are currently under investigation as an alternative for the primary containment of high level wastes. The cermet in this study is an iron-nickel base metal matrix containing uniformly dispersed, micron-size fission product oxides, aluminosilicates, and titanates. Cermets possess high thermal conductivity, and typical waste loadings of 70 wt %, volume reduction factors of 2 to 200, and low processing volatility losses have been realized. Preliminary leach studies indicate a leach resistance comparable to other candidate waste forms; however, more quantitative data are required. Actual waste studies have begun on NFS Acid Thorex, SRP dried sludge and fresh, unneutralized SRP process wastes

  18. Timing of High-level Waste Disposal

    International Nuclear Information System (INIS)

    2008-01-01

    This study identifies key factors influencing the timing of high-level waste (HLW) disposal and examines how social acceptability, technical soundness, environmental responsibility and economic feasibility impact on national strategies for HLW management and disposal. Based on case study analyses, it also presents the strategic approaches adopted in a number of national policies to address public concerns and civil society requirements regarding long-term stewardship of high-level radioactive waste. The findings and conclusions of the study confirm the importance of informing all stakeholders and involving them in the decision-making process in order to implement HLW disposal strategies successfully. This study will be of considerable interest to nuclear energy policy makers and analysts as well as to experts in the area of radioactive waste management and disposal. (author)

  19. High-Level Waste Melter Study Report

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Joseph M.; Bickford, Dennis F.; Day, Delbert E.; Kim, Dong-Sang; Lambert, Steven L.; Marra, Sharon L.; Peeler, David K.; Strachan, Denis M.; Triplett, Mark B.; Vienna, John D.; Wittman, Richard S.

    2001-07-13

    At the Hanford Site in Richland, Washington, the path to site cleanup involves vitrification of the majority of the wastes that currently reside in large underground tanks. A Joule-heated glass melter is the equipment of choice for vitrifying the high-level fraction of these wastes. Even though this technology has general national and international acceptance, opportunities may exist to improve or change the technology to reduce the enormous cost of accomplishing the mission of site cleanup. Consequently, the U.S. Department of Energy requested the staff of the Tanks Focus Area to review immobilization technologies, waste forms, and modifications to requirements for solidification of the high-level waste fraction at Hanford to determine what aspects could affect cost reductions with reasonable long-term risk. The results of this study are summarized in this report.

  20. High-level radioactive wastes. Supplement 1

    Energy Technology Data Exchange (ETDEWEB)

    McLaren, L.H. (ed.)

    1984-09-01

    This bibliography contains information on high-level radioactive wastes included in the Department of Energy's Energy Data Base from August 1982 through December 1983. These citations are to research reports, journal articles, books, patents, theses, and conference papers from worldwide sources. Five indexes, each preceded by a brief description, are provided: Corporate Author, Personal Author, Subject, Contract Number, and Report Number. 1452 citations.

  1. Decommissioning high-level waste surface facilities

    International Nuclear Information System (INIS)

    1978-04-01

    The protective storage, entombment and dismantlement options for decommissioning a High-Level Waste Surface Facility (HLWSF) were investigated. A reference conceptual design for the facility was developed based on the designs of similar facilities. State-of-the-art decommissioning technologies were identified. Program plans and cost estimates for decommissioning the reference conceptual designs were developed. Good engineering design concepts were identified on the basis of this work

  2. The ALICE Dimuon Spectrometer High Level Trigger

    CERN Document Server

    Becker, B; Cicalo, Corrado; Das, Indranil; de Vaux, Gareth; Fearick, Roger; Lindenstruth, Volker; Marras, Davide; Sanyal, Abhijit; Siddhanta, Sabyasachi; Staley, Florent; Steinbeck, Timm; Szostak, Artur; Usai, Gianluca; Vilakazi, Zeblon

    2009-01-01

    The ALICE Dimuon Spectrometer High Level Trigger (dHLT) is an on-line processing stage whose primary function is to select interesting events that contain distinct physics signals from heavy resonance decays such as J/ψ and ϒ particles, amidst unwanted background events. It forms part of the High Level Trigger of the ALICE experiment, whose goal is to reduce the large data rate of about 25 GB/s from the ALICE detectors by an order of magnitude, without losing interesting physics events. The dHLT has been implemented as a software trigger within a high performance and fault tolerant data transportation framework, which is run on a large cluster of commodity compute nodes. To reach the required processing speeds, the system is built as a concurrent system with a hierarchy of processing steps. The main algorithms perform partial event reconstruction, starting with hit reconstruction on the level of the raw data received from the spectrometer. Then a tracking algorithm finds track candidates from the recon...

  3. CVISN operational and architectural compatibility handbook (COACH). Part 1, Operational concept and top-level design checklists

    Science.gov (United States)

    1999-04-22

    The CVISN Operational and Architectural Compatibility Handbook (COACH) provides a comprehensive checklist of what is required to conform with the Commercial Vehicle Information Systems and Networks (CVISN) operational concepts and architecture. It is...

  4. Performance of the CMS High Level Trigger

    CERN Document Server

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012 and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  5. New Architecture of Optical Interconnect for High-Speed Optical Computerized Data Networks (Nonlinear Response)

    Directory of Open Access Journals (Sweden)

    El-Sayed A. El-Badawy

    2008-02-01

    Full Text Available Although research into the use of optics in computers has increased in the last and current decades, the fact remains that electronics is still superior to optics in almost every way. Research into the use of optics at this stage mirrors the research into electronics after the 2nd World War. The advantages of using fiber optics over wiring are the same as the argument for using optics over electronics in computers. Even though totally optical computers are now a reality, computers that combine both electronics and optics, electro-optic hybrids, have been in use for some time. In the present paper, an architecture of an optical interconnect is built on the basis of four Vertical-Cavity Surface-Emitting Laser Diodes (VCSELDs) and two optical links, where thermal effects of both the diodes and the links are included. Nonlinear relations are correlated to investigate the power-current and the voltage-current dependences of the four devices. The good performance (high speed) of the interconnect is deeply and parametrically investigated under wide ranges of the affecting parameters. The high-speed performance is assessed through three different measures, namely the device 3-dB bandwidth, the link dispersion characteristics, and the transmitted (soliton) bit rate. Eight combinations are investigated; each possesses its own characteristics. The best architecture is the one composed of the VCSELD operating at 850 nm and the silica fiber, whatever the operating conditions. This combination possesses the largest device 3-dB bandwidth, the largest link bandwidth and the largest soliton transmitted bit rate. The increase of the ambient temperature reduces the high-speed performance of the interconnect.

  6. Accuracy Test of Software Architecture Compliance Checking Tools : Test Instruction

    NARCIS (Netherlands)

    Prof.dr. S. Brinkkemper; Dr. Leo Pruijt; C. Köppe; J.M.E.M. van der Werf

    2015-01-01

    Author supplied: "Abstract Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule violating dependencies

  7. The high level vibration test program

    International Nuclear Information System (INIS)

    Hofmayer, C.H.; Curreri, J.R.; Park, Y.J.; Kato, W.Y.; Kawakami, S.

    1989-01-01

    As part of cooperative agreements between the US and Japan, tests have been performed on the seismic vibration table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center (NUPEC) in Japan. The objective of the test program was to use the NUPEC vibration table to drive large diameter nuclear power piping to substantial plastic strain with an earthquake excitation and to compare the results with state-of-the-art analysis of the problem. The test model was subjected to a maximum acceleration well beyond what nuclear power plants are designed to withstand. A modified earthquake excitation was applied and the excitation level was increased carefully to minimize the cumulative fatigue damage due to the intermediate level excitations. Since the piping was pressurized, and the high level earthquake excitation was repeated several times, it was possible to investigate the effects of ratchetting and fatigue as well. Elastic and inelastic seismic response behavior of the test model was measured in a number of test runs with an increasing excitation input level up to the limit of the vibration table. In the maximum input condition, large dynamic plastic strains were obtained in the piping. Crack initiation was detected following the second maximum excitation run. Crack growth was carefully monitored during the next two additional maximum excitation runs. The final test resulted in a maximum crack depth of approximately 94% of the wall thickness. The HLVT (high level vibration test) program has enhanced understanding of the behavior of piping systems under severe earthquake loading. As in other tests to failure of piping components, it has demonstrated significant seismic margin in nuclear power plant piping

  8. Ramifications of defining high-level waste

    International Nuclear Information System (INIS)

    Wood, D.E.; Campbell, M.H.; Shupe, M.W.

    1987-01-01

    The Nuclear Regulatory Commission (NRC) is considering rule making to provide a concentration-based definition of high-level waste (HLW) under authority derived from the Nuclear Waste Policy Act (NWPA) of 1982 and the Low Level Waste Policy Amendments Act of 1985. The Department of Energy (DOE), which has the responsibility to dispose of certain kinds of commercial waste, is supporting development of a risk-based classification system by the Oak Ridge National Laboratory to assist in developing and implementing the NRC rule. The system is two dimensional, with the axes based on the phrases "highly radioactive" and "requires permanent isolation" in the definition of HLW in the NWPA. Defining HLW will reduce the ambiguity in the present source-based definition by providing concentration limits to establish which materials are to be called HLW. The system allows the possibility of greater-confinement disposal for some wastes which do not require the degree of isolation provided by a repository. The definition of HLW will provide a firm basis for waste processing options which involve partitioning of waste into a high-activity stream for repository disposal, and a low-activity stream for disposal elsewhere. Several possible classification systems have been derived and the characteristics of each are discussed. The Defense High Level Waste Technology Lead Office at DOE - Richland Operations Office, supported by Rockwell Hanford Operations, has coordinated reviews of the ORNL work by a technical peer review group and other DOE offices. The reviews produced several recommendations and identified several issues to be addressed in the NRC rule making. 10 references, 3 figures

  9. NEBULAS A High Performance Data-Driven Event-Building Architecture based on an Asynchronous Self-Routing Packet-Switching Network

    CERN Multimedia

    Costa, M; Letheren, M; Djidi, K; Gustafsson, L; Lazraq, T; Minerskjold, M; Tenhunen, H; Manabe, A; Nomachi, M; Watase, Y

    2002-01-01

    RD31: The project is evaluating a new approach to event building for level-two and level-three processor farms at high rate experiments. It is based on the use of commercial switching fabrics to replace the traditional bus-based architectures used in most previous data acquisition systems. Switching fabrics permit the construction of parallel, expandable, hardware-driven event builders that can deliver higher aggregate throughput than the bus-based architectures. A standard industrial switching fabric technology is being evaluated. It is based on Asynchronous Transfer Mode (ATM) packet-switching network technology. Commercial, expandable ATM switching fabrics and processor interfaces, now being developed for the future Broadband ISDN infrastructure, could form the basis of an implementation. The goals of the project are to demonstrate the viability of this approach, to evaluate the trade-offs involved in make versus buy options, to study the interfacing of the physics frontend data buffers to such a fabric, a...

  10. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John; Lee, Jon; Margulies, Susan

    2010-01-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.
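
    One ingredient of such an approach can be sketched in a few lines: an exact (rational-arithmetic) matrix multiply whose rows are farmed out to parallel workers. This is a loose illustration of the idea, not the authors' implementation; the function names and the row-wise split are assumptions made here.

```python
# Exact high-precision matrix multiply with a simple row-parallel split.

from fractions import Fraction
from multiprocessing import Pool

def row_times_matrix(args):
    row, B = args
    # dot product of one row of A with every column of B, exactly
    return [sum(a * b for a, b in zip(row, col)) for col in zip(*B)]

def exact_matmul(A, B, workers=4):
    with Pool(workers) as pool:
        return pool.map(row_times_matrix, [(row, B) for row in A])

if __name__ == "__main__":
    A = [[Fraction(1, 3), Fraction(2, 7)], [Fraction(0), Fraction(1)]]
    B = [[Fraction(3), Fraction(1)], [Fraction(7, 2), Fraction(5)]]
    print(exact_matmul(A, B))   # exact: no floating-point rounding anywhere
```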

  11. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John

    2010-06-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.

  12. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    Directory of Open Access Journals (Sweden)

    Tokareva Victoria

    2018-01-01

    Full Text Available New generation medicine demands a better quality of analysis, increasing the amount of data collected during checkups and simultaneously decreasing the invasiveness of procedures. Thus it becomes urgent not only to develop advanced modern hardware, but also to implement special software infrastructure for using it in everyday clinical practice, the so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task for today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to technical specifications of modern medical imaging hardware, as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
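
    The MapReduce reconstruction step can be pictured with a pure-Python stand-in for a Hadoop job: mappers transform image tiles independently and the reducer reassembles the slice in key order. The tile contents, the doubling "filter" and all names below are illustrative assumptions, not the paper's pipeline.

```python
# Conceptual MapReduce sketch for tile-wise image reconstruction.

def map_tile(tile_id, raw_tile):
    processed = [px * 2 for px in raw_tile]       # stand-in for filtering
    return (tile_id, processed)

def reduce_tiles(mapped):
    image = []
    for tile_id, tile in sorted(mapped):          # shuffle/sort by key
        image.extend(tile)
    return image

raw = {0: [1, 2], 1: [3, 4], 2: [5, 6]}           # tiles of one slice
mapped = [map_tile(k, v) for k, v in raw.items()] # map phase (parallelizable)
print(reduce_tiles(mapped))                       # -> [2, 4, 6, 8, 10, 12]
```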

  13. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    Science.gov (United States)

    Tokareva, Victoria

    2018-04-01

    New generation medicine demands a better quality of analysis, increasing the amount of data collected during checkups and simultaneously decreasing the invasiveness of procedures. Thus it becomes urgent not only to develop advanced modern hardware, but also to implement special software infrastructure for using it in everyday clinical practice, the so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task for today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to technical specifications of modern medical imaging hardware, as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.

  14. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
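
    The access pattern in the claim can be emulated with NumPy: one operand row is kept as a vector, single elements of the other operand are splatted (replicated) across a register-like vector, and partial products are accumulated with multiply-adds. The sketch below is a behavioral illustration only; the function and variable names are invented.

```python
# Behavioral sketch of load-and-splat matrix multiplication: splat one
# scalar across a vector register and multiply-add against a loaded row.

import numpy as np

def splat_matmul(A, B):
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(m):
        acc = np.zeros(n)                   # target vector register
        for p in range(k):
            splat = np.full(n, A[i, p])     # load-and-splat one element
            acc += splat * B[p, :]          # vector multiply-add
        C[i] = acc                          # accumulated partial products
    return C

A, B = np.arange(6).reshape(2, 3), np.arange(12).reshape(3, 4)
assert np.allclose(splat_matmul(A, B), A @ B)
```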

  15. A simple-architecture fibered transmission system for dissemination of high stability 100 MHz signals

    Science.gov (United States)

    Bakir, A.; Rocher, C.; Maréchal, B.; Bigler, E.; Boudot, R.; Kersalé, Y.; Millo, J.

    2018-05-01

    We report on the development of a simple-architecture fiber-based frequency distribution system used to transfer 100 MHz signals with high frequency stability. This work is focused on the emitter and the receiver performances that allow the transmission of the radio-frequency signal over an optical fiber. The system exhibits a residual fractional frequency stability of 1 × 10⁻¹⁴ at 1 s integration time and in the low 10⁻¹⁶ range after 100 s. These performances are suitable to transfer the signal of frequency references such as those of a state-of-the-art hydrogen maser without any phase noise compensation scheme. As an application, we demonstrate the dissemination of such a signal through a 100 m long optical fiber without any degradation. The proposed setup could be easily extended for operating frequencies in the 10 MHz-1 GHz range.
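
    To put the quoted stability in perspective, a fractional stability of 1 × 10⁻¹⁴ at 1 s on a 100 MHz carrier corresponds to an absolute frequency wander of about one microhertz. The two input numbers below come from the abstract; the script is merely a worked example.

```python
# Worked arithmetic on the quoted fractional frequency stability.
f_carrier = 100e6      # carrier frequency (Hz)
sigma_y = 1e-14        # fractional frequency stability at tau = 1 s
delta_f = sigma_y * f_carrier
print(f"frequency deviation: {delta_f * 1e6:.1f} uHz")   # 1.0 uHz
```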

  16. (Invited) Wavy Channel TFT Architecture for High Performance Oxide Based Displays

    KAUST Repository

    Hanna, Amir

    2015-05-22

    We show the effectiveness of wavy channel architecture for thin film transistor application for increased output current. This specific architecture allows increased width of the device by adopting a corrugated shape of the substrate without any further real estate penalty. The performance improvement is attributed not only to the increased transistor width, but also to enhanced applied electric field in the channel due to the wavy architecture.

  17. (Invited) Wavy Channel TFT Architecture for High Performance Oxide Based Displays

    KAUST Repository

    Hanna, Amir; Hussain, Aftab M.; Hussain, Aftab M.; Ghoneim, Mohamed T.; Rojas, Jhonathan Prieto; Sevilla, Galo T.; Hussain, Muhammad Mustafa

    2015-01-01

    We show the effectiveness of wavy channel architecture for thin film transistor application for increased output current. This specific architecture allows increased width of the device by adopting a corrugated shape of the substrate without any further real estate penalty. The performance improvement is attributed not only to the increased transistor width, but also to enhanced applied electric field in the channel due to the wavy architecture.

  18. Investigation of Transformer Winding Architectures for High Voltage Capacitor Charging Applications

    DEFF Research Database (Denmark)

    Schneider, Henrik; Thummala, Prasanth; Huang, Lina

    2014-01-01

    Transformer parameters such as leakage inductance and self-capacitance are rarely calculated in advance during the design phase, because of the complexity and huge analytical error margins caused by practical winding implementation issues. Thus, choosing one transformer architecture over another ... converter used to drive a dielectric electroactive polymer based incremental actuator. The total losses due to the transformer parasitics for the best transformer architectures are reduced by more than a factor of ten compared to the worst-case transformer architectures.

  19. Holey Nanocarbon Architectures for High-Performance Lithium-Air Batteries

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposal is to develop 3-dimensional hierarchical mesoporous nanocarbon architecture using primarily our unique holey nanocarbon platforms...

  20. Intergenerational ethics of high level radioactive waste

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Kunihiko [Nagoya Univ., Graduate School of Engineering, Nagoya, Aichi (Japan); Nasu, Akiko; Maruyama, Yoshihiro [Shibaura Inst. of Tech., Tokyo (Japan)

    2003-03-01

    The validity of intergenerational ethics on the geological disposal of high level radioactive waste originating from nuclear power plants was studied. The result of the study on geological disposal technology showed that the current method of disposal can be judged to be scientifically reliable for several hundred years and the radioactivity level will be less than one tenth of the tolerable amount after 1,000 years or more. This implies that the consideration of intergenerational ethics of geological disposal is meaningless. Ethics developed in western society states that the consent of people in the future is necessary if the disposal has influence on them. Moreover, the ethics depends on generally accepted ideas in western society and preconceptions based on racism and sexism. The irrationality becomes clearer by comparing the dangers of the exhaustion of natural resources and pollution from harmful substances in a recycling society. (author)

  1. Management of high level radioactive waste

    International Nuclear Information System (INIS)

    Redon, A.; Mamelle, J.; Chambon, M.

    1977-01-01

    Worldwide reprocessing needs will reach 10,000 t/y of irradiated fuel by the mid-1980s. Several countries have planned, in their nuclear programmes, the construction of reprocessing plants with a 1,500 t/y capacity, corresponding to 50,000 MWe installed. At such a level, the solidification of the radioactive waste will become imperative. For this reason, all efforts in France have been directed towards the realization of industrial plants capable of solidifying the fission products as a glassy material. The advantages of this decision and the reasons for it are presented. The continuing development work is described, along with the conditions and methods of storing the high-level wastes prior to solidification, of the interim storage (for thermal decay), and of the ultimate disposal after solidification [fr]

  2. Intergenerational ethics of high level radioactive waste

    International Nuclear Information System (INIS)

    Takeda, Kunihiko; Nasu, Akiko; Maruyama, Yoshihiro

    2003-01-01

    The validity of intergenerational ethics on the geological disposal of high level radioactive waste originating from nuclear power plants was studied. The result of the study on geological disposal technology showed that the current method of disposal can be judged to be scientifically reliable for several hundred years and the radioactivity level will be less than one tenth of the tolerable amount after 1,000 years or more. This implies that the consideration of intergenerational ethics of geological disposal is meaningless. Ethics developed in western society states that the consent of people in the future is necessary if the disposal has influence on them. Moreover, the ethics depends on generally accepted ideas in western society and preconceptions based on racism and sexism. The irrationality becomes clearer by comparing the dangers of the exhaustion of natural resources and pollution from harmful substances in a recycling society. (author)

  3. High level waste fixation in cermet form

    International Nuclear Information System (INIS)

    Kobisk, E.H.; Aaron, W.S.; Quinby, T.C.; Ramey, D.W.

    1981-01-01

    Commercial and defense high level waste fixation in cermet form is being studied by personnel of the Isotopes Research Materials Laboratory, Solid State Division (ORNL). As a corollary to earlier research and development in forming high-density ceramic and cermet rods, disks, and other shapes using separated isotopes, similar chemical and physical processing methods have been applied to synthetic and real waste fixation. Generally, experimental products resulting from this approach have shown physical and chemical characteristics deemed suitable for long-term storage, shipping, corrosive environments, high temperature environments, high waste loading, decay heat dissipation, and radiation damage. Although leach tests are not conclusive, what little comparative data are available show cermet to withstand hydrothermal conditions in water and brine solutions. The Soxhlet leach test, using radioactive cesium as a tracer, showed that leaching of cermet was about a factor of 100 less than that of 78-68 glass. Using essentially uncooled, untreated waste, cermet fixation was found to accommodate up to 75% waste loading and yet, because of its high thermal conductivity, a monolith of 0.6 m diameter and 3.3 m length would have a maximum centerline temperature of only 29 K above the ambient value

  4. Liquid level measurement in high level nuclear waste slurries

    International Nuclear Information System (INIS)

    Weeks, G.E.; Heckendorn, F.M.; Postles, R.L.

    1990-01-01

    Accurate liquid level measurement has been a difficult problem to solve for the Defense Waste Processing Facility (DWPF). The nuclear waste sludge tends to plug or degrade most commercially available liquid-level measurement sensors. A liquid-level measurement system that meets the demanding accuracy requirements of the DWPF has been developed. The system uses a pneumatic 1:1 pressure repeater as a sensor and a computerized error correction system. 2 figs
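
    The record's sensing chain reduces to a hydrostatic relation plus a computerized correction. The Python sketch below shows the arithmetic only; the slurry density, calibration gain and offset are invented placeholder values, not DWPF parameters.

      # Minimal sketch: hydrostatic level from a 1:1 pressure repeater
      # reading, followed by a hypothetical linear error correction.
      RHO = 1250.0   # assumed slurry density, kg/m^3 (illustrative)
      G = 9.81       # gravitational acceleration, m/s^2

      def level_from_pressure(delta_p_pa, gain=1.0, offset=0.0):
          """Convert repeater differential pressure (Pa) to level (m).

          h = dP / (rho * g), followed by a gain/offset calibration
          standing in for the computerized error correction step.
          """
          raw_level = delta_p_pa / (RHO * G)
          return gain * raw_level + offset

      print(level_from_pressure(24525.0))  # ~2.0 m for the assumed density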

  5. Instrumentation of a Level-1 Track Trigger at ATLAS with Double Buffer Front-End Architecture

    CERN Document Server

    Cooper, B; The ATLAS collaboration

    2012-01-01

    The increased collision rate and pile-up produced at the HL-LHC require a substantial upgrade of the ATLAS level-1 trigger in order to maintain a broad physics reach. We show that tracking information can be used to control trigger rates, and describe a proposal for how this information can be extracted within a two-stage level-1 trigger design that has become the baseline for the HL-LHC upgrade. We demonstrate that, in terms of the communication between the external processing and the tracking detector frontends, a hardware solution is possible that fits within the latency constraints of level-1.

  6. The ARES High-level Intermediate Representation

    Energy Technology Data Exchange (ETDEWEB)

    Moss, Nicholas David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

    The LLVM intermediate representation (IR) lacks semantic constructs for depicting common high-performance operations such as parallel and concurrent execution, communication and synchronization. Currently, representing such semantics in LLVM requires either extending the intermediate form (a significant undertaking) or the use of ad hoc indirect means such as encoding them as intrinsics and/or the use of metadata constructs. In this paper we discuss work in progress exploring the design and implementation of a new compilation stage and associated high-level intermediate form that is placed between the abstract syntax tree and its lowering to LLVM's IR. This high-level representation is a superset of LLVM IR and supports the direct representation of these common parallel computing constructs, along with the infrastructure for supporting analysis and transformation passes on this representation.
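
    To make the idea of a "superset" IR concrete, the toy sketch below models a high-level parallel-for node that a lowering pass rewrites into a plain loop node (standing in for LLVM-style IR). All node names are invented for illustration; they are not the ARES constructs.

      # Toy high-level IR with an explicit parallel construct, plus a
      # lowering pass that erases it into a sequential loop node.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Compute:
          expr: str

      @dataclass
      class For:                # low-level node with an LLVM analogue
          var: str
          upper: int
          body: List[object]

      @dataclass
      class ParallelFor:        # high-level node, absent from LLVM IR
          var: str
          upper: int
          body: List[object]

      def lower(node):
          """Lower high-level parallel constructs to sequential loops."""
          if isinstance(node, ParallelFor):
              return For(node.var, node.upper, [lower(s) for s in node.body])
          if isinstance(node, For):
              return For(node.var, node.upper, [lower(s) for s in node.body])
          return node

      prog = ParallelFor("i", 1024, [Compute("a[i] = b[i] + c[i]")])
      print(lower(prog))

    Analysis passes that care about parallelism would run before lower() discards the ParallelFor nodes, which is the motivation for placing such a stage above the LLVM IR.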

  7. Exposure to unusually high indoor radon levels

    International Nuclear Information System (INIS)

    Rasheed, F.N.

    1993-01-01

    Unusually high indoor radon concentrations were reported in a small village in western Tyrol, Austria. The authors measured the seasonal course of indoor radon concentrations in 390 houses of this village: 71% of houses in winter and 33% in summer showed radon values on the ground floor above the Austrian action level of 400 Bq/m3. This proportion results in an unusually high indoor radon exposure of the population. The radon source was an 8,700-year-old rock slide of granite gneiss, the largest of the alpine crystalline rocks. It has a strong emanating power because its rocks are heavily fractured and show a slightly increased uranium content. Previous reports show increased lung cancer mortality, myeloid leukemia, kidney cancer, melanoma, and prostate cancer resulting from indoor radon exposure. However, many studies fail to provide accurate information on indoor radon concentrations, classifying them merely as low, intermediate, and high, or they record only minor increases in indoor radon concentrations. Mortality data for 1970-91 were used to calculate age- and sex-standardized mortality rates (SMR) for 51 sites of carcinoma, with the total population of Tyrol as controls. A significantly higher risk was recorded for lung cancer; the high SMR for lung cancer in female subjects is especially striking. Because the numbers were low for the other cancer sites, these were combined in one group to calculate the SMR. No significant increase in SMR was found for this group
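
    The comparison above rests on a standardized mortality ratio: observed deaths divided by the deaths expected if the reference (all-Tyrol) rates applied to the village's person-years. A minimal Python sketch with invented numbers:

      # Sketch of an age-standardized mortality ratio (SMR): observed
      # deaths over expected deaths under reference rates. All person-
      # year counts and rates below are hypothetical.
      def smr(observed, person_years, reference_rates):
          """person_years and reference_rates are per age stratum."""
          expected = sum(py * rate
                         for py, rate in zip(person_years, reference_rates))
          return observed / expected

      person_years = [12000.0, 9000.0, 4000.0]     # hypothetical strata
      reference_rates = [0.0005, 0.002, 0.008]     # deaths per person-year
      print(smr(observed=80, person_years=person_years,
                reference_rates=reference_rates))  # > 1 means excess risk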

  8. Technetium Chemistry in High-Level Waste

    International Nuclear Information System (INIS)

    Hess, Nancy J.

    2006-01-01

    Tc contamination is found within the DOE complex at those sites whose mission involved extraction of plutonium from irradiated uranium fuel or isotopic enrichment of uranium. At the Hanford Site, chemical separations and extraction processes generated large amounts of high-level and transuranic wastes that are currently stored in underground High Level Waste (HLW) tanks. The chemistry of the HLW in any given tank is greatly complicated by repeated efforts to reduce volume and recover isotopes, which ultimately mixed waste streams from different processes. As a result, the chemistry and the fate of Tc in HLW tanks are not well understood. This lack of understanding has been made evident in the failed efforts to leach Tc from sludge and to remove Tc from supernatants prior to immobilization. Although recent interest in Tc chemistry has shifted from pretreatment chemistry to waste residuals, both needs are served by a fundamental understanding of Tc chemistry

  9. Processing vessel for high level radioactive wastes

    International Nuclear Information System (INIS)

    Maekawa, Hiromichi

    1998-01-01

    Upon transferring an overpack containing sealed canisters of high level radioactive wastes and burying it in an underground processing hole, an outer shell vessel comprising a steel plate, to be fitted and contained in the processing hole, is formed. A back-fill layer made of the earth and sand dug out when the processing hole was formed is placed on the inner circumferential wall of the outer shell vessel. A buffer layer of a predetermined thickness is formed on the inner side of the back-fill layer, and the overpack is contained in the hollow portion surrounded by this layer. The open upper portion of the hollow portion is covered with the buffer layer and the back-fill layer. Since the processing vessel, which provides shielding, is formed on the ground in advance, the state of packing can be observed. In addition, since an operator can work directly during transportation and burial of the high level radioactive wastes, remote control is no longer necessary. (T.M.)

  10. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    Directory of Open Access Journals (Sweden)

    Md Selim Hossain

    Full Text Available In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for the group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for the key sizes 233 and 163 bits. For the group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.
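
    For orientation, the sketch below shows the double-and-add loop that underlies any ECPM, written in affine coordinates over a toy prime field. The record's design instead uses NIST binary fields, Jacobian projective coordinates and a combined doubling/addition (PDPA) hardware unit, so this is only the algorithmic skeleton, with toy curve parameters.

      # Double-and-add scalar multiplication on y^2 = x^3 + 2x + 3
      # over GF(97); toy parameters for illustration only.
      P_MOD, A = 97, 2

      def inv(x):
          return pow(x, P_MOD - 2, P_MOD)    # Fermat inverse mod prime

      def add(P, Q):
          """Group law; None represents the point at infinity."""
          if P is None: return Q
          if Q is None: return P
          (x1, y1), (x2, y2) = P, Q
          if x1 == x2 and (y1 + y2) % P_MOD == 0:
              return None                    # P + (-P) = infinity
          if P == Q:
              lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
          else:
              lam = (y2 - y1) * inv(x2 - x1) % P_MOD
          x3 = (lam * lam - x1 - x2) % P_MOD
          return (x3, (lam * (x1 - x3) - y1) % P_MOD)

      def point_mul(k, P):
          R = None
          while k:
              if k & 1:
                  R = add(R, P)              # conditional addition
              P = add(P, P)                  # doubling every iteration
              k >>= 1
          return R

      print(point_mul(13, (3, 6)))

    Projective coordinates, as used in the record, avoid the per-step field inversion that dominates this affine version, which is one reason hardware designs prefer them.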

  11. A Strategy for Architecture Design of Crystalline Perovskite Light-Emitting Diodes with High Performance.

    Science.gov (United States)

    Shi, Yifei; Wu, Wen; Dong, Hua; Li, Guangru; Xi, Kai; Divitini, Giorgio; Ran, Chenxin; Yuan, Fang; Zhang, Min; Jiao, Bo; Hou, Xun; Wu, Zhaoxin

    2018-06-01

    All present designs of perovskite light-emitting diodes (PeLEDs) stem from polymer light-emitting diodes (PLEDs) or perovskite solar cells. The optimal structure of PeLEDs can be expected to differ from PLEDs due to the different fluorescence dynamics and crystallization of perovskites and polymers. Herein, a new design strategy and conception is introduced: an "insulator-perovskite-insulator" (IPI) architecture tailored to PeLEDs. Using FAPbBr3 and MAPbBr3 as examples, it is experimentally shown that the IPI structure effectively injects charge carriers into perovskite crystals, blocks leakage currents via pinholes in the perovskite film, and avoids exciton quenching simultaneously. Consequently, for FAPbBr3, a 30-fold enhancement in the current efficiency of IPI-structured PeLEDs compared to a control device with poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) as the hole-injection layer is achieved, from 0.64 to 20.3 cd A-1, while the external quantum efficiency is increased from 0.174% to 5.53%. For CsPbBr3, compared with the control device, the current efficiency and lifetime of IPI-structured PeLEDs are improved from 1.42 cd A-1 and 4 h to 9.86 cd A-1 and 96 h. This IPI architecture represents a novel strategy for the design of light-emitting diodes based on various perovskites with high efficiencies and stabilities. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Tracking at High Level Trigger in CMS

    CERN Document Server

    Tosi, Mia

    2016-01-01

    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and ...
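
    The rate-versus-complexity trade-off described above can be pictured as a two-stage cascade: a cheap first-level cut feeding a costlier software selection. The sketch below uses invented event fields, thresholds and rates; it is not CMS selection logic.

      # Toy two-level trigger cascade: fast L1 cut, then a tighter
      # software (HLT-like) selection on the surviving events.
      import random

      def l1_accept(event):
          return event["et_sum"] > 40.0      # fast hardware-style cut

      def hlt_accept(event):
          # stand-in for track reconstruction: costlier, tighter cut
          return event["et_sum"] > 60.0 and event["n_tracks"] >= 2

      events = [{"et_sum": random.uniform(0, 100),
                 "n_tracks": random.randint(0, 10)} for _ in range(100000)]

      after_l1 = [e for e in events if l1_accept(e)]
      after_hlt = [e for e in after_l1 if hlt_accept(e)]
      print(f"L1 kept {len(after_l1)/len(events):.1%}, "
            f"HLT kept {len(after_hlt)/len(events):.1%} of all events")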

  13. Selecting a High-Quality Central Model for Sharing Architectural Knowledge

    NARCIS (Netherlands)

    Liang, Peng; Jansen, Anton; Avgeriou, Paris; Zhu, H

    2008-01-01

    In the field of software architecture, there has been a paradigm shift from describing the outcome of the architecting process to documenting Architectural Knowledge (AK), such as design decisions and rationale. To this end, a series of domain models have been proposed for defining the concepts and

  14. MOVE-Pro: a low power and high code density TTA architecture

    NARCIS (Netherlands)

    He, Y.; She, D.; Mesman, B.; Corporaal, H.

    2011-01-01

    Transport Triggered Architectures (TTAs) possess many advantages, such as modularity, flexibility, and scalability. As an exposed-datapath architecture, TTAs can effectively reduce register file (RF) pressure in both the number of accesses and the number of RF ports. However, the conventional TTAs

  15. An architecture for human-network interfaces

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1990-01-01

    Some of the issues (and their consequences) that arise when human-network interfaces (HNIs) are viewed from the perspective of people who use and develop them are examined. Target attributes of HNI architecture are presented. A high-level architecture model that supports the attributes is discussed...

  16. A First Step Towards High-Level Cost Models for the Implementation of SDRs on Multiprocessing Reconfigurable Systems

    DEFF Research Database (Denmark)

    Le Moullec, Yannick

    2011-01-01

    -In-Progress paper we introduce our set of high-level estimation models for Area-Time costs of applications mapped onto FPGA-based multiprocessing reconfigurable architectures. In particular, we suggest models for static and dynamic implementations, taking various internal and external architectural elements...... into account. We believe that such models could be used for rapidly comparing implementation alternatives at a high level of abstraction and for guiding the designer during the (pre)analysis phase of the design flow for the implementation of e.g. SDR platforms....
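
    A cost model of the kind sketched in that record can be as simple as an Area x Time product with a reconfiguration penalty for dynamic variants. The cost figures and the overhead model below are illustrative assumptions, not the paper's calibrated models.

      # Minimal Area-Time comparison of implementation alternatives on
      # an FPGA-like target; all numbers are invented for illustration.
      def area_time_cost(area_luts, exec_time_us, reconfig_time_us=0.0):
          """Area x Time product; dynamic variants pay a reconfiguration
          overhead but may reuse a smaller region."""
          return area_luts * (exec_time_us + reconfig_time_us)

      alternatives = {
          "static, fully parallel":   area_time_cost(8000, 10.0),
          "static, time-multiplexed": area_time_cost(2500, 35.0),
          "dynamic, reconfigured":    area_time_cost(2500, 35.0,
                                                     reconfig_time_us=20.0),
      }
      for name, cost in sorted(alternatives.items(), key=lambda kv: kv[1]):
          print(f"{name:26s} AT = {cost:9.0f} LUT*us")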

  17. High-performance field emission device utilizing vertically aligned carbon nanotubes-based pillar architectures

    Science.gov (United States)

    Gupta, Bipin Kumar; Kedawat, Garima; Gangwar, Amit Kumar; Nagpal, Kanika; Kashyap, Pradeep Kumar; Srivastava, Shubhda; Singh, Satbir; Kumar, Pawan; Suryawanshi, Sachin R.; Seo, Deok Min; Tripathi, Prashant; More, Mahendra A.; Srivastava, O. N.; Hahm, Myung Gwan; Late, Dattatray J.

    2018-01-01

    Vertically aligned carbon nanotube (CNT)-based pillar architectures were created on a laminated silicon oxide/silicon (SiO2/Si) wafer substrate at 775 °C by using water-assisted chemical vapor deposition under low-pressure process conditions. The lamination was carried out by sequentially depositing aluminum (Al, 10.0 nm thickness) as a barrier layer and iron (Fe, 1.5 nm thickness) as a catalyst precursor layer on a silicon wafer substrate. Scanning electron microscope (SEM) images show that the synthesized CNTs are vertically aligned and uniformly distributed with a high density. The CNTs have approximately 2-30 walls with an inner diameter of 3-8 nm. Raman spectrum analysis shows the G-band at 1580 cm-1 and the D-band at 1340 cm-1. The G-band is stronger than the D-band, which indicates that the CNTs are highly graphitized. The field emission analysis of the CNTs revealed a high field emission current density (4 mA/cm2 at 1.2 V/μm), a low turn-on field (0.6 V/μm) and a high field enhancement factor (6917), with better stability and longer lifetime. The emitter morphology results in improved field emission performance, a crucial factor for the fabrication of pillar-shaped vertically aligned CNT bundles as practical electron sources.

  18. High-performance field emission device utilizing vertically aligned carbon nanotubes-based pillar architectures

    Directory of Open Access Journals (Sweden)

    Bipin Kumar Gupta

    2018-01-01

    Full Text Available Vertically aligned carbon nanotube (CNT)-based pillar architectures were created on a laminated silicon oxide/silicon (SiO2/Si) wafer substrate at 775 °C by using water-assisted chemical vapor deposition under low-pressure process conditions. The lamination was carried out by sequentially depositing aluminum (Al, 10.0 nm thickness) as a barrier layer and iron (Fe, 1.5 nm thickness) as a catalyst precursor layer on a silicon wafer substrate. Scanning electron microscope (SEM) images show that the synthesized CNTs are vertically aligned and uniformly distributed with a high density. The CNTs have approximately 2–30 walls with an inner diameter of 3–8 nm. Raman spectrum analysis shows the G-band at 1580 cm−1 and the D-band at 1340 cm−1. The G-band is stronger than the D-band, which indicates that the CNTs are highly graphitized. The field emission analysis of the CNTs revealed a high field emission current density (4 mA/cm2 at 1.2 V/μm), a low turn-on field (0.6 V/μm) and a high field enhancement factor (6917), with better stability and longer lifetime. The emitter morphology results in improved field emission performance, a crucial factor for the fabrication of pillar-shaped vertically aligned CNT bundles as practical electron sources.

  19. Characteristics of Highly Birefringent Photonic Crystal Fiber with Defected Core and Equilateral Pentagon Architecture

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2016-01-01

    Full Text Available A novel high-birefringence and nearly zero dispersion-flattened photonic crystal fiber (PCF) with an elliptical defected core (E-DC) and equilateral pentagonal architecture is designed. By applying the full-vector finite element method (FEM), the characteristics of the electric field distribution, birefringence, and chromatic dispersion of the proposed E-DC PCF are numerically investigated in detail. The simulation results reveal that the proposed PCF can realize high birefringence, ranging from the 10^-3 to the 10^-2 order of magnitude, owing to the embedded elliptical air hole in the core center. However, the existence of the elliptical air hole gives rise to an extraordinary electric field distribution, where a V-shaped notch appears and the size of the V-shaped notch varies at different operating wavelengths. Also, the mode field diameter is estimated to be about 2 μm, which implies a small effective mode area and a high nonlinear coefficient. Furthermore, the investigation of the chromatic dispersion characteristic shows that the introduction of the elliptical air hole helps control the chromatic dispersion to be negative or nearly zero flattened over a wide wavelength bandwidth.
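
    For reference, the birefringence figures quoted above follow the standard modal definitions (textbook fiber-optics relations, not taken from the record), where B is the phase birefringence of the two orthogonal fundamental modes and L_B the polarization beat length:

      B = \left| n_{\mathrm{eff}}^{x} - n_{\mathrm{eff}}^{y} \right|,
      \qquad
      L_B = \frac{\lambda}{B}

    With B between 10^-3 and 10^-2, the beat length at λ = 1.55 μm would fall roughly between 0.16 mm and 1.6 mm.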

  20. Building highly available control system applications with Advanced Telecom Computing Architecture and open standards

    International Nuclear Information System (INIS)

    Kazakov, Artem; Furukawa, Kazuro

    2010-01-01

    Requirements for modern and future control systems for large projects like the International Linear Collider demand high availability for control system components. Recently the telecom industry produced an open hardware specification, the Advanced Telecom Computing Architecture (ATCA), aimed at better reliability, availability and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth, proved to be stable, and is well represented by a number of vendors; it is an industry standard for highly available systems. Complementing the hardware, the Service Availability Forum (SAF), a consortium of leading communications and computing companies, defines a set of specifications, such as the Hardware Platform Interface and the Application Interface Specification, that describe the interaction between hardware and software and give an extensive description of highly available systems, services and their interfaces. Originally aimed at telecom applications, these specifications can be used for accelerator controls software as well. This study describes the benefits of using these specifications and their possible adoption in accelerator control systems. It is demonstrated how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, which made it possible to utilize the benefits of the ATCA platform.

  1. Corrugation Architecture Enabled Ultraflexible Wafer-Scale High-Efficiency Monocrystalline Silicon Solar Cell

    KAUST Repository

    Bahabry, Rabab R.

    2018-01-02

    Advanced classes of modern applications require a new generation of versatile solar cells showcasing extreme mechanical resilience, large scale, low cost, and excellent power conversion efficiency. Conventional crystalline silicon-based solar cells offer one of the most efficient power sources, but a key challenge remains to attain mechanical resilience while preserving electrical performance. A complementary metal oxide semiconductor-based integration strategy is presented in which a corrugation architecture enables ultraflexible and low-cost solar cell modules from bulk monocrystalline large-scale (127 × 127 cm) silicon solar wafers with a 17% power conversion efficiency. This periodic corrugated array benefits from an interchangeable solar cell segmentation scheme which preserves the active silicon thickness of 240 μm and achieves flexibility via interdigitated back contacts. These cells can reversibly withstand high mechanical stress and can be deformed into zigzag and bifacial modules. These corrugated silicon-based solar cells offer ultraflexibility with high stability over 1000 bending cycles, including convex and concave bending, broadening the application spectrum. Finally, a smallest bending radius of curvature below 140 μm is shown for the back contacts that carry the solar cell segments.
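
    A rough way to see why only thin layers tolerate such a small radius is the textbook bending-strain estimate for a film of thickness t bent to a radius R (a generic relation, not a formula from the record):

      \varepsilon \approx \frac{t}{2R}

    Under the assumption of a back-contact stack on the order of 1 μm thick, R = 140 μm gives ε ≈ 0.4%; the 240 μm active silicon, by contrast, owes its flexibility to the corrugation segmentation rather than to bending of the bulk.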

  2. Corrugation Architecture Enabled Ultraflexible Wafer-Scale High-Efficiency Monocrystalline Silicon Solar Cell

    KAUST Repository

    Bahabry, Rabab R.; Kutbee, Arwa T.; Khan, Sherjeel M.; Sepulveda, Adrian C.; Wicaksono, Irmandy; Nour, Maha A.; Wehbe, Nimer; Almislem, Amani Saleh Saad; Ghoneim, Mohamed T.; Sevilla, Galo T.; Syed, Ahad; Shaikh, Sohail F.; Hussain, Muhammad Mustafa

    2018-01-01

    Advanced classes of modern applications require a new generation of versatile solar cells showcasing extreme mechanical resilience, large scale, low cost, and excellent power conversion efficiency. Conventional crystalline silicon-based solar cells offer one of the most efficient power sources, but a key challenge remains to attain mechanical resilience while preserving electrical performance. A complementary metal oxide semiconductor-based integration strategy is presented in which a corrugation architecture enables ultraflexible and low-cost solar cell modules from bulk monocrystalline large-scale (127 × 127 cm) silicon solar wafers with a 17% power conversion efficiency. This periodic corrugated array benefits from an interchangeable solar cell segmentation scheme which preserves the active silicon thickness of 240 μm and achieves flexibility via interdigitated back contacts. These cells can reversibly withstand high mechanical stress and can be deformed into zigzag and bifacial modules. These corrugated silicon-based solar cells offer ultraflexibility with high stability over 1000 bending cycles, including convex and concave bending, broadening the application spectrum. Finally, a smallest bending radius of curvature below 140 μm is shown for the back contacts that carry the solar cell segments.

  3. Beam size measurement at high radiation levels

    International Nuclear Information System (INIS)

    Decker, F.J.

    1991-05-01

    At the end of the Stanford Linear Accelerator the high energy electron and positron beams are quite small. Beam sizes below 100 μm (σ) as well as the transverse distribution, especially tails, have to be determined. Fluorescent screens observed by TV cameras provide a quick two-dimensional picture, which can be analyzed by digitization. For running the SLAC Linear Collider (SLC) with low backgrounds at the interaction point, collimators are installed at the end of the linac. This causes a high radiation level, so that nearby cameras die within two weeks and so-called "radiation hard" cameras within two months. Therefore an optical system has been built which guides a 5 mm wide picture with a resolution of about 30 μm over a distance of 12 m to an accessible region. The overall resolution is limited by the screen thickness, optical diffraction and the line resolution of the camera; vibration, chromatic effects and air fluctuations play a much smaller role. The pictures are colored to give quick information about the beam current, size and tails. Besides the emittance, more information about the tail size and betatron phase is obtained by using four screens. This will help to develop tail compensation schemes to decrease the emittance growth in the linac at high currents. 4 refs., 2 figs
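
    The quantity extracted from such a digitized screen picture is essentially an intensity-weighted width. The sketch below fits nothing; it computes the centroid and σ from moments of a synthetic, noisy profile (all values invented).

      # Beam size (sigma) from intensity-weighted moments of a
      # digitized 1-D screen profile; the profile here is synthetic.
      import numpy as np

      x = np.linspace(-500, 500, 1001)                   # position, um
      profile = np.exp(-0.5 * ((x - 20.0) / 80.0) ** 2)  # true sigma = 80 um
      profile += 0.01 * np.random.rand(x.size)           # camera noise

      # crude background cut before taking moments
      weights = np.clip(profile - profile.max() * 0.05, 0, None)
      centroid = np.sum(weights * x) / np.sum(weights)
      sigma = np.sqrt(np.sum(weights * (x - centroid) ** 2) / np.sum(weights))
      print(f"centroid = {centroid:.1f} um, sigma = {sigma:.1f} um")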

  4. Parallel implementation of Gray Level Co-occurrence Matrices and Haralick texture features on cell architecture

    NARCIS (Netherlands)

    Shahbahrami, A.; Pham, T.A.; Bertels, K.L.M.

    2011-01-01

    Texture feature extraction algorithms are key functions in various image processing applications such as medical imaging, remote sensing, and content-based image retrieval. The most common way to extract texture features is the use of Gray Level Co-occurrence Matrices (GLCMs). The GLCM contains the

  5. Work level related human factors for enterprise architecture as organisational strategy

    CSIR Research Space (South Africa)

    Gilliland, S

    2015-10-01

    Full Text Available organisational strategies, such as EA, if they know what behaviour to expect from stakeholders and why people act and react in a certain way. People react differently to strategic initiatives, such as the introduction of EA, depending on their work level and how...

  6. A high level language for a high performance computer

    Science.gov (United States)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers are modifications of programming languages designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages, incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  7. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders...

  8. An agent based architecture for high-risk neonate management at neonatal intensive care unit.

    Science.gov (United States)

    Malak, Jaleh Shoshtarian; Safdari, Reza; Zeraati, Hojjat; Nayeri, Fatemeh Sadat; Mohammadzadeh, Niloofar; Farajollah, Seide Sedighe Seied

    2018-01-01

    In recent years, the use of new tools and technologies has decreased the neonatal mortality rate. Despite the positive effect of using these technologies, decisions are complex and uncertain in critical conditions, when the neonate is preterm or has a low birth weight or malformations. There is a need to automate the high-risk neonate management process by creating real-time and more precise decision support tools. The aim was to create a collaborative and real-time environment to manage neonates with critical conditions at the NICU (Neonatal Intensive Care Unit), and to overcome the weaknesses of high-risk neonate management by applying a multi-agent-based analysis and design methodology as a new solution for NICU management. This study was basic research in medical informatics method development, carried out in 2017. The requirement analysis was done by reviewing articles on NICU decision support systems; PubMed, Science Direct, and IEEE databases were searched, and only English articles published after 1990 were included. A needs assessment was also done by reviewing the extracted features and current processes in the NICU environment where the research was conducted. We analyzed the requirements and identified the main system roles (agents) and interactions through a comparative study of existing NICU decision support systems. The Universal Multi Agent Platform (UMAP) was applied to implement a prototype of our multi-agent-based high-risk neonate management architecture. Local environment agents interacted inside a container, and each container interacted with external resources, including other NICU systems and consultation centers. In the NICU container, the main identified agents were reception, monitoring, NICU registry, and outcome prediction, which interacted with human agents including nurses and physicians. Managing patients in NICU units requires online data collection, real-time collaboration, and management of many components. Multi agent systems are applied as

  9. Alveolar architecture of clear cell renal carcinomas (≤5.0 cm) show high attenuation on dynamic CT scanning

    International Nuclear Information System (INIS)

    Fujimoto, Hiroyuki; Wakao, Fumihiko; Moriyama, Noriyuki; Tobisu, Kenichi; Kakizoe, Tadao; Sakamoto, Michiie

    1999-01-01

    The aim was to establish the correlation between tumor appearance on CT and tumor histology in renal cell carcinomas. The density and attenuation patterns of 96 renal cell carcinomas, each ≤5 cm in greatest diameter, were studied by non-enhanced CT and by dynamic CT early and late after bolus injection of contrast medium. The density and attenuation patterns and the pathological maps of each tumor were individually correlated. Highly attenuated areas were present in 72 of the 96 tumors on early enhanced dynamic CT scanning. All 72 highly attenuated areas were clear cell renal cell carcinomas with alveolar architecture. The remaining 24 tumors that did not demonstrate highly attenuated foci on early enhanced scanning included three clear cell, nine granular cell, six papillary, five chromophobe and one collecting duct type. With respect to tumor architecture, all clear cell tumors of alveolar architecture demonstrated high attenuation on early enhanced scanning. Clear cell renal cell carcinomas of alveolar architecture show high attenuation on early enhanced dynamic CT scanning. A larger number of patients is indispensable for obtaining definitive results. However, these findings seem to be an important clue to the diagnosis of renal cell carcinomas as having an alveolar structure. (author)

  10. CAMAC and high-level-languages

    International Nuclear Information System (INIS)

    Degenhardt, K.H.

    1976-05-01

    A proposal for easy programming of CAMAC systems with high-level-languages (FORTRAN, RTL/2, etc.) and interpreters (BASIC, MUMTI, etc.) using a few subroutines and a LAM driver is presented. The subroutines and the LAM driver are implemented for PDP11/RSX-11M and for the CAMAC controllers DEC CA11A (branch controller), BORER type 1533A (single crate controller) and DEC CA11F (single crate controller). Mixed parallel/serial CAMAC systems employing KINETIC SYSTEMS serial driver mod. 3992 and serial crate controllers mod. 3950 are implemented for all mentioned parallel controllers, too. DMA transfers from or to CAMAC modules using non-processor-request controllers (BORER type 1542, DEC CA11FN) are available. (orig.) [de

  11. National high-level waste systems analysis

    International Nuclear Information System (INIS)

    Kristofferson, K.; O'Holleran, T.P.

    1996-01-01

    Previously, no mechanism existed that provided a systematic, interrelated view or national perspective of all high-level waste treatment and storage systems that the US Department of Energy manages. The impacts of budgetary constraints and repository availability on storage and treatment must be assessed against existing and pending negotiated milestones to gauge their effect on the overall HLW system. This assessment can give DOE a complex-wide view of the availability of waste treatment and help project the time required to prepare HLW for disposal. Facilities, throughputs, schedules, and milestones were modeled to ascertain the treatment and storage system resource requirements at the Hanford Site, Savannah River Site, Idaho National Engineering Laboratory, and West Valley Demonstration Project. The impacts of various treatment system availabilities on schedule and throughput were compared to repository readiness to determine the prudent application of resources. To assess the various impacts, the model was exercised against a number of plausible scenarios, as discussed in this paper

  12. International high-level radioactive waste repositories

    International Nuclear Information System (INIS)

    Lin, W.

    1996-01-01

    Although nuclear technologies benefit everyone, the associated nuclear wastes are a widespread and rapidly growing problem. Nuclear power plants are in operation in 25 countries, and are under construction in others. Developing countries are hungry for electricity to promote economic growth; industrialized countries are eager to export nuclear technologies and equipment. These two ingredients, combined with the rapid shrinkage of worldwide fossil fuel reserves, will increase the utilization of nuclear power. All countries utilizing nuclear power produce at least a few tens of tons of spent fuel per year. That spent fuel (and reprocessing products, if any) constitutes high-level nuclear waste. Toxicity, long half-life, and immunity to chemical degradation make such waste an almost permanent threat to human beings. This report discusses the advantages of utilizing repositories for disposal of nuclear wastes

  13. High-level waste processing and disposal

    International Nuclear Information System (INIS)

    Crandall, J.L.; Krause, H.; Sombret, C.; Uematsu, K.

    1984-11-01

    Without reprocessing, spent LWR fuel itself is generally considered an acceptable waste form. With reprocessing, borosilicate glass canisters have now gained general acceptance for waste immobilization. The current first choice for disposal is emplacement in an engineered structure in a mined cavern at a depth of 500-1000 meters. A variety of rock types are being investigated, including basalt, clay, granite, salt, shale, and volcanic tuff. This paper gives specific coverage to the national high level waste disposal plans of France, the Federal Republic of Germany, Japan and the United States. The French nuclear program assumes prompt reprocessing of its spent fuels, and France has already constructed the AVM. Two larger borosilicate glass plants are planned for a new French reprocessing plant at La Hague. France plans to hold the glass canisters in near-surface storage for a forty to sixty year cooling period and then to place them into a mined repository. The FRG and Japan also plan reprocessing for their LWR fuels. Both are currently having some fuel reprocessed by France, but both are also planning reprocessing plants which will include waste vitrification facilities. West Germany is now constructing the PAMELA Plant at Mol, Belgium to vitrify high level reprocessing wastes at the shutdown Eurochemic Plant. Japan is now operating a vitrification mockup test facility and plans a pilot plant facility at the Tokai reprocessing plant by 1990. The United States program assumes little LWR fuel reprocessing and is thus primarily aimed at direct disposal of spent fuel into mined repositories. However, the US has two borosilicate glass plants under construction to vitrify existing reprocessing wastes

  14. The impact of optimize solar radiation received on the levels and energy disposal of levels on architectural design result by using computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Rezaei, Davood; Farajzadeh Khosroshahi, Samaneh; Sadegh Falahat, Mohammad [Zanjan University (Iran, Islamic Republic of)], email: d_rezaei@znu.ac.ir, email: ronas_66@yahoo.com, email: Safalahat@yahoo.com

    2011-07-01

    In order to minimize the energy consumption of a building, it is important to make optimal use of solar energy. The aim of this paper is to introduce the use of computer modeling in the early stages of design to optimize the solar radiation received and the energy disposal of an architectural design. Computer modeling was performed on two different projects located in Los Angeles, USA, using ECOTECT software. Changes were made to the designs following analysis of the modeling results, and a subsequent analysis was carried out on the optimized designs. Results showed that computer simulation allows the designer to set the analysis criteria and improve the energy performance of a building before it is constructed; moreover, it can be used for a wide range of optimization levels. This study pointed out that computer simulation should be performed in the design stage to optimize a building's energy performance.

  15. Multi-Censor Fusion using Observation Merging with Central Level Architecture

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar; Ahmed, Zaki; Khan, M. Z.

    2011-01-01

    The use of multiple sensors typically requires the fusion of data from different types of sensors. The combined use of such data has the potential to give an efficient, high-quality and reliable estimation. Input data from different sensors allows the introduction of target attributes (target ty...

  16. High-Throughput Phenotyping and QTL Mapping Reveals the Genetic Architecture of Maize Plant Growth.

    Science.gov (United States)

    Zhang, Xuehai; Huang, Chenglong; Wu, Di; Qiao, Feng; Li, Wenqiang; Duan, Lingfeng; Wang, Ke; Xiao, Yingjie; Chen, Guoxing; Liu, Qian; Xiong, Lizhong; Yang, Wanneng; Yan, Jianbing

    2017-03-01

    With increasing demand for novel traits in crop breeding, the plant research community faces the challenge of quantitatively analyzing the structure and function of large numbers of plants. A clear goal of high-throughput phenotyping is to bridge the gap between genomics and phenomics. In this study, we quantified 106 traits from a maize (Zea mays) recombinant inbred line population (n = 167) across 16 developmental stages using an automatic phenotyping platform. Quantitative trait locus (QTL) mapping with a high-density genetic linkage map, including 2,496 recombinant bins, was used to uncover the genetic basis of these complex agronomic traits, and 988 QTLs were identified for all investigated traits, including three QTL hotspots. Biomass accumulation and final yield were predicted using a combination of dissected traits in the early growth stage. These results reveal the dynamic genetic architecture of maize plant growth and enhance ideotype-based maize breeding and prediction. © 2017 American Society of Plant Biologists. All Rights Reserved.
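
    In spirit, single-marker QTL scanning is a regression of trait on genotype at each bin. The toy sketch below simulates a recombinant inbred line population and scores markers by variance explained; real analyses, as in the record, use dense linkage maps and LOD-based interval mapping.

      # Toy single-marker QTL scan on simulated RIL data.
      import numpy as np

      rng = np.random.default_rng(0)
      n_lines, n_markers = 167, 200
      genotypes = rng.integers(0, 2, size=(n_lines, n_markers))  # RIL: 0/1
      qtl = 42                                    # planted causal marker
      trait = 2.0 * genotypes[:, qtl] + rng.normal(0, 1.0, n_lines)

      scores = []
      for m in range(n_markers):
          r = np.corrcoef(genotypes[:, m], trait)[0, 1]
          scores.append(r * r)                    # variance explained

      best = int(np.argmax(scores))
      print(f"strongest marker: {best} (true QTL {qtl}), "
            f"R^2 = {scores[best]:.2f}")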

  17. Ultra-high resolution flat-panel volume CT: fundamental principles, design architecture, and system characterization

    International Nuclear Information System (INIS)

    Gupta, Rajiv; Brady, Tom; Grasruck, Michael; Suess, Christoph; Schmidt, Bernhard; Stierstorfer, Karl; Popescu, Stefan; Flohr, Thomas; Bartling, Soenke H.

    2006-01-01

    Digital flat-panel-based volume CT (VCT) represents a unique design capable of ultra-high spatial resolution, direct volumetric imaging, and dynamic CT scanning. This innovation, when fully developed, has the promise of opening a unique window on human anatomy and physiology. For example, the volumetric coverage offered by this technology enables us to observe the perfusion of an entire organ, such as the brain, liver, or kidney, tomographically (e.g., after a transplant or ischemic event). By virtue of its higher resolution, one can directly visualize the trabecular structure of bone. This paper describes the basic design architecture of VCT. Three key technical challenges, viz., scatter correction, dynamic range extension, and temporal resolution improvement, must be addressed for successful implementation of a VCT scanner. How these issues are solved in a VCT prototype and the modifications necessary to enable ultra-high resolution volumetric scanning are described. The fundamental principles of scatter correction and dose reduction are illustrated with the help of an actual prototype. The image quality metrics of this prototype are characterized and compared with a multi-detector CT (MDCT). (orig.)

  18. CHEETAH: circuit-switched high-speed end-to-end transport architecture

    Science.gov (United States)

    Veeraraghavan, Malathi; Zheng, Xuan; Lee, Hyuk; Gardner, M.; Feng, Wuchun

    2003-10-01

    Leveraging the dominance of Ethernet in LANs and SONET/SDH in MANs and WANs, we propose a service called CHEETAH (Circuit-switched High-speed End-to-End Transport ArcHitecture). The service concept is to provide end hosts with high-speed, end-to-end circuit connectivity on a call-by-call shared basis, where a "circuit" consists of Ethernet segments at the ends that are mapped into Ethernet-over-SONET long-distance circuits. This paper focuses on the file-transfer application for such circuits. For this application, the CHEETAH service is proposed as an add-on to the primary Internet access service already in place for enterprise hosts. This allows an end host that is sending a file to first attempt setting up an end-to-end Ethernet/EoS circuit, and if rejected, fall back to the TCP/IP path. If the circuit setup is successful, the end host will enjoy a much shorter file-transfer delay than on the TCP/IP path. To determine the conditions under which an end host with access to the CHEETAH service should attempt circuit setup, we analyze mean file-transfer delays as a function of call blocking probability in the circuit-switched network, probability of packet loss in the IP network, round-trip times, link rates, and so on.
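
    The circuit-versus-TCP decision described above can be written as a comparison of expected delays. The delay model below, together with its parameter values, is an illustrative assumption rather than the paper's analysis.

      # When should a host attempt circuit setup? Compare the expected
      # delay with fallback against the plain TCP/IP delay.
      def expected_cheetah_delay(p_block, t_setup, t_circuit, t_tcp):
          """Mean delay when the host tries the circuit first and falls
          back to TCP/IP on call blocking."""
          return ((1 - p_block) * (t_setup + t_circuit)
                  + p_block * (t_setup + t_tcp))

      t_tcp = 12.0      # s, file transfer over the TCP/IP path (assumed)
      t_circuit = 2.0   # s, same file over the EoS circuit (assumed)
      t_setup = 0.3     # s, call-setup overhead (assumed)
      for p_block in (0.05, 0.5, 0.98):
          gain = t_tcp - expected_cheetah_delay(p_block, t_setup,
                                                t_circuit, t_tcp)
          verdict = "pays off" if gain > 0 else "does not pay off"
          print(f"P(block)={p_block:.2f}: trying the circuit {verdict} "
                f"({gain:+.2f} s)")

    The crossover point moves with the blocking probability and the setup overhead, which is why the paper studies mean delay as a function of those quantities.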

  19. Ultra-high resolution flat-panel volume CT: fundamental principles, design architecture, and system characterization

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, Rajiv; Brady, Tom [Massachusetts General Hospital, Department of Radiology, Founders House, FND-2-216, Boston, MA (United States); Grasruck, Michael; Suess, Christoph; Schmidt, Bernhard; Stierstorfer, Karl; Popescu, Stefan; Flohr, Thomas [Siemens Medical Solutions, Forchheim (Germany); Bartling, Soenke H. [Hannover Medical School, Department of Neuroradiology, Hannover (Germany)

    2006-06-15

    Digital flat-panel-based volume CT (VCT) represents a unique design capable of ultra-high spatial resolution, direct volumetric imaging, and dynamic CT scanning. This innovation, when fully developed, has the promise of opening a unique window on human anatomy and physiology. For example, the volumetric coverage offered by this technology enables us to observe the perfusion of an entire organ, such as the brain, liver, or kidney, tomographically (e.g., after a transplant or ischemic event). By virtue of its higher resolution, one can directly visualize the trabecular structure of bone. This paper describes the basic design architecture of VCT. Three key technical challenges, viz., scatter correction, dynamic range extension, and temporal resolution improvement, must be addressed for successful implementation of a VCT scanner. How these issues are solved in a VCT prototype and the modifications necessary to enable ultra-high resolution volumetric scanning are described. The fundamental principles of scatter correction and dose reduction are illustrated with the help of an actual prototype. The image quality metrics of this prototype are characterized and compared with a multi-detector CT (MDCT). (orig.)

  20. Innovative anode materials and architectured cells for high temperature steam electrolysis operation

    International Nuclear Information System (INIS)

    Ogier, Tiphaine

    2012-01-01

    In order to improve the electrochemical performances of cells for high temperature steam electrolysis (HTSE), innovative oxygen electrode materials have been studied. The compounds Ln2NiO4+δ (Ln = La, Pr or Nd), Pr4Ni3O10±δ and La0.6Sr0.4Fe0.8Co0.2O3-δ were selected for their mixed electronic and ionic conductivity. First, their physical and chemical properties were investigated. Then, the electrodes were shaped on symmetrical half cells, adding a thin ceria-based interlayer between the electrode and the yttria-doped zirconia-based electrolyte. These architectured cells lead to low polarization resistances (Rp ≤ 0.1 Ω.cm2 at 800 °C) as well as reduced anodic overpotentials. An electrochemical model has been developed in order to describe and analyze the experimental polarization curves. The electrode with the lowest overpotential, i.e. Pr2NiO4+δ, was selected and characterized in complete cermet-supported cells. Under HTSE operation at 800 °C, a high current density was measured, close to i = -0.9 A.cm-2 at a cell voltage of 1.3 V, the conversion rate being about 60%. (author) [fr

  1. High-throughput volumetric reconstruction for 3D wheat plant architecture studies

    Directory of Open Access Journals (Sweden)

    Wei Fang

    2016-09-01

    Full Text Available For many tiller crops, the plant architecture (PA), including the plant fresh weight, plant height, number of tillers, tiller angle and stem diameter, significantly affects the grain yield. In this study, we propose a method based on volumetric reconstruction for high-throughput three-dimensional (3D) wheat PA studies. The proposed methodology involves plant volumetric reconstruction from multiple images, plant model processing, and phenotypic parameter estimation and analysis. This study was performed on 80 Triticum aestivum plants, and the results were analyzed. Comparing the automated measurements with manual measurements, the mean absolute percentage error (MAPE) in the plant height and the plant fresh weight was 2.71% (1.08 cm, with an average plant height of 40.07 cm) and 10.06% (1.41 g, with an average plant fresh weight of 14.06 g), respectively. The root mean square error (RMSE) was 1.37 cm and 1.79 g for the plant height and plant fresh weight, respectively. The correlation coefficients were 0.95 and 0.96 for the plant height and plant fresh weight, respectively. Additionally, the proposed methodology, including plant reconstruction, model processing and trait extraction, required only approximately 20 s on average per plant using parallel computing on a graphics processing unit (GPU), demonstrating that the methodology would be valuable for a high-throughput phenotyping platform.
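
    The accuracy metrics quoted above are straightforward to reproduce; the sketch below computes MAPE and RMSE for invented automated-versus-manual height pairs.

      # MAPE and RMSE for paired automated vs manual measurements;
      # the measurement values are invented for illustration.
      import math

      manual =    [38.5, 41.2, 40.0, 44.3, 36.1]   # cm, manual
      automated = [37.9, 42.0, 39.2, 45.1, 35.4]   # cm, automated

      n = len(manual)
      mape = 100.0 / n * sum(abs(a - m) / m
                             for a, m in zip(automated, manual))
      rmse = math.sqrt(sum((a - m) ** 2
                           for a, m in zip(automated, manual)) / n)
      print(f"MAPE = {mape:.2f}%, RMSE = {rmse:.2f} cm")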

  2. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yasaman Samei

    2008-08-01

    Full Text Available Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy to implement with present sensor network protocols. In this paper, a thorough architecture for video transmission over WVSN is presented, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and considers the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved at the receiver side. The compression, transport, and routing protocols are proposed in the application, transport, and network layers, respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  3. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    Science.gov (United States)

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy to implement with present sensor network protocols. In this paper, a thorough architecture for video transmission over WVSN is presented, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and considers the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved at the receiver side. The compression, transport, and routing protocols are proposed in the application, transport, and network layers, respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  4. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can...... correlation between the study of existing architectures and the training of competences to design for present-day realities.......This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand...

  5. Proof of Concept Integration of a Single-Level Service-Oriented Architecture into a Multi-Domain Secure Environment

    National Research Council Canada - National Science Library

    Gilkey, Craig M

    2008-01-01

    .... A SOA software platform integrates independent, unrelated applications into a common architecture, thereby introducing data reuse, interoperability, and loose coupling between the services involved. The U.S...

  6. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC [Superconducting Super Collider] detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; VanBerg, R.

    1989-12-01

    A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, detector data rates and online processing power orders of magnitude beyond the capabilities of current high energy physics data acquisition systems are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of GigaBytes per second from the detector into an array of online processors (i.e., a processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber-optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported, high-level-language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder is also given. 3 figs., 1 tab

  7. An overview of an architecture proposal for a high energy physics Grid

    CERN Document Server

    Wäänänen, A; Konstantinov, A S; Kónya, B; Smirnova, O G

    2002-01-01

    The article gives an overview of a Grid testbed architecture proposal for the NorduGrid project. The aim of the project is to establish an inter-Nordic (Denmark, Norway, Sweden and Finland) testbed facility for the implementation of wide-area computing and data handling. The architecture is intended to define a Grid system suitable for solving data-intensive problems at the Large Hadron Collider at CERN. We present the various architecture components needed for such a system and then describe the system dynamics by showing the task flow. (12 refs).

  8. Network-level architecture and the evolutionary potential of underground metabolism.

    Science.gov (United States)

    Notebaart, Richard A; Szappanos, Balázs; Kintses, Bálint; Pál, Ferenc; Györkei, Ádám; Bogos, Balázs; Lázár, Viktória; Spohn, Réka; Csörgő, Bálint; Wagner, Allon; Ruppin, Eytan; Pál, Csaba; Papp, Balázs

    2014-08-12

    A central unresolved issue in evolutionary biology is how metabolic innovations emerge. Low-level enzymatic side activities are frequent and can potentially be recruited for new biochemical functions. However, the role of such underground reactions in adaptation toward novel environments has remained largely unknown and out of reach of computational predictions, not least because these issues demand analyses at the level of the entire metabolic network. Here, we provide a comprehensive computational model of the underground metabolism in Escherichia coli. Most underground reactions are not isolated and 45% of them can be fully wired into the existing network and form novel pathways that produce key precursors for cell growth. This observation allowed us to conduct an integrated genome-wide in silico and experimental survey to characterize the evolutionary potential of E. coli to adapt to hundreds of nutrient conditions. We revealed that underground reactions allow growth in new environments when their activity is increased. We estimate that at least ∼20% of the underground reactions that can be connected to the existing network confer a fitness advantage under specific environments. Moreover, our results demonstrate that the genetic basis of evolutionary adaptations via underground metabolism is computationally predictable. The approach used here has potential for various application areas from bioengineering to medical genetics.
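
    The network-level reasoning above is flux-balance analysis: growth is maximized subject to steady-state stoichiometry and flux bounds, and an underground reaction "counts" once its bound is raised. A three-reaction toy version follows (invented network, not the E. coli model from the study).

      # Toy flux-balance calculation: raising the flux cap on an
      # underground side reaction wires a new nutrient into the
      # network and restores growth.
      from scipy.optimize import linprog

      # metabolites: X (new nutrient), A (precursor); reactions (cols):
      # uptake: -> X ; underground: X -> A ; biomass: A ->
      S = [[1, -1,  0],     # steady-state balance on X
           [0,  1, -1]]     # steady-state balance on A
      c = [0, 0, -1]        # maximize biomass flux (minimize negative)

      for cap in (0.0, 5.0):                 # silent vs amplified activity
          bounds = [(0, 10), (0, cap), (0, None)]
          res = linprog(c, A_eq=S, b_eq=[0, 0], bounds=bounds)
          growth = -res.fun if res.fun else 0.0
          print(f"underground flux cap {cap}: growth = {growth:.1f}")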

  9. A TWO LEVEL ARCHITECTURE USING CONSENSUS METHOD FOR GLOBAL DECISION MAKING AGAINST DDoS ATTACKS

    Directory of Open Access Journals (Sweden)

    S.Seetha

    2010-06-01

    Full Text Available Distributed Denial of Service (DDoS) attacks are a major threat to the availability of Internet services. The distributed, large-scale nature of the Internet makes DDoS attacks stealthy and difficult to counter, and defense against them is one of the hardest security problems on the Internet. Such attacks have been increasing recently, so more effective countermeasures are required. This requirement has motivated us to propose a novel mechanism against DDoS attacks. This paper presents the design details of a distributed defense mechanism in which the egress routers of the intermediate network coordinate with each other to provide the information necessary to detect and respond to an attack. A detection system based on a single site will have either a high false-positive or a high false-negative rate; unlike traditional IDSs (Intrusion Detection Systems), this method has the potential to achieve a high true-positive ratio. The detection systems exchange information using consensus algorithms, reducing the overall detection time for global decision making.
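
    The consensus step can be pictured with a minimal sketch (a hypothetical illustration, not the authors' implementation): each egress-router detector holds a local suspicion score and repeatedly averages it with its neighbours' scores until the network converges on a global value that can be compared against an alarm threshold.

```python
import numpy as np

# Hypothetical illustration of average consensus among egress-router detectors.
# 'scores' are local attack-suspicion scores; 'adj' is the symmetric adjacency
# matrix of the detector communication graph; 'eps' is the consensus step size.
def average_consensus(scores, adj, eps=0.1, iterations=50):
    x = np.asarray(scores, dtype=float)
    for _ in range(iterations):
        # each node moves toward the average of its neighbours' values
        x = x + eps * (adj @ x - adj.sum(axis=1) * x)
    return x

# Four detectors on a ring; most of them locally suspect an attack.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
global_score = average_consensus([0.9, 0.6, 0.8, 0.7], adj)
print("attack" if global_score.mean() > 0.5 else "normal")
```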

  10. Three Tier-Level Architecture Data Warehouse Design of Civil Servant Data in Minahasa Regency

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Runtuwene, J. P. A.; Sangkop, F. I.; Ngantung, L. V. F.

    2018-02-01

    Minahasa Regency is one of the regencies in North Sulawesi Province. In running the government of Minahasa Regency, the Regent is assisted by more than 6000 civil servants (PNS) distributed across 60 work units (SKPD). The Badan Kepegawaian Diklat Daerah (BKDD) of Minahasa Regency is the SKPD that processes the data of all civil servants and is responsible for arranging and determining their formation. In this process, BKDD faces many obstacles. One of them is the unavailability of accurate data on the educational backgrounds of civil servants broken down by rank/class, age, length of service, department, and so forth. Making such data available quickly and accurately calls for business analytics, which in turn requires designing a data warehouse first. The data warehouse is designed as a three-tier architecture.

  11. High-performance Sonitopia (Sonic Utopia): Hyper intelligent Material-based Architectural Systems for Acoustic Energy Harvesting

    Science.gov (United States)

    Heidari, F.; Mahdavinejad, M.

    2017-08-01

    The rate of energy consumption all over the world, based on reliable statistics from international institutions such as the International Energy Agency (IEA), shows a significant increase in energy demand in recent years. Periodically recorded data show a continuously increasing trend in energy consumption, especially in developed countries as well as recently emerged developing economies such as China and India. Air pollution and water contamination, as results of the high consumption of fossil energy resources, might be considered a menace to civic ideals such as livability, conviviality and people-oriented cities. On the other hand, automobile dependency, car-oriented design and other noisy activities in urban spaces are considered threats to urban life. Thus contemporary urban design and planning concentrate on rethinking the ecology of sound, reorganizing the soundscape of neighborhoods, and redesigning the sonic order of urban space. It seems that contemporary architecture and planning trends, through soundscape mapping, look for sonitopia (Sonic + Utopia). This paper proposes interactive hyper intelligent material-based architectural systems for acoustic energy harvesting. The proposed architectural design system may result in high-performance architecture and planning strategies for future cities. The ultimate aim of the research is to develop a comprehensive system for acoustic energy harvesting which covers the aim of noise reduction as well as being in harmony with architectural design. The research methodology is based on a literature review as well as experimental and quasi-experimental strategies according to the paradigm of designerly ways of doing and knowing. While architectural design has a solution-focused essence in the problem-solving process, the proposed systems had better be hyper intelligent rather than predefined procedures. Therefore, the steps of the inference mechanism of the research include: 1- understanding sonic energy and noise potentials as energy

  12. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Directory of Open Access Journals (Sweden)

    Guanyi Sun

    2011-01-01

    Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, the System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims at delivering a high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator attains a simulation speed no more than a factor of 35 slower than hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between the simulated and measured timing performance is within 10%, which previously could only be attained by cycle-accurate models.

  13. High-level radioactive waste management

    International Nuclear Information System (INIS)

    Schneider, K.J.; Liikala, R.C.

    1974-01-01

    High-level radioactive waste in the U.S. will be converted to an encapsulated solid and shipped to a Federal repository for retrievable storage for extended periods. Meanwhile, the development of concepts for ultimate disposal of the waste, which the Federal Government would manage, is being actively pursued. A number of promising concepts have been proposed, and there is high confidence that one or more will be suitable for long-term, ultimate disposal. Initial evaluations of technical (or theoretical) feasibility for the various waste disposal concepts show that all of the broad categories (i.e., geologic, seabed, ice sheet, extraterrestrial, and transmutation) meet the criteria for judging feasibility, though a few alternatives within these categories do not. Preliminary cost estimates show that, although many millions of dollars may be required, the cost for even the most exotic concepts is small relative to the total cost of electric power generation. For example, the cost estimates for terrestrial disposal concepts are less than 1 percent of total generating costs. The cost for actinide transmutation is estimated at around 1 percent of generation costs, while actinide element disposal in space is less than 5 percent of generating costs. Thus neither technical feasibility nor cost seems to be a no-go factor in selecting a waste management system. The seabed, ice sheet, and space disposal concepts face international policy constraints. The information currently being developed on safety, environmental concern, and public response will be important factors in determining which concepts appear most promising for further development

  14. Optimized Architectural Approaches in Hardware and Software Enabling Very High Performance Shared Storage Systems

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    There are issues encountered in high performance storage systems that normally lead to compromises in architecture. Compute clusters tend to have compute phases followed by an I/O phase that must move data from the entire cluster in one operation. That data may then be shared by a large number of clients creating unpredictable read and write patterns. In some cases the aggregate performance of a server cluster must exceed 100 GB/s to minimize the time required for the I/O cycle thus maximizing compute availability. Accessing the same content from multiple points in a shared file system leads to the classical problems of data "hot spots" on the disk drive side and access collisions on the data connectivity side. The traditional method for increasing apparent bandwidth usually includes data replication which is costly in both storage and management. Scaling a model that includes replicated data presents additional management challenges as capacity and bandwidth expand asymmetrically while the system is scaled. ...

  15. Chitin/clay microspheres with hierarchical architecture for highly efficient removal of organic dyes.

    Science.gov (United States)

    Xu, Rui; Mao, Jie; Peng, Na; Luo, Xiaogang; Chang, Chunyu

    2018-05-15

    Numerous adsorbents have been reported for the efficient removal of dyes from water, but high-cost raw materials and complicated fabrication processes limit their practical applications. Herein, novel nanocomposite microspheres were fabricated from chitin and clay by a simple thermally induced sol-gel transition. Clay nanosheets were uniformly embedded in a nanofiber-weaved chitin microsphere matrix, leading to their hierarchical architecture. Benefiting from this unique structure, the microspheres could efficiently remove methylene blue (MB) through a spontaneous physisorption process which fits well with pseudo-second-order kinetic and Langmuir isotherm models. The maximal adsorption capacities obtained by calculation and experiment were 152.2 and 156.7 mg g-1, respectively. The chitin/clay microspheres (CCM2) could remove 99.99% of MB from its aqueous solution (10 mg L-1) within 20 min. These findings provide insight into a new strategy for the fabrication of dye adsorbents with hierarchical structure from low-cost raw materials.
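
    For readers unfamiliar with these models, the sketch below shows how such equilibrium data are typically fit to the Langmuir isotherm, q_e = q_max·K·C_e/(1 + K·C_e); the data points here are placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: q_e = q_max * K * C_e / (1 + K * C_e)
def langmuir(Ce, q_max, K):
    return q_max * K * Ce / (1 + K * Ce)

# Placeholder equilibrium data (C_e in mg/L, q_e in mg/g), not the paper's data.
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = np.array([55.0, 95.0, 120.0, 140.0, 150.0])

(q_max, K), _ = curve_fit(langmuir, Ce, qe, p0=(150.0, 0.1))
print(f"q_max = {q_max:.1f} mg/g, K = {K:.3f} L/mg")
```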

  16. Vitrification of high-level liquid wastes

    International Nuclear Information System (INIS)

    Varani, J.L.; Petraitis, E.J.; Vazquez, Antonio.

    1987-01-01

    High-level radioactive liquid wastes produced by fuel element reprocessing require, for their disposal, a preliminary treatment by which, through a series of engineered barriers, dispersion into the biosphere is delayed by 10 000 years. Four groups of compounds are distinguished among a great variety of final products and methods of elaboration; from these, the borosilicate glasses were chosen. Vitrification experiments were carried out at laboratory scale with simulated radioactive wastes, employing different compositions of borosilicate glass. The installations are described. A series of tests was carried out on four basic formulae, always using the same methodology, consisting of a dry mixture of the vitreous matrix products and a dry simulated waste mixture. Several quality tests of the glasses were made: (1) leaching behaviour, following the DIN 12 111 standard; (2) mechanical resistance, studying parameters related to the ease with which the different glasses increase their surface area; (3) degree of devitrification, showing that devitrification renders glasses containing radioactive wastes easily leachable. Of all the glasses tested, the composition SiO2, Al2O3, B2O3, Na2O, CaO shows the best retention characteristics. (M.E.L.) [es

  17. Ocean disposal of high level radioactive waste

    International Nuclear Information System (INIS)

    1983-01-01

    This study confirms, subject to limitations of current knowledge, the engineering feasibility of free fall penetrators for High Level Radioactive Waste disposal in deep ocean seabed sediments. Restricted sediment property information is presently the principal bar to an unqualified statement of feasibility. A 10m minimum embedment and a 500 year engineered barrier waste containment life are identified as appropriate basic penetrator design criteria at this stage. A range of designs are considered in which the length, weight and cross section of the penetrator are varied. Penetrators from 3m to 20m long and 2t to 100t in weight constructed of material types and thicknesses to give a 500 year containment life are evaluated. The report concludes that the greatest degree of confidence is associated with performance predictions for 75 to 200 mm thick soft iron and welded joints. A range of lengths and capacities from a 3m long single waste canister penetrator to a 20m long 12 canister design are identified as meriting further study. Estimated embedment depths for this range of penetrator designs lie between 12m and 90m. Alternative manufacture, transport and launch operations are assessed and recommendations are made. (author)

  18. Vitrification of high level wastes in France

    International Nuclear Information System (INIS)

    Sombret, C.

    1984-02-01

    A brief historical background of the research and development work conducted in France over 25 years is first presented. The paper then deals with vitrification at (1) the UP1 reprocessing plant (Marcoule) and (2) the UP2 and UP3 reprocessing plants (La Hague). (1) The glass properties required for high-level radioactive waste vitrification are recalled, and the vitrification process and facility at Marcoule are presented. (2) The average characteristics (chemical composition, activity) of LWR fission product solutions are given. The glass formulations developed to solidify LWR waste solutions must meet the same requirements as those used in the UP1 facility at Marcoule. Three important aspects must be considered with respect to the glass fabrication process: corrosiveness of the molten glass with regard to metals, viscosity of the molten glass, and volatilization during glass fabrication. The glass properties required in view of interim storage and long-term disposal are then developed at length. Two identical vitrification facilities are planned for the site: R7, to process the UP2 throughput, and T7 for the UP3 plant. A prototype unit was built and operated at Marcoule

  19. High-level nuclear waste disposal

    International Nuclear Information System (INIS)

    Burkholder, H.C.

    1985-01-01

    The meeting was timely because many countries had begun their site selection processes and their engineering designs were becoming well-defined. The technology of nuclear waste disposal was maturing, and the institutional issues arising from the implementation of that technology were being confronted. Accordingly, the program was structured to consider both the technical and institutional aspects of the subject. The meeting started with a review of the status of the disposal programs in eight countries and three international nuclear waste management organizations. These invited presentations allowed listeners to understand the similarities and differences among the various national approaches to solving this very international problem. Then seven invited presentations describing nuclear waste disposal from different perspectives were made, including legal and judicial, electric utility, state governor, ethical, and technical perspectives. These invited presentations uncovered several issues that may need to be resolved before high-level nuclear wastes can be emplaced in a geologic repository in the United States. Finally, there were sixty-six contributed technical presentations organized in ten sessions around six general topics: site characterization and selection, repository design and in-situ testing, package design and testing, disposal system performance, disposal and storage system cost, and disposal in the overall waste management system context. These contributed presentations provided listeners with the results of recent applied R&D in each of the subject areas

  20. Decontamination of high-level waste canisters

    International Nuclear Information System (INIS)

    Nesbitt, J.F.; Slate, S.C.; Fetrow, L.K.

    1980-12-01

    This report presents evaluations of several methods for the in-process decontamination of metallic canisters containing any one of a number of solidified high-level waste (HLW) forms. The use of steam-water, steam, abrasive blasting, electropolishing, liquid honing, vibratory finishing and soaking have been tested or evaluated as potential techniques to decontaminate the outer surfaces of HLW canisters. Either these techniques have been tested or available literature has been examined to assess their applicability to the decontamination of HLW canisters. Electropolishing has been found to be the most thorough method to remove radionuclides and other foreign material that may be deposited on or in the outer surface of a canister during any of the HLW processes. Steam or steam-water spraying techniques may be adequate for some applications but fail to remove all contaminated forms that could be present in some of the HLW processes. Liquid honing and abrasive blasting remove contamination and foreign material very quickly and effectively from small areas and components although these blasting techniques tend to disperse the material removed from the cleaned surfaces. Vibratory finishing is very capable of removing the bulk of contamination and foreign matter from a variety of materials. However, special vibratory finishing equipment would have to be designed and adapted for a remote process. Soaking techniques take long periods of time and may not remove all of the smearable contamination. If soaking involves pickling baths that use corrosive agents, these agents may cause erosion of grain boundaries that results in rough surfaces

  1. High-Resolution X-Ray Tomography: A 3D Exploration Into the Skeletal Architecture in Mouse Models Submitted to Microgravity Constraints

    Directory of Open Access Journals (Sweden)

    Alessandra Giuliani

    2018-03-01

    Full Text Available The bone remodeling process consists of a slow building phase and a faster resorption phase, with the objective of maintaining a functional skeleton for locomotion against Earth's gravity. During spaceflight the skeleton does not act against gravity, and bone mass and density decrease rapidly, favoring bone fracture. Several studies have approached the problem by imaging the bone architecture and density of cosmonauts returning from spaceflights. However, the weaknesses of the previously reported studies were two-fold: on the one hand, the research suffered from the small statistical sample size of almost all human spaceflight studies; on the other, the results were not fully reliable, mainly because the observed bone structures were small compared with the spatial resolution of the available imaging devices. Recent advances in high-resolution X-ray tomography have stimulated the study of weight-bearing skeletal sites by novel approaches, mainly based on the use of the mouse and its various strains as an animal model, sometimes taking advantage of synchrotron radiation to study 3D bone architecture and mineralization degree mapping at different hierarchical levels. Here we report the first, to our knowledge, systematic review of recent advances in studying skeletal bone architecture by high-resolution X-ray tomography after subjecting mouse models to microgravity constraints.

  2. MT-ADRES: Multithreading on Coarse-Grained Reconfigurable Architecture

    DEFF Research Database (Denmark)

    Wu, Kehuai; Kanstein, Andreas; Madsen, Jan

    2007-01-01

    The coarse-grained reconfigurable architecture ADRES (Architecture for Dynamically Reconfigurable Embedded Systems) and its compiler offer high instruction-level parallelism (ILP) to applications by means of a sparsely interconnected array of functional units and register files. As high-ILP architectures achieve only low parallelism when executing partially sequential code segments, which is also known as Amdahl's law, this paper proposes to extend ADRES to MT-ADRES (Multi-Threaded ADRES) to also exploit thread-level parallelism. On MT-ADRES architectures, the array can be partitioned into multiple...

  3. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS), due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis and treatment, and at understanding disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability to compare proteins. However, PeakLink cannot be implemented practically for large numbers of samples based on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features and saves them in database files, enabling the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, the MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab and compiled with the Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. MZDASoft
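
    The parallel-extraction idea can be sketched as follows (a hypothetical outline of the approach, not MZDASoft's actual API): each LC/MS sample is processed independently, its peak features are persisted to disk, and peak linking then works over the saved feature files instead of holding every sample's elution profiles in memory at once.

```python
import multiprocessing as mp
import pickle
from pathlib import Path

# Hypothetical sketch: process each sample independently and persist its peak
# features, so the later linking step never needs all raw samples in memory.
def extract_peak_features(sample_path):
    # stand-in for real peak detection over m/z vs. retention-time data
    features = {"sample": sample_path.name, "peaks": [(402.7, 35.2, 1.8e6)]}
    out = sample_path.with_suffix(".features.pkl")
    with open(out, "wb") as fh:
        pickle.dump(features, fh)
    return out

if __name__ == "__main__":
    samples = sorted(Path("data").glob("*.mzML"))   # hypothetical input layout
    with mp.Pool() as pool:
        feature_files = pool.map(extract_peak_features, samples)
    print(f"extracted features for {len(feature_files)} samples")
```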

  4. Multiprocessor architecture: Synthesis and evaluation

    Science.gov (United States)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application-specific architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization, covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as software- or hardware-related, though the distinction is not clear or even appropriate in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward removing the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  5. Exploratory study of a novel low occupancy vertex detector architecture based on high precision timing for high luminosity particle colliders

    Energy Technology Data Exchange (ETDEWEB)

    Orel, Peter, E-mail: porel@hawaii.edu; Varner, Gary S.; Niknejadi, Pardis

    2017-06-11

    Vertex detectors provide space–time coordinates for the traversing charged particle decay products closest to the interaction point. Resolving these increasingly intense particle fluences at high luminosity particle colliders, such as SuperKEKB, is an ever growing challenge. This results in a non-negligible occupancy of the vertex detector using existing low material budget techniques. Consequently, new approaches are being studied that meet the vertexing requirements while lowering the occupancy. In this paper, we introduce a novel vertex detector architecture. Its design relies on an asynchronous digital pixel matrix in combination with a readout based on high precision time-of-flight measurement. Denoted the Timing Vertex Detector (TVD), it consists of a binary pixel array, a transmission line for signal collection, and a readout ASIC. The TVD aims to have a spatial resolution comparable to the existing Belle2 vertex detector. At the same time it offers a reduced occupancy by a factor of ten while decreasing the channel count by almost three orders of magnitude. Consequently, reducing the event size from about 1 MB/event to about 5.9 kB/event.

  6. DEFENSE HIGH LEVEL WASTE GLASS DEGRADATION

    International Nuclear Information System (INIS)

    Ebert, W.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the analyses that were done to develop models for radionuclide release from high-level waste (HLW) glass dissolution that can be integrated into performance assessment (PA) calculations conducted to support site recommendation and license application for the Yucca Mountain site. This report was developed in accordance with the "Technical Work Plan for Waste Form Degradation Process Model Report for SR" (CRWMS M&O 2000a). It specifically addresses the item "Defense High Level Waste Glass Degradation" of the product technical work plan. The AP-3.15Q Attachment 1 screening criteria determine the importance for its intended use of the HLW glass model derived herein to be in the category "Other Factors for the Postclosure Safety Case-Waste Form Performance", and thus indicate that this factor does not contribute significantly to the postclosure safety strategy. Because the release of radionuclides from the glass will depend on the prior dissolution of the glass, the dissolution rate of the glass imposes an upper bound on the radionuclide release rate. The approach taken to provide a bound for the radionuclide release is to develop models that can be used to calculate the dissolution rate of waste glass when contacted by water in the disposal site. The release rate of a particular radionuclide can then be calculated by multiplying the glass dissolution rate by the mass fraction of that radionuclide in the glass and by the surface area of glass contacted by water. The scope includes consideration of the three modes by which water may contact waste glass in the disposal system: contact by humid air, dripping water, and immersion. The models for glass dissolution under these contact modes are all based on the rate expression for aqueous dissolution of borosilicate glasses. The mechanism and rate expression for aqueous dissolution are adequately understood; the analyses in this AMR were conducted to

  7. Electrospun fibers for high performance anodes in microbial fuel cells. Optimizing materials and architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Shuiliang

    2010-04-15

    From the results above, the porosity and the pore size of the fiber mat are of utmost importance for the performance of the anode in MFCs. Since curved or helical fibers lead to higher porosity in the fiber mat, a novel 3D porous architecture, the nanospring, was designed as a high performance anode structure for future MFCs. Polymeric nanosprings were prepared by bicomponent electrospinning. The reasons for the formation of polymeric nanosprings were investigated by coaxial electrospinning of a rigid polymer, i.e. Nomex® or polysulfonamide (PSA), with a flexible one, i.e. thermoplastic polyurethane (TPU). The results indicated that nanospring formation is attributed to longitudinal compressive forces, which result from the different shrinkages of the rigid and flexible polymer components, together with a good electrical conductivity of one of the polymer solutions in the coaxial electrospinning system. Modified electrospinning, i.e. off-centered and side-by-side electrospinning, is much more effective than coaxial electrospinning for generating polymer springs or helical structures, because of the higher longitudinal compressive forces derived from the lopsided elastic forces. An aligned nanofiber mat with a high percentage of nanosprings shows higher elongation and a higher storage modulus below the glass transition temperature (Tg) compared to one with straight fibers. The nanospring or helical shape preserves much void space in the mat, making it a potential architecture for highly efficient anodes in future MFCs. (orig.)

  8. Arrayed architectures for multi-stage Si-micromachined high-flow Knudsen pumps

    International Nuclear Information System (INIS)

    Qin, Yutao; An, Seungdo; Gianchandani, Yogesh B

    2015-01-01

    This paper reports an evaluation and comparison of two architectures for implementing Si-micromachined high-flow Knudsen pumps. Knudsen pumps, which operate on the principle of thermal transpiration, have been shown to have great promise for micro-scale gas phase fluidic systems such as micro gas chromatographs. Simultaneously achieving both a high flow rate and an adequate blocking pressure has been a persistent challenge, which is addressed in this work by combining multiple pumps in series and addressing the resulting challenges in thermal management. The basic building block is a Si-micromachined pump with ≈100 000 parallel channels in a 4 mm × 6 mm footprint. In the primary approach, multiple pump stages are stacked vertically with interleaved Si-micromachined spacers; a stacked 4-stage Knudsen pump has a form factor of 10 mm × 8 mm × 6 mm. In an alternate approach, multiple stages are arranged in a planar array. The experimental results demonstrate multiplication of the output pressure head with the number of stages, while the flow rate is maintained. For example, a stacked 4-stage Knudsen pump with 8 W power operated at atmospheric pressure provided a blocking pressure of 0.255 kPa, 3.6× that provided by a single-stage pump with 2 W power, while both provided a ≈30 sccm maximum flow rate. The performance can be customized for practical applications such as micro gas chromatography. (paper)
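
    The stage-stacking arithmetic can be checked directly from the reported figures: in an ideal series stack the blocking pressure scales with the stage count at an unchanged flow rate. A quick back-of-the-envelope calculation, with the single-stage value derived from the abstract:

```python
# Back-of-the-envelope check of the series-stacking model: in an ideal stack,
# blocking pressure scales with stage count while the flow rate is unchanged.
four_stage_dp_kpa = 0.255        # reported 4-stage blocking pressure
gain_over_single = 3.6           # reported multiplication factor
single_stage_dp = four_stage_dp_kpa / gain_over_single
ideal_four_stage = 4 * single_stage_dp
print(f"single stage: {single_stage_dp:.3f} kPa, ideal 4-stage: {ideal_four_stage:.3f} kPa")
# The measured 0.255 kPa falls slightly short of the ideal ~0.283 kPa, consistent
# with the thermal-management losses the paper sets out to manage.
```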

  9. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    Science.gov (United States)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  10. Readout Architecture for Hybrid Pixel Readout Chips

    CERN Document Server

    AUTHOR|(SzGeCERN)694170; Westerlund, Tomi; Wyllie, Ken

    The original contribution of this thesis to knowledge is a set of novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node based column architecture and a network-based pixel matrix architecture for data transportation. It is shown that the data-node architecture achieves a readout efficiency of 99% with half the output rate of a bus-based system. The network-based solution avoids "broken" columns due to manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures; an improvement of >10% in efficiency is achieved with uniform and non-uniform hit occupancies. Architectural design has been done using transaction level modeling (TLM) and sequential high-level design techniques to reduce design and simulation time, making it possible to simulate tens of column and full-chip architectures using the high-level techniques. A decrease of >10 in run-time...

  11. Nova performance at ultra high fluence levels

    International Nuclear Information System (INIS)

    Hunt, J.T.

    1986-01-01

    Nova is a ten-beam high-power Nd:glass laser used for inertial confinement fusion research. It was operated in the high-power, high-energy regime following the completion of construction in December 1984. During this period several interesting nonlinear optical phenomena were observed; these phenomena are discussed in the text. 11 refs., 5 figs.

  12. Thermo-aeraulics of high level waste storage facilities

    International Nuclear Information System (INIS)

    Lagrave, Herve; Gaillard, Jean-Philippe; Laurent, Franck; Ranc, Guillaume; Duret, Bernard

    2006-01-01

    This paper discusses the research undertaken in response to axis 3 of the 1991 radioactive waste management act, and possible solutions concerning the processes under consideration for conditioning and long-term interim storage of long-lived radioactive waste. The notion of 'long-term' is evaluated with respect to the usual operating lifetime of a basic nuclear installation, about 50 years. In this context, 'long-term' is defined on a secular time scale: the lifetime of the facility could be as long as 300 years. The waste package taken into account is characterized notably by its high thermal power release. Studies were carried out in dedicated facilities for vitrified waste and for spent UOX and MOX fuel. The latter are not considered as wastes, owing to the value of the reusable material they contain. Three primary objectives have guided the design of these long-term interim storage facilities: - ensure radionuclide containment at all times; - permit retrieval of the containers at any time; - minimize surveillance and maintenance costs. The CEA has also investigated surface and subsurface facilities. It was decided to work on generic sites with a reasonable set of parameter values that should be applicable at most sites in France. All the studies and demonstrations to date lead to the conclusion that long-term interim storage is technically feasible. The paper addresses the following items: - Long-term interim storage concepts for high-level waste; - Design principles and options for the interim storage facilities; - General architecture; - Research topics, Storage facility ventilation, Dimensioning of the facility; - Thermo-aeraulics of a surface interim storage facility; - VALIDA surface loop, VALIDA single container test campaign, Continuation of the VALIDA program; - Thermo-aeraulics of a network of subsurface interim storage galleries; - SIGAL subsurface loop; - PROMETHEE subsurface loop; - Temperature behaviour of the concrete structures; - GALATEE

  13. Thermo-aeraulics of high level waste storage facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lagrave, Herve; Gaillard, Jean-Philippe; Laurent, Franck; Ranc, Guillaume [CEA/Valrho, B.P. 17171, F-30207 Bagnols-sur-Ceze (France); Duret, Bernard [CEA Grenoble, 17 rue des Martyrs, 38054 Grenoble cedex 9 (France)

    2006-07-01

    This paper discusses the research undertaken in response to axis 3 of the 1991 radioactive waste management act, and possible solutions concerning the processes under consideration for conditioning and long-term interim storage of long-lived radioactive waste. The notion of 'long-term' is evaluated with respect to the usual operating lifetime of a basic nuclear installation, about 50 years. In this context, 'long-term' is defined on a secular time scale: the lifetime of the facility could be as long as 300 years. The waste package taken into account is characterized notably by its high thermal power release. Studies were carried out in dedicated facilities for vitrified waste and for spent UOX and MOX fuel. The latter are not considered as wastes, owing to the value of the reusable material they contain. Three primary objectives have guided the design of these long-term interim storage facilities: - ensure radionuclide containment at all times; - permit retrieval of the containers at any time; - minimize surveillance and maintenance costs. The CEA has also investigated surface and subsurface facilities. It was decided to work on generic sites with a reasonable set of parameter values that should be applicable at most sites in France. All the studies and demonstrations to date lead to the conclusion that long-term interim storage is technically feasible. The paper addresses the following items: - Long-term interim storage concepts for high-level waste; - Design principles and options for the interim storage facilities; - General architecture; - Research topics, Storage facility ventilation, Dimensioning of the facility; - Thermo-aeraulics of a surface interim storage facility; - VALIDA surface loop, VALIDA single container test campaign, Continuation of the VALIDA program; - Thermo-aeraulics of a network of subsurface interim storage galleries; - SIGAL subsurface loop; - PROMETHEE subsurface loop; - Temperature behaviour of the concrete

  14. High performance matrix inversion based on LU factorization for multicore architectures

    KAUST Repository

    Dongarra, Jack

    2011-01-01

    The goal of this paper is to present an efficient implementation of explicit matrix inversion of general square matrices on multicore computer architectures. The inversion procedure is split into four steps: 1) computing the LU factorization, 2) inverting the upper triangular U factor, 3) solving a linear system whose solution yields the inverse of the original matrix, and 4) applying backward column pivoting to the inverted matrix. Using a tile data layout, which represents the matrix in system memory in an optimized cache-aware format, the computation of the four steps is decomposed into computational tasks. A directed acyclic graph is generated on the fly to represent the program data flow; its nodes represent tasks and its edges the data dependencies between them. Previous implementations of matrix inversion, available in the state-of-the-art numerical libraries, suffer from unnecessary synchronization points; our algorithmic approach removes these bottlenecks and executes the tasks with loose synchronization, fully exploiting the parallelism of the underlying hardware. A runtime environment system called QUARK is used to dynamically schedule our numerical kernels on the available processing units. The reported results from our LU-based matrix inversion implementation significantly outperform the state-of-the-art numerical libraries such as LAPACK (5x), MKL (5x) and ScaLAPACK (2.5x) on a contemporary AMD platform with four sockets and a total of 48 cores for a matrix of size 24000. A power consumption analysis shows that our high performance implementation is also energy efficient, consuming substantially less power than its competitors.
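
    The four steps map directly onto standard dense linear-algebra kernels. Below is a minimal NumPy/SciPy sketch of the same decomposition (a plain dense illustration, not the paper's tiled, QUARK-scheduled implementation):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

def lu_inverse(A):
    """Dense illustration of inversion via LU, following the four steps."""
    n = A.shape[0]
    P, L, U = lu(A)                                  # step 1: A = P @ L @ U
    U_inv = solve_triangular(U, np.eye(n))           # step 2: invert the U factor
    # step 3: solve X @ L = U_inv for X, i.e. L.T @ X.T = U_inv.T
    X = solve_triangular(L, U_inv.T, lower=True, trans='T').T
    return X @ P.T                                   # step 4: undo the pivoting

A = np.random.rand(5, 5) + 5 * np.eye(5)             # well-conditioned test matrix
assert np.allclose(lu_inverse(A) @ A, np.eye(5))
```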

  15. GiA Roots: software for the high throughput analysis of plant root system architecture

    Science.gov (United States)

    2012-01-01

    Background Characterizing root system architecture (RSA) is essential to understanding the development and function of vascular plants. Identifying RSA-associated genes also represents an underexplored opportunity for crop improvement. Software tools are needed to accelerate the pace at which quantitative traits of RSA are estimated from images of root networks. Results We have developed GiA Roots (General Image Analysis of Roots), a semi-automated software tool designed specifically for the high-throughput analysis of root system images. GiA Roots includes user-assisted algorithms to distinguish root from background and a fully automated pipeline that extracts dozens of root system phenotypes. Quantitative information on each phenotype, along with intermediate steps for full reproducibility, is returned to the end-user for downstream analysis. GiA Roots has a GUI front end and a command-line interface for interweaving the software into large-scale workflows. GiA Roots can also be extended to estimate novel phenotypes specified by the end-user. Conclusions We demonstrate the use of GiA Roots on a set of 2393 images of rice roots representing 12 genotypes from the species Oryza sativa. We validate trait measurements against prior analyses of this image set that demonstrated that RSA traits are likely heritable and associated with genotypic differences. Moreover, we demonstrate that GiA Roots is extensible and an end-user can add functionality so that GiA Roots can estimate novel RSA traits. In summary, we show that the software can function as an efficient tool as part of a workflow to move from large numbers of root images to downstream analysis. PMID:22834569
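
    A minimal sketch of the kind of two-stage pipeline such tools automate (separating root from background, then extracting traits) is shown below; this is an illustration using scikit-image, not GiA Roots itself, and the threshold choice and traits are simplified assumptions.

```python
import numpy as np
from skimage import filters, morphology

# Hypothetical sketch: separate the root network from the background, then
# skeletonize it and derive simple pixel-based traits.
def simple_root_traits(gray_image):
    threshold = filters.threshold_otsu(gray_image)   # user-assisted in GiA Roots
    mask = gray_image > threshold
    skeleton = morphology.skeletonize(mask)
    return {
        "network_area_px": int(mask.sum()),          # projected root area
        "total_root_length_px": int(skeleton.sum()), # skeleton pixel count
    }

image = np.random.rand(256, 256)                     # placeholder for a root scan
print(simple_root_traits(image))
```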

  16. High bicarbonate levels in narcoleptic children.

    Science.gov (United States)

    Franco, Patricia; Junqua, Aurelie; Guignard-Perret, Anne; Raoux, Aude; Perier, Magali; Raverot, Veronique; Claustrat, Bruno; Gustin, Marie-Paule; Inocente, Clara Odilia; Lin, Jian-Sheng

    2016-04-01

    The objective of this study was to evaluate plasma bicarbonate levels in narcoleptic children. Clinical and electrophysiological data and bicarbonate levels were evaluated retrospectively in children seen in our paediatric national reference centre for hypersomnia. The cohort included 23 control subjects (11.5 ± 4 years, 43% boys) and 51 patients presenting with de-novo narcolepsy (N) (12.7 ± 3.7 years, 47% boys). In the narcoleptic children, cataplexy was present in 78% and DQB1*0602 was positive in 96%. The control children were less often obese (2 versus 47%, P = 0.001). Compared with control subjects, narcoleptic children had higher bicarbonate levels (P = 0.02) as well as higher PCO2 (P < 0.01) and lower venous pH (P < 0.01). Bicarbonate levels higher than 27 mmol L-1 were found in 41.2% of the narcoleptic children and 4.2% of the controls (P = 0.001). Bicarbonate levels were correlated with the Adapted Epworth Sleepiness Scale (P = 0.01). Narcoleptic patients without obesity often had bicarbonate levels higher than 27 mmol L-1 (55 versus 25%, P = 0.025). No differences were found between children with and without cataplexy. In conclusion, narcoleptic patients had higher plasma bicarbonate levels than control children. This could be a marker of hypoventilation in this pathology, provoking an increase in PCO2 and therefore a respiratory acidosis, compensated by an increase in plasma bicarbonate. This simple screening tool could be useful for prioritizing children for sleep laboratory evaluation in practice.

  17. Vision in high-level football officials.

    Science.gov (United States)

    Baptista, António Manuel Gonçalves; Serra, Pedro M; McAlinden, Colm; Barrett, Brendan T

    2017-01-01

    Officiating in football depends, at least to some extent, upon adequate visual function. However, there is no vision standard for football officiating and the nature of the relationship between officiating performance and level of vision is unknown. As a first step in characterising this relationship, we report on the clinically-measured vision and on the perceived level of vision in elite-level, Portuguese football officials. Seventy-one referees (R) and assistant referees (AR) participated in the study, representing 92% of the total population of elite-level football officials in Portugal in the 2013/2014 season. Nine of the 22 Rs (40.9%) and ten of the 49 ARs (20.4%) were international-level. Information about visual history was also gathered. Perceived vision was assessed using the preference-values-assigned-to-global-visual-status (PVVS) and the Quality-of-Vision (QoV) questionnaire. Standard clinical vision measures (including visual acuity, contrast sensitivity and stereopsis) were gathered in a subset (n = 44, 62%) of the participants. Data were analysed according to the type (R/AR) and level (international/national) of official, and Bonferroni corrections were applied to reduce the risk of type I errors. Adopting a criterion for statistical significance of p<0.01, clinically-measured vision in the football officials was similar to published normative values for young, adult populations and similar between R and AR. Clinically-measured vision did not differ according to officiating level. Visual acuity measured with and without a pinhole disc indicated that around one quarter of participants may be capable of better vision when officiating, as evidenced by better acuity (≥1 line of letters) using the pinhole. Amongst the clinical visual tests we used, we did not find evidence for above-average performance in elite-level football officials. Although the impact of uncorrected mild to moderate refractive error upon officiating performance is unknown, with a greater uptake of eye examinations, visual

  18. Modeling Architectural Patterns’ Behavior Using Architectural Primitives

    NARCIS (Netherlands)

    Waqas Kamal, Ahmad; Avgeriou, Paris

    2008-01-01

    Architectural patterns have an impact on both the structure and the behavior of a system at the architecture design level. However, it is challenging to model patterns’ behavior in a systematic way because modeling languages do not provide the appropriate abstractions and because each pattern

  19. Vision in high-level football officials.

    Directory of Open Access Journals (Sweden)

    António Manuel Gonçalves Baptista

    Full Text Available Officiating in football depends, at least to some extent, upon adequate visual function. However, there is no vision standard for football officiating and the nature of the relationship between officiating performance and level of vision is unknown. As a first step in characterising this relationship, we report on the clinically-measured vision and on the perceived level of vision in elite-level, Portuguese football officials. Seventy-one referees (R) and assistant referees (AR) participated in the study, representing 92% of the total population of elite level football officials in Portugal in the 2013/2014 season. Nine of the 22 Rs (40.9%) and ten of the 49 ARs (20.4%) were international-level. Information about visual history was also gathered. Perceived vision was assessed using the preference-values-assigned-to-global-visual-status (PVVS) and the Quality-of-Vision (QoV) questionnaire. Standard clinical vision measures (including visual acuity, contrast sensitivity and stereopsis) were gathered in a subset (n = 44, 62%) of the participants. Data were analysed according to the type (R/AR) and level (international/national) of official, and Bonferroni corrections were applied to reduce the risk of type I errors. Adopting a criterion for statistical significance of p<0.01, PVVS scores did not differ between R and AR (p = 0.88), or between national- and international-level officials (p = 0.66). Similarly, QoV scores did not differ between R and AR in frequency (p = 0.50), severity (p = 0.71) or bothersomeness (p = 0.81) of symptoms, or between international-level vs national-level officials for frequency (p = 0.03) or bothersomeness (p = 0.07) of symptoms. However, international-level officials reported less severe symptoms than their national-level counterparts (p<0.01). Overall, 18.3% of officials had either never had an eye examination or, if they had, it was more than 3 years previously. Regarding refractive correction, 4.2% had undergone refractive surgery and

  20. FPGA Implementation of Blue Whale Calls Classifier Using High-Level Programming Tool

    Directory of Open Access Journals (Sweden)

    Mohammed Bahoura

    2016-02-01

    Full Text Available In this paper, we propose a hardware-based architecture for automatic blue whale call classification based on the short-time Fourier transform and a multilayer perceptron neural network. The proposed architecture is implemented on a field programmable gate array (FPGA) using Xilinx System Generator (XSG) and the Nexys-4 Artix-7 FPGA board. This high-level programming tool allows us to design, simulate and execute the compiled design in the Matlab/Simulink environment quickly and easily. Intermediate signals obtained at various steps of the proposed system are presented for typical blue whale calls. Classification performances based on the fixed-point XSG/FPGA implementation are compared to those obtained by the floating-point Matlab simulation, using a representative database of blue whale calls.
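
    As a floating-point reference for the signal chain described (an STFT front end feeding a small MLP), here is a minimal sketch; the sampling rate, layer sizes and random weights are placeholder assumptions, not the fixed-point XSG design.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)

# Short-time Fourier transform front end: turn an audio frame into a
# spectral feature vector (log magnitude averaged over time).
def stft_features(signal, fs=250.0, nperseg=64):
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(Z)).mean(axis=1)

# Tiny multilayer perceptron forward pass with made-up, untrained weights;
# a real classifier would learn W1, W2 from labeled call recordings.
def mlp_forward(x, W1, b1, W2, b2):
    h = np.tanh(W1 @ x + b1)                  # hidden layer
    logits = W2 @ h + b2                      # one logit per call type
    return np.argmax(logits)

x = stft_features(rng.standard_normal(2500))  # placeholder for a hydrophone frame
W1, b1 = rng.standard_normal((16, x.size)), np.zeros(16)
W2, b2 = rng.standard_normal((3, 16)), np.zeros(3)
print("predicted call class:", mlp_forward(x, W1, b1, W2, b2))
```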

  1. Statistics of high-level scene context.

    Science.gov (United States)

    Greene, Michelle R

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level, where scenes are described by the list of objects contained within them; and the structural level, where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics
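
    The "bag of words" level of description lends itself to a compact illustration: a scene is reduced to the list of object labels it contains, and a linear classifier predicts the category from object counts. The sketch below uses toy labels, not the LabelMe data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "bag of objects" scene descriptions and their categories.
scenes = [
    "stove sink cabinet pot window",
    "bed lamp pillow window curtain",
    "car road building sign tree",
    "refrigerator counter sink cabinet",
]
categories = ["kitchen", "bedroom", "street", "kitchen"]

# Count object occurrences per scene, then fit a linear classifier on the counts.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(scenes)
clf = LogisticRegression(max_iter=1000).fit(X, categories)

print(clf.predict(vectorizer.transform(["bed pillow lamp"])))  # -> ['bedroom']
```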

  2. Transforming the existing building stock to high performed energy efficient and experienced architecture

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    The project Sustainable Renovation examines the challenge of the current and future architectural renovation of Danish suburbs which were designed in the period from 1945 to 1973. The research project takes its starting point in the perspectives of energy optimization and the fact that the building...

  3. Establishment of a Digital Knowledge Conversion Architecture Design Learning with High User Acceptance

    Science.gov (United States)

    Wu, Yun-Wu; Weng, Apollo; Weng, Kuo-Hua

    2017-01-01

    The purpose of this study is to design a knowledge conversion and management digital learning system for architecture design learning, helping students to share, extract, use and create their design knowledge through web-based interactive activities based on socialization, internalization, combination and externalization process in addition to…

  4. Progress in the High Level Trigger Integration

    CERN Multimedia

    Cristobal Padilla

    2007-01-01

    During the week from March 19th to March 23rd, the DAQ/HLT group performed another of its technical runs. On this occasion the focus was on integrating the Level 2 and Event Filter triggers, with a much fuller integration of HLT components than had been done previously. For the first time this included complete trigger slices, with a menu to run the selection algorithms for muons, electrons, jets and taus at the Level-2 and Event Filter levels. This Technical run again used the "Pre-Series" system (a vertical slice prototype of the DAQ/HLT system, see the ATLAS e-news January issue for details). Simulated events, provided by our colleagues working in the streaming tests, were pre-loaded into the ROS (Read Out System) nodes. These are the PC's where the data from the detector is stored after coming out of the front-end electronics, the "first part of the TDAQ system" and the interface to the detectors. These events used a realistic beam interaction mixture and had been subjected to a Level-1 selection. The...

  5. The architecture and artistic features of high-rise buildings in USSR and the United States of America during the first half of the twentieth century

    Directory of Open Access Journals (Sweden)

    Golovina Svetlana

    2018-01-01

    Full Text Available The skyscraper is a significant architectural structure in the world's largest cities. The appearance of a skyscraper in a city's architectural composition enhances its status, introduces dynamics into the shape of the city, and modernizes the existing environment. Its architectural structure can have both expressive triumphal forms and ascetic ones. A deep understanding of the architecture of high-rise buildings requires consideration of several criteria. Various approaches can be found in the competitive development of high-rise buildings in Moscow and in US cities in the middle of the twentieth century. In this article we consider how, and on what basis, the architectural decisions of high-rise buildings were formed.

  6. The architecture and artistic features of high-rise buildings in USSR and the United States of America during the first half of the twentieth century

    Science.gov (United States)

    Golovina, Svetlana; Oblasov, Yurii

    2018-03-01

    The skyscraper is a significant architectural structure in the world's largest cities. The appearance of a skyscraper in a city's architectural composition enhances its status, introduces dynamics into the shape of the city, and modernizes the existing environment. Its architectural structure can have both expressive triumphal forms and ascetic ones. A deep understanding of the architecture of high-rise buildings requires consideration of several criteria. Various approaches can be found in the competitive development of high-rise buildings in Moscow and in US cities in the middle of the twentieth century. In this article we consider how, and on what basis, the architectural decisions of high-rise buildings were formed.

  7. A new architecture for low-power high-speed pipelined ADCs using double-sampling and opamp-sharing techniques

    NARCIS (Netherlands)

    Abdinia, S.; Yavari, M.

    2009-01-01

    This paper presents a low-voltage low-power pipelined ADC with 1V supply voltage in a 90nm CMOS process. A new architecture is proposed to reduce the power consumption in high-speed pipelined analog-to-digital converters (ADCs). The presented architecture utilizes a combination of two current

  8. A Heterogeneous Quantum Computer Architecture

    NARCIS (Netherlands)

    Fu, X.; Riesebos, L.; Lao, L.; Garcia Almudever, C.; Sebastiano, F.; Versluis, R.; Charbon, E.; Bertels, K.

    2016-01-01

    In this paper, we present a high level view of the heterogeneous quantum computer architecture as any future quantum computer will consist of both a classical and quantum computing part. The classical part is needed for error correction as well as for the execution of algorithms that contain both

  9. Period analysis at high noise level

    International Nuclear Information System (INIS)

    Kovacs, G.

    1980-01-01

    Analytical expressions are derived for the variances of some types of periodograms due to normally distributed noise present in the data. The equivalence of the Jurkevich and the Warner and Robinson methods is proved. The optimum phase cell number of the Warner and Robinson method is given; this number depends on the data length, signal form and noise level. The results are illustrated by numerical examples. (orig.)

  10. Instrumentation of a Level-1 Track Trigger in the ATLAS detector for the High Luminosity LHC

    CERN Document Server

    Boisvert, V; The ATLAS collaboration

    2012-01-01

    One of the main challenges in particle physics experiments at hadron colliders is to build detector systems that can take advantage of the future luminosity increase that will take place during the next decade. More than 200 simultaneous collisions will be recorded in a single event, which will make the task of extracting the interesting physics signatures harder than ever before. Not all events can be recorded, hence a fast trigger system is required to select events that will be stored for further analysis. In the ATLAS experiment at the Large Hadron Collider (LHC), two different architectures for accommodating a level-1 track trigger are being investigated. The tracker has more readout channels than can be read out in time for the trigger decision. Both architectures aim for a data reduction of 10-100 in order to make readout of data possible in time for a level-1 trigger decision. In the first architecture the data reduction is achieved by reading out only parts of the detector seeded by a high rate pre-trigger ...

  11. Disposal of high-level radioactive wastes

    Energy Technology Data Exchange (ETDEWEB)

    Costello, J M [Australian Atomic Energy Commission Research Establishment, Lucas Heights

    1982-03-01

    The aims and options for the management and disposal of highly radioactive wastes contained in spent fuel from the generation of nuclear power are outlined. The status of developments in reprocessing, waste solidification and geologic burial in major countries is reviewed. Some generic assessments of the potential radiological impacts from geologic repositories are discussed, and a perspective is suggested on risks from radiation.

  12. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.

  13. Experimental demonstration of OpenFlow-enabled media ecosystem architecture for high-end applications over metro and core networks.

    Science.gov (United States)

    Ntofon, Okung-Dike; Channegowda, Mayur P; Efstathiou, Nikolaos; Rashidi Fard, Mehdi; Nejabati, Reza; Hunter, David K; Simeonidou, Dimitra

    2013-02-25

    In this paper, a novel Software-Defined Networking (SDN) architecture is proposed for high-end Ultra High Definition (UHD) media applications. UHD media applications require huge amounts of bandwidth that can only be met with high-capacity optical networks. In addition, there are requirements for control frameworks capable of delivering effective application performance with efficient network utilization. A novel SDN-based Controller that tightly integrates application-awareness with network control and management is proposed for such applications. An OpenFlow-enabled test-bed demonstrator is reported with performance evaluations of advanced online and offline media- and network-aware schedulers.
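
    The scheduling idea lends itself to a compact illustration. The sketch below is a toy, application-aware path selector with invented topology, capacities and demands; it is not the paper's scheduler, whose online/offline variants and OpenFlow integration are far richer.

        # Toy application-aware scheduler: pick a path with enough residual
        # optical capacity for a UHD flow, or defer the flow if none qualifies.
        paths = {  # candidate metro/core paths, residual capacity in Gbps (invented)
            ("studio", "playout"): [(["studio", "m1", "playout"], 120),
                                    (["studio", "m2", "c1", "playout"], 40)],
        }

        def schedule(src, dst, demand_gbps):
            candidates = [(p, cap) for p, cap in paths[(src, dst)] if cap >= demand_gbps]
            if not candidates:
                return None                # defer: no path meets the application requirement
            return max(candidates, key=lambda pc: pc[1])[0]   # least-loaded feasible path

        print(schedule("studio", "playout", 100))   # -> ['studio', 'm1', 'playout']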

  14. High-lying 0+ and 3- levels in 12C

    International Nuclear Information System (INIS)

    Hanna, S.S.; Feldman, W.; Suffert, M.; Kurath, D.

    1982-01-01

    The γ decays of the levels at 17.77 and 18.36 MeV in 12C are studied by proton capture and the assignments of (0+,1) and (3-,1), respectively, are confirmed. The very great strength of the decay of the (0+,1) level to the lower (1+,0) level at 12.71 MeV is consistent with a spin- and isospin-flip deuteronlike transition. The strong decay of the (3-,1) level to the lower (3-,0) level at 9.64 MeV is fairly typical of an analog-to-antianalog transition. The γ-decay widths of these levels are compared with shell-model calculations

  15. Level-2 Milestone 5588: Deliver Strategic Plan and Initial Scalability Assessment by Advanced Architecture and Portability Specialists Team

    Energy Technology Data Exchange (ETDEWEB)

    Draeger, Erik W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-30

    This report documents the fact that the work in creating a strategic plan and beginning customer engagements has been completed. The description of the milestone is: The newly formed advanced architecture and portability specialists (AAPS) team will develop a strategic plan to meet the goals of 1) sharing knowledge and experience with code teams to ensure that ASC codes run well on new architectures, and 2) supplying skilled computational scientists to put the strategy into practice. The plan will be delivered to ASC management in the first quarter. By the fourth quarter, the team will identify their first customers within PEM and IC, perform an initial assessment of scalability and performance bottlenecks for next-generation architectures, and embed AAPS team members with customer code teams to assist with initial portability development within standalone kernels or proxy applications.

  16. A data acquisition architecture for the SSC

    International Nuclear Information System (INIS)

    Partridge, R.

    1990-01-01

    An SSC data acquisition architecture applicable to high-pT detectors is described. The architecture is based upon a small set of design principles that were chosen to simplify communication between data acquisition elements while providing the required level of flexibility and performance. The architecture features an integrated system for data collection, event building, and communication with a large processing farm. The interface to the front end electronics system is also discussed. A set of design parameters is given for a data acquisition system that should meet the needs of high-pT detectors at the SSC

  17. Architecture and Intelligentsia

    Directory of Open Access Journals (Sweden)

    Alexander Rappaport

    2015-08-01

    Full Text Available The article examines the intellectual and cultural level of architecture and its important functions in the social process. Historical analysis shows a constant decline in the intellectual level of the profession, as a reaction to radical changes in its social functions and mass scale, leading to the degradation of individual critical reflection and the growing dependence of architecture on political and economic bureaucracy.

  18. Architecture and Intelligentsia

    OpenAIRE

    Alexander Rappaport

    2015-01-01

    The article examines the intellectual and cultural level of architecture and its important functions in the social process. Historical analysis shows a constant decline in the intellectual level of the profession, as a reaction to radical changes in its social functions and mass scale, leading to the degradation of individual critical reflection and the growing dependence of architecture on political and economic bureaucracy.

  19. High-order chromatin architecture shapes the landscape of chromosomal alterations in cancer

    Science.gov (United States)

    Fudenberg, Geoffrey; Getz, Gad; Meyerson, Matthew; Mirny, Leonid

    2012-02-01

    The rapid growth of cancer genome structural information provides an opportunity for a better understanding of the mutational mechanisms of genomic alterations in cancer and the forces of selection that act upon them. Here we test the evidence for two major forces, spatial chromosome structure and purifying (or negative) selection, that shape the landscape of somatic copy-number alterations (SCNAs) in cancer (Beroukhim et al, 2010). Using a maximum likelihood framework we compare SCNA maps and three-dimensional genome architecture as determined by genome-wide chromosome conformation capture (HiC) and described by the proposed fractal-globule (FG) model (Lieberman-Aiden and Van Berkum et al, 2009). This analysis provides evidence that the distribution of chromosomal alterations in cancer is spatially related to three-dimensional genomic architecture and additionally suggests that purifying selection as well as positive selection shapes the landscape of SCNAs during somatic evolution of cancer cells.
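
    A maximum-likelihood comparison of this kind can be sketched in a few lines. The toy below scores SCNA lengths against the fractal-globule contact probability P(s) ~ 1/s and fits a single scale parameter; the data, the cutoff smax and the search grid are invented, and the authors' actual likelihood over two-dimensional SCNA end-point maps is substantially more involved.

        import numpy as np

        lengths = np.array([2e5, 5e5, 1e6, 3e6, 8e6])   # SCNA lengths in bp (toy data)

        def neg_log_likelihood(smin, smax=1e8):
            # P(s) ~ 1/s on [smin, smax]: normalized density is 1 / (s * log(smax/smin))
            if smin >= lengths.min():
                return np.inf               # outside the support of the model
            return -np.sum(np.log(1.0 / (lengths * np.log(smax / smin))))

        grid = np.logspace(4, 5.5, 50)      # candidate scale parameters (assumed range)
        best = grid[np.argmin([neg_log_likelihood(s) for s in grid])]
        print(f"ML scale parameter: {best:.3g} bp")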

  20. High level radiation dosimetry in biomedical research

    International Nuclear Information System (INIS)

    Inada, Tetsuo

    1979-01-01

    The physical and biological dosimetry relating to cancer therapy with radiation was the first topic taken up in the recent intercomparison on high-LET radiation therapy under the Japan-US cancer research cooperative study. Biological dosimetry, large doses in biomedical research, high dose rates in biomedical research, and practical dosimeters for pulsed neutrons or protons are outlined, together with their main development history and the characteristics obtained in the related experiments. The clinical neutron facilities in the US and Japan involved in the intercomparison are presented. Concerning the experimental results of the dosimeters, the following are shown: the relation between the R.B.E. (compared with Chiba, the cyclotron at the National Institute of Radiological Sciences) and the energy of the deuterons or protons used for neutron production; the survival curves of three cultured cell lines derived from human cancers after irradiation with 250 keV X-rays, cyclotron neutrons of about 13 MeV, and Van de Graaff neutrons of about 2 MeV; the hatchability of dry Artemia eggs at several depths in an absorber stack irradiated by a 60 MeV proton beam at 40, 120 and 200 krad; the peak skin reaction of mouse legs observed at various sets of average and instantaneous dose rates; and the peak skin reaction versus three instantaneous dose rates at a fixed average dose rate of 7,300 rad/min. These data were evaluated numerically and with respect to their physical meaning, from the viewpoint of the fundamentals of cancer therapy, comparing the Japanese measurements with the US data. The discussion record on the high dose rate effect of low-LET particles on biological substances, among other topics, is appended. (Nakai, Y.)

  1. Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture

    Science.gov (United States)

    Weber, Raphael; Rettberg, Achim

    A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization we deeply analyzed the architecture and the AES implementation to identify the most costly logic elements. We propose to merge certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.

  2. Energy levels of highly ionized atoms

    International Nuclear Information System (INIS)

    Martin, W.C.

    1981-01-01

    Most of the data reviewed here were derived from spectra photographed in the wavelength range from 600 A down to about 20 A (approx. 20 to 600 eV). Measurements with uncertainties less than 0.001 A relative to appropriate standard wavelengths can be made with high-resolution diffraction-grating spectroscopy over most of the vacuum-ultraviolet region. Although this uncertainty corresponds to relative errors of 1 part per million (ppm) at 1000 A and 20 ppm at 50 A, measurements with uncertainties smaller than 0.001 A would generally require more effort at the shorter wavelengths, mainly because of the sparsity of accurate standards. Even where sufficiently numerous and accurate standards are available, the accuracy of measurements of the spectra of very high temperature plasmas is limited by Doppler broadening and, in some cases, other plasma effects. Several sources of error combine to give total estimated errors ranging from 10 to 1000 ppm for the experimental wavelengths of interest here. It will be seen, however, that with the possible exception of a few fine-structure splittings the experimental errors are small compared to the errors of the relevant theoretical calculations

  3. The High Level Vibration Test Program

    International Nuclear Information System (INIS)

    Hofmayer, C.H.; Curreri, J.R.; Park, Y.J.; Kato, W.Y.; Kawakami, S.

    1989-01-01

    As part of cooperative agreements between the United States and Japan, tests have been performed on the seismic vibration table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center (NUPEC) in Japan. The objective of the test program was to use the NUPEC vibration table to drive large diameter nuclear power piping to substantial plastic strain with an earthquake excitation and to compare the results with state-of-the-art analysis of the problem. The test model was designed by modifying the 1/2.5 scale model of the PWR primary coolant loop. Elastic and inelastic seismic response behavior of the test model was measured in a number of test runs with an increasing excitation input level up to the limit of the vibration table. In the maximum input condition, large dynamic plastic strains were obtained in the piping. Crack initiation was detected following the second maximum excitation run. The test model was subjected to a maximum acceleration well beyond what nuclear power plants are designed to withstand. This paper describes the overall plan, input motion development, test procedure, test results and comparisons with pre-test analysis. 4 refs., 16 figs., 2 tabs

  4. The High Level Vibration Test program

    International Nuclear Information System (INIS)

    Hofmayer, C.H.; Curreri, J.R.; Park, Y.J.; Kato, W.Y.; Kawakami, S.

    1990-01-01

    As part of cooperative agreements between the United States and Japan, tests have been performed on the seismic vibration table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center (NUPEC) in Japan. The objective of the test program was to use the NUPEC vibration table to drive large diameter nuclear power piping to substantial plastic strain with an earthquake excitation and to compare the results with state-of-the-art analysis of the problem. The test model was designed by modifying the 1/2.5 scale model of the pressurized water reactor primary coolant loop. Elastic and inelastic seismic response behavior of the test model was measured in a number of test runs with an increasing excitation input level up to the limit of the vibration table. In the maximum input condition, large dynamic plastic strains were obtained in the piping. Crack initiation was detected following the second maximum excitation run. The test model was subjected to a maximum acceleration well beyond what nuclear power plants are designed to withstand. This paper describes the overall plan, input motion development, test procedure, test results and comparisons with pre-test analysis

  5. Heat transfer in high-level waste management

    International Nuclear Information System (INIS)

    Dickey, B.R.; Hogg, G.W.

    1979-01-01

    Heat transfer in the storage of high-level liquid wastes, calcining of radioactive wastes, and storage of solidified wastes are discussed. Processing and storage experience at the Idaho Chemical Processing Plant are summarized for defense high-level wastes; heat transfer in power reactor high-level waste processing and storage is also discussed

  6. Managing commercial high-level radioactive waste

    International Nuclear Information System (INIS)

    1983-01-01

    The article is a summary of issues raised during US Congress deliberations on nuclear waste policy legislation. It is suggested that, if history is not to repeat itself, and the current stalemate on nuclear waste is not to continue, a comprehensive policy is needed that addresses the near-term problems of interim storage as part of an explicit and credible program for dealing with the longer term problem of developing a final isolation system. Such a policy must: 1) adequately address the concerns and win the support of all the major interested parties, and 2) adopt a conservative technical and institutional approach - one that places high priority on avoiding the problems that have repeatedly beset the program in the past. It is concluded that a broadly supported comprehensive policy would contain three major elements, each designed to address one of the key questions concerning Federal credibility: commitment in law to the goals of a comprehensive policy; credible institutional mechanisms for meeting goals; and credible measures for addressing the specific concerns of the states and the various publics. Such a policy is described in detail. (Auth.)

  7. Bumblebee pupae contain high levels of aluminium.

    Science.gov (United States)

    Exley, Christopher; Rotheray, Ellen; Goulson, David

    2015-01-01

    The causes of declines in bees and other pollinators remain an on-going debate. While recent attention has focussed upon pesticides, other environmental pollutants have largely been ignored. Aluminium is the most significant environmental contaminant of recent times and we speculated that it could be a factor in pollinator decline. Herein we have measured the content of aluminium in bumblebee pupae taken from naturally foraging colonies in the UK. Individual pupae were acid-digested in a microwave oven and their aluminium content determined using transversely heated graphite furnace atomic absorption spectrometry. Pupae were heavily contaminated with aluminium, giving values between 13.4 and 193.4 μg/g dry wt. and a mean (SD) value of 51.0 (33.0) μg/g dry wt. for the 72 pupae tested. Mean aluminium content was shown to be a significant negative predictor of average pupal weight in colonies. While no other statistically significant relationships were found relating aluminium to bee or colony health, the actual content of aluminium in pupae is extremely high and demonstrates significant exposure to aluminium. Bees rely heavily on cognitive function and aluminium is a known neurotoxin with links, for example, to Alzheimer's disease in humans. The significant contamination of bumblebee pupae by aluminium raises the intriguing spectre of cognitive dysfunction playing a role in their population decline.

  8. High-resolution microwave diagnostics of architectural components by particle swarm optimization

    Science.gov (United States)

    Genovesi, Simone; Salerno, Emanuele; Monorchio, Agostino; Manara, Giuliano

    2010-05-01

    We present a very simple monostatic setup for coherent multifrequency microwave measurements, and an optimization procedure to reconstruct high-resolution permittivity profiles of layered objects from complex reflection coefficients. This system is capable of precisely locating internal inhomogeneities in dielectric bodies, and can be applied to on-site diagnosis of architectural components. While limiting the imaging possibilities to 1D permittivity profiles, the monostatic geometry has an important advantage over multistatic tomographic systems, since these are normally confined to laboratories, and on-site applications are difficult to devise. The sensor is a transmitting-receiving microwave antenna, and the complex reflection coefficients are measured at a number of discrete frequencies over the system passband by using a general-purpose vector network analyzer. A dedicated instrument could also be designed, thus realizing an inexpensive, easy-to-handle system. The profile reconstruction algorithm is based on the optimization of an objective functional that includes a data-fit term and a regularization term. The first consists in the norm of the complex vector difference between the measured data and the data computed by a forward solver from the current estimate of the profile function. The regularization term enforces a piecewise smooth model for the solution, based on two 1D interacting Markov random fields: the intensity field, which models the continuous permittivity values, and the binary line field, which accounts for the possible presence of discontinuities in the profile. The data-fit and the regularization terms are balanced through a tunable regularization coefficient. By virtue of this prior model, the final result is robust against noise, and overcomes the usual limitations in spatial resolution induced by the wavelengths of the probing radiation. Indeed, the accuracy in the location of the discontinuities is only limited by the system noise and
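
    To make the functional concrete, the following sketch minimizes a data-fit term plus a roughness penalty with a bare-bones particle swarm optimizer. The forward model, frequencies and penalty below are placeholders standing in for the paper's full-wave solver and Markov-random-field prior; only the overall structure of the optimization is intended to match.

        import numpy as np

        rng = np.random.default_rng(0)
        freqs = np.linspace(2e9, 8e9, 25)                         # probing frequencies (assumed)
        true_profile = np.array([1.0, 4.0, 4.0, 9.0, 9.0, 9.0])   # relative permittivity per layer

        def forward(profile):
            # Toy stand-in for the electromagnetic forward solver.
            return np.array([np.sum(profile * np.cos(2 * np.pi * f * 1e-10
                             * np.arange(1, len(profile) + 1))) for f in freqs])

        measured = forward(true_profile) + 0.01 * rng.standard_normal(len(freqs))

        def objective(profile, lam=0.1):
            data_fit = np.linalg.norm(measured - forward(profile)) ** 2
            roughness = np.sum(np.abs(np.diff(profile)))          # stand-in for the MRF prior
            return data_fit + lam * roughness

        # Minimal particle swarm loop
        n_particles, n_iter, dim = 30, 200, len(true_profile)
        pos = rng.uniform(1.0, 10.0, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 1.0, 10.0)
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("estimated profile:", np.round(gbest, 2))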

  9. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    Science.gov (United States)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

    Topics covered (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance (3) Strategic Elements (3a) Architectural Principles (3b) Architecture Board (3c) Architecture Compliance (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level

  10. Using Low-Level Architectural Features for Configuration InfoSec in a General-Purpose Self-Configurable System

    OpenAIRE

    Nicholas J. Macias; Peter M. Athanas

    2009-01-01

    Unique characteristics of biological systems are described, and similarities are made to certain computing architectures. The security challenges posed by these characteristics are discussed. A method of securely isolating portions of a design using introspective capabilities of a fine-grain self-configurable device is presented. Experimental results are discussed, and plans for future work are given.

  11. Design of a highly parallel board-level-interconnection with 320 Gbps capacity

    Science.gov (United States)

    Lohmann, U.; Jahns, J.; Limmer, S.; Fey, D.; Bauer, H.

    2012-01-01

    A parallel board-level interconnection design is presented consisting of 32 channels, each operating at 10 Gbps. The hardware uses available optoelectronic components (VCSEL, TIA, pin-diodes) and a combination of planar-integrated free-space optics, fiber bundles and available MEMS components, like the DMD™ from Texas Instruments. As a specific feature, we present a new modular inter-board interconnect, realized by 3D fiber-matrix connectors. The performance of the interconnect is evaluated with regard to optical properties and power consumption. Finally, we discuss the application of the interconnect for strongly distributed system architectures, as, for example, in high performance embedded computing systems and data centers.

  12. On the Development and Application of High Data Rate Architecture (HiDRA) in Future Space Networks

    Science.gov (United States)

    Hylton, Alan; Raible, Daniel; Clark, Gilbert

    2017-01-01

    Historically, space missions have been severely constrained by their ability to downlink the data they have collected. These constraints are a result of relatively low link rates on the spacecraft as well as limitations on the time during which data can be sent. As part of a coherent strategy to address existing limitations and get more data to the ground more quickly, the Space Communications and Navigation (SCaN) program has been developing an architecture for a future solar system Internet. The High Data Rate Architecture (HiDRA) project is designed to fit into such a future SCaN network. HiDRA's goal is to describe a general packet-based networking capability which can be used to provide assets with efficient networking capabilities while simultaneously reducing the capital costs and operational costs of developing and flying future space systems. Along these lines, this paper begins by reviewing various characteristics of modern satellite design as well as relevant characteristics of emerging technologies (such as free-space optical links capable of working at 100+ Gbps). Next, the paper describes HiDRA's design, and how the system is able to both integrate and support the operation of not only today's high-rate systems, but also the high-rate systems likely to be found in the future. This section also explores both existing and future networking technologies, such as the Delay Tolerant Networking (DTN) protocols (RFC 4838, RFC 5050), and explains how HiDRA supports them. Additionally, this section explores how HiDRA is used for scheduling data movement through both proactive and reactive link management. After this, the paper moves on to explore a reference implementation of HiDRA. This implementation is currently being realized based on a Field Programmable Gate Array (FPGA) memory and interface controller that is itself controlled by a local computer running DTN software. Next, this paper explores HiDRA's natural evolution, which includes an
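
    The store-and-forward behaviour with scheduled contacts can be illustrated with a toy queue model. All numbers below (contact windows, rates, bundle sizes) are invented, and real DTN scheduling involves routing, priorities and custody transfer that this sketch omits.

        # Queue bundles in storage and release them only during known contact
        # windows: the basic proactive link management described above.
        from collections import deque

        contacts = [(100, 160, 10e9), (400, 430, 100e9)]        # (start_s, end_s, rate_bps), assumed
        queue = deque([("obs-001", 8e11), ("obs-002", 3e11)])   # (bundle_id, size_bits), assumed

        t = 0.0
        for start, end, rate in contacts:
            t = max(t, start)
            while queue and t < end:
                bundle, size = queue[0]
                sendable = min(size, (end - t) * rate)
                t += sendable / rate
                if sendable == size:
                    queue.popleft()
                    print(f"{bundle} fully sent by t={t:.1f}s")
                else:
                    queue[0] = (bundle, size - sendable)        # partial; resume at next contact
        print(f"{len(queue)} bundle(s) held in storage for future contacts")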

  13. Mesoporous titanium dioxide (TiO2) with hierarchically 3D dendrimeric architectures: formation mechanism and highly enhanced photocatalytic activity.

    Science.gov (United States)

    Li, Xiao-Yun; Chen, Li-Hua; Rooke, Joanna Claire; Deng, Zhao; Hu, Zhi-Yi; Wang, Shao-Zhuan; Wang, Li; Li, Yu; Krief, Alain; Su, Bao-Lian

    2013-03-15

    Mesoporous TiO2 with a hierarchically 3D dendrimeric nanostructure comprised of nanoribbon building units has been synthesized via a spontaneous self-formation process from various titanium alkoxides. These hierarchically 3D dendrimeric architectures can be obtained by a very facile, template-free method, by simply dropping a titanium butoxide precursor into methanol solution. The novel configuration of the mesoporous TiO2 nanostructure in nanoribbon building units yields a high surface area. The calcined samples show significantly enhanced photocatalytic activity and degradation rates owing to the mesoporosity and their improved crystallinity after calcination. Furthermore, the 3D dendrimeric architectures can be preserved after phase transformation from amorphous TiO2 to anatase or rutile, which occurs during calcination. In addition, the spontaneous self-formation of mesoporous TiO2 with hierarchically 3D dendrimeric architectures from the hydrolysis and condensation of titanium butoxide in methanol has been followed by in situ optical microscopy (OM), revealing how the hierarchically 3D dendrimeric nanostructures form. Moreover, mesoporous TiO2 nanostructures with similar hierarchically 3D dendrimeric architectures can also be obtained using other titanium alkoxides. The porosities and nanostructures of the resultant products were characterized by SEM, TEM, XRD, and N2 adsorption-desorption measurements. The present work provides a facile and reproducible method for the synthesis of novel mesoporous TiO2 nanoarchitectures, which in turn could herald the fabrication of more efficient photocatalysts. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.
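
    For readers unfamiliar with the method, the minimal sketch below shows the two interleaved discretizations in their simplest 1D electrostatic form: particles are scattered to a grid, the field is solved on the grid, and the field is gathered back to push the particles. All parameters are illustrative, and none of the GTC-specific decompositions discussed here appear.

        import numpy as np

        ng, n_p, L, dt, steps = 64, 10000, 2 * np.pi, 0.1, 100
        dx = L / ng
        rng = np.random.default_rng(1)
        x = rng.uniform(0, L, n_p)                # particle positions
        v = rng.standard_normal(n_p) * 0.1        # particle velocities

        for _ in range(steps):
            # Charge deposition (cloud-in-cell): scatter particles to the grid
            g = x / dx
            i0 = np.floor(g).astype(int) % ng
            w1 = g - np.floor(g)
            rho = np.bincount(i0, 1 - w1, ng) + np.bincount((i0 + 1) % ng, w1, ng)
            rho = rho / n_p * ng - 1.0            # neutralizing background

            # Grid-based field solve: periodic Poisson equation via FFT
            k = np.fft.fftfreq(ng, d=dx) * 2 * np.pi
            rho_k = np.fft.fft(rho)
            phi_k = np.zeros_like(rho_k)
            phi_k[1:] = rho_k[1:] / k[1:] ** 2
            E = np.real(np.fft.ifft(-1j * k * phi_k))

            # Gather field to particles and push (unit charge-to-mass ratio of -1)
            Ep = (1 - w1) * E[i0] + w1 * E[(i0 + 1) % ng]
            v -= Ep * dt
            x = (x + v * dt) % L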

  15. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores

  16. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    Science.gov (United States)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

    There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years and this trend may continue for the near future. However, it is a well known fact that there are major obstacles, i.e., the physical limitation of feature size reduction and the ever increasing cost of foundries, that would prevent the long term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum dot-based computing, DNA based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11 - 10^12 per cm^2), low power consumption (no transfer of current) and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA

  17. Evaluation of a Candidate Trace Contaminant Control Subsystem Architecture: The High Velocity, Low Aspect Ratio (HVLA) Adsorption Process

    Science.gov (United States)

    Kayatin, Matthew J.; Perry, Jay L.

    2017-01-01

    Traditional gas-phase trace contaminant control adsorption process flow is constrained in order to maintain high single-pass contaminant adsorption efficiency. Specifically, the bed superficial velocity is controlled to limit the adsorption mass-transfer zone length relative to the physical adsorption bed; this is aided by the traditional high-aspect-ratio bed design. Through operation in this manner, most contaminants, including those with relatively high potential energy, are readily adsorbed. A consequence of this operational approach, however, is a limited available operational flow margin. By considering a paradigm shift in adsorption architecture design and operations, in which flows of high superficial velocity are treated by low-aspect-ratio sorbent beds, the range of well-adsorbed contaminants becomes limited, but the process flow is increased such that contaminant leaks or emerging contaminants of interest may be effectively controlled. To this end, the high velocity, low aspect ratio (HVLA) adsorption process architecture was demonstrated against a trace contaminant load representative of the International Space Station atmosphere. Two HVLA concept packaging designs (linear flow and radial flow) were tested. The performance of each design was evaluated and compared against computer simulation. Utilizing the HVLA process, long and sustained control of heavy organic contaminants was demonstrated.

  18. Hubs of Anticorrelation in High-Resolution Resting-State Functional Connectivity Network Architecture.

    Science.gov (United States)

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Cabanban, Romeo; Crosson, Bruce A

    2015-06-01

    A major focus of brain research recently has been to map the resting-state functional connectivity (rsFC) network architecture of the normal brain and pathology through functional magnetic resonance imaging. However, the phenomenon of anticorrelations in resting-state signals between different brain regions has not been adequately examined. The preponderance of studies on resting-state fMRI (rsFMRI) has either ignored anticorrelations in rsFC networks or adopted methods in data analysis which have rendered anticorrelations in rsFC networks uninterpretable. The few studies that have examined anticorrelations in rsFC networks using conventional methods have found anticorrelations to be weak in strength and not very reproducible across subjects. Anticorrelations in rsFC network architecture could reflect mechanisms that subserve a number of important brain processes. In this preliminary study, we examined the properties of anticorrelated rsFC networks by systematically focusing on negative cross-correlation coefficients (CCs) among rsFMRI voxel time series across the brain with graph theory-based network analysis. A number of methods were implemented to enhance the neuronal specificity of resting-state functional connections that yield negative CCs, although at the cost of decreased sensitivity. Hubs of anticorrelation were seen in a number of cortical and subcortical brain regions. Examination of the anticorrelation maps of these hubs indicated that negative CCs in rsFC network architecture highlight a number of regulatory interactions between brain networks and regions, including reciprocal modulations, suppression, inhibition, and neurofeedback.
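
    A minimal version of the hub analysis can be expressed directly on the cross-correlation matrix. The sketch below uses random stand-in time series and an assumed threshold; the paper's pipeline additionally applies the preprocessing steps that make negative CCs neuronally interpretable.

        # Find "hubs of anticorrelation" by counting, for each node, how many
        # strong negative cross-correlations it has with the rest of the brain.
        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes, n_timepoints = 500, 240
        ts = rng.standard_normal((n_nodes, n_timepoints))   # stand-in rsFMRI time series

        cc = np.corrcoef(ts)                 # node-by-node cross-correlation matrix
        np.fill_diagonal(cc, 0.0)

        threshold = -0.3                     # assumed cutoff for a "strong" anticorrelation
        neg_degree = np.sum(cc < threshold, axis=1)

        hubs = np.argsort(neg_degree)[::-1][:10]
        print("candidate anticorrelation hubs (node indices):", hubs)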

  19. Accuracy Test of Software Architecture Compliance Checking Tools – Test Instruction

    NARCIS (Netherlands)

    Pruijt, Leo; van der Werf, J.M.E.M.|info:eu-repo/dai/nl/36950674X; Brinkkemper., Sjaak|info:eu-repo/dai/nl/07500707X

    2015-01-01

    Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule violating dependencies between modules. Accurate tool

  20. Genome architecture enables local adaptation of Atlantic cod despite high connectivity

    DEFF Research Database (Denmark)

    Barth, Julia M I; Berg, Paul R; Jonsson, Per R.

    2017-01-01

    Adaptation to local conditions is a fundamental process in evolution; however, mechanisms maintaining local adaptation despite high gene flow are still poorly understood. Marine ecosystems provide a wide array of diverse habitats that frequently promote ecological adaptation even in species characterized by strong levels of gene flow. As one example, populations of the marine fish Atlantic cod (Gadus morhua) are highly connected due to immense dispersal capabilities but nevertheless show local adaptation in several key traits. By combining population genomic analyses based on 12K single-nucleotide polymorphisms with larval dispersal patterns inferred using a biophysical ocean model, we show that Atlantic cod individuals residing in sheltered estuarine habitats of Scandinavian fjords mainly belong to offshore oceanic populations with considerable connectivity between these diverse ecosystems. Nevertheless...

  1. The design of a fast Level-1 track trigger for the high luminosity upgrade of ATLAS.

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00413032; The ATLAS collaboration

    2016-01-01

    The high-luminosity upgrade of the LHC will increase the rate of the proton-proton collisions by approximately a factor of 5 with respect to the initial LHC design. The ATLAS experiment will be upgraded accordingly, increasing its robustness and selectivity in the expected high radiation environment. In particular, the earliest, hardware-based ATLAS trigger stage ("Level 1") will require higher rejection power, while still maintaining efficient selection on many various physics signatures. The key ingredient is the possibility of extracting tracking information from the brand new full-silicon detector and using it in the process. While fascinating, this solution poses a big challenge in the choice of the architecture, due to the reduced latency available at this trigger level (a few tens of microseconds) and the high expected working rates (order of MHz). In this paper, we review the design possibilities of such a system in a potential new trigger and readout architecture, and present the performance resulting from a d...

  2. An Intelligent Agent based Architecture for Visual Data Mining

    OpenAIRE

    Hamdi Ellouzi; Hela Ltifi; Mounir Ben Ayed

    2016-01-01

    The aim of this paper is to present an intelligent architecture for Decision Support Systems (DSS) based on visual data mining. This architecture applies multi-agent technology to facilitate the design and development of DSS in complex and dynamic environments. Multi-agent systems add a high level of abstraction. To validate the proposed architecture, it is implemented to develop a distributed visual data mining based DSS to predict the occurrence of nosocomial infections in intensive care units. Th...

  3. High performance 3D neutron transport on peta scale and hybrid architectures within APOLLO3 code

    International Nuclear Information System (INIS)

    Jamelot, E.; Dubois, J.; Lautard, J-J.; Calvin, C.; Baudron, A-M.

    2011-01-01

    APOLLO3 code is a common project of CEA, AREVA and EDF for the development of a new generation system for core physics analysis. We present here the parallelization of two deterministic transport solvers of APOLLO3: MINOS, a simplified 3D transport solver on structured Cartesian and hexagonal grids, and MINARET, a transport solver based on triangular meshes in 2D and prismatic ones in 3D. We used two different techniques to accelerate MINOS: a domain decomposition method, combined with an accelerated algorithm using GPU. The domain decomposition is based on the Schwarz iterative algorithm, with Robin boundary conditions to exchange information. The Robin parameters influence the convergence and we detail how we optimized the choice of these parameters. MINARET parallelization is based on angular direction calculation using explicit message passing. Fine-grain parallelization is also available for each angular direction using shared memory multithreaded acceleration. Many performance results are presented on massively parallel architectures using more than 10^3 cores and on hybrid architectures using some tens of GPUs. This work contributes to the HPC development in reactor physics at the CEA Nuclear Energy Division. (author)
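
    The Schwarz iteration underlying the MINOS parallelization can be illustrated on a 1D Poisson problem. The toy below uses the simpler overlapping variant with Dirichlet data exchange rather than the Robin transmission conditions (and optimized Robin parameters) described in the paper.

        # Overlapping Schwarz sweeps on -u'' = f over [0, 1] with u(0) = u(1) = 0.
        import numpy as np

        n = 101
        x = np.linspace(0, 1, n)
        h = x[1] - x[0]
        f = np.ones(n)
        u = np.zeros(n)

        left, right = (0, 60), (40, 100)     # two overlapping subdomains (index ranges)

        def solve_subdomain(lo, hi, ua, ub):
            # Direct tridiagonal solve with Dirichlet boundary values ua, ub
            m = hi - lo - 1
            A = (np.diag(np.full(m, 2.0)) + np.diag(np.full(m - 1, -1.0), 1)
                 + np.diag(np.full(m - 1, -1.0), -1))
            b = h * h * f[lo + 1:hi]
            b[0] += ua
            b[-1] += ub
            return np.linalg.solve(A, b)

        for _ in range(50):                  # Schwarz sweeps, exchanging interface values
            u[left[0] + 1:left[1]] = solve_subdomain(*left, u[left[0]], u[left[1]])
            u[right[0] + 1:right[1]] = solve_subdomain(*right, u[right[0]], u[right[1]])

        exact = 0.5 * x * (1 - x)            # analytic solution for f = 1
        print("max error:", np.abs(u - exact).max())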

  4. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    Science.gov (United States)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (extended MHD including two-fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making the computer systems more and more difficult to program in order to get maximum performance. In this paper, we will present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right hand side (r.h.s.), from the actual implementation of efficiently evaluating it. An automatic code generator is used to describe the r.h.s. in a quasi-symbolic form while leaving the translation into efficient and parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
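
    The separation of the symbolic r.h.s. from its efficient evaluation can be mimicked with off-the-shelf tools. The fragment below is not the OpenGGCM generator; it merely shows the idea using SymPy's C-code printer on a centered-difference advection term.

        import sympy as sp

        u, dx, rho_l, rho_r = sp.symbols('u dx rho_l rho_r')

        # Quasi-symbolic r.h.s.: centered difference for d(rho)/dt = -u * d(rho)/dx
        rhs = -u * (rho_r - rho_l) / (2 * dx)

        print(sp.ccode(rhs, assign_to='drho_dt'))
        # prints something like: drho_dt = -1.0/2.0*u*(-rho_l + rho_r)/dx;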

  5. Neural codes of seeing architectural styles

    OpenAIRE

    Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.

    2017-01-01

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding sugges...

  6. Neural codes of seeing architectural styles.

    Science.gov (United States)

    Choo, Heeyoung; Nasar, Jack L; Nikrahei, Bardia; Walther, Dirk B

    2017-01-10

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.

  7. A High-Voltage Level Tolerant Transistor Circuit

    NARCIS (Netherlands)

    Annema, Anne J.; Geelen, Godefridus Johannes Gertrudis Maria

    2001-01-01

    A high-voltage level tolerant transistor circuit, comprising a plurality of cascoded transistors, including a first transistor (T1) operatively connected to a high-voltage level node (3) and a second transistor (T2) operatively connected to a low-voltage level node (2). The first transistor (T1)

  8. 40 CFR 227.30 - High-level radioactive waste.

    Science.gov (United States)

    2010-07-01

    High-level radioactive waste means the aqueous waste resulting from the operation of the first cycle solvent extraction system, or equivalent, and the concentrated waste from...

  9. NSLS-II High Level Application Infrastructure And Client API Design

    International Nuclear Information System (INIS)

    Shen, G.; Yang, L.; Shroff, K.

    2011-01-01

    The beam commissioning software framework of the NSLS-II project adopts a client/server based architecture to replace the more traditional monolithic high level application approach. It is an open structure platform, and we try to provide a narrow API set for client applications. With this narrow API, existing applications developed in different languages under different architectures can be ported to our platform with small modifications. This paper describes the system infrastructure design, the client API and system integration, and the latest progress. As a new 3rd generation synchrotron light source with ultra low emittance, there are new requirements and challenges in controlling and manipulating the beam. A use case study and a theoretical analysis have been performed to clarify the requirements and challenges for the high level application (HLA) software environment. To satisfy those requirements and challenges, an adequate system architecture for the software framework is critical for beam commissioning, study and operation. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low level hardware processing from numerical algorithm computing, physics modelling, data manipulation, plotting, and error handling. However, none of the existing approaches can satisfy the requirements. A new design has been proposed by introducing service oriented architecture technology. The HLA is a combination of tools for accelerator physicists and operators, the same as in the traditional approach. In NSLS-II, these include monitoring applications and control routines. A scripting environment is very important for the latter part of the HLA, and both parts are designed based on a common set of APIs. Physicists and operators are the users of these APIs, while control system engineers and a few accelerator physicists are the developers of these APIs. With our client/server mode based approach, we leave how to retrieve information to the
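
    A narrow client API of the kind described might look like the following sketch. The service URL, endpoints and payloads are invented for illustration and are not the actual NSLS-II interfaces; the point is that physics computation and hardware access stay behind the server.

        import json
        from urllib.request import urlopen

        SERVICE_URL = "http://hla-server.example.org"   # hypothetical service location

        def get_lifetime():
            # Fetch a machine parameter computed/measured on the server side
            with urlopen(f"{SERVICE_URL}/v1/measurement/lifetime") as resp:
                return json.load(resp)["value"]

        def set_orbit_bump(plane, position, amplitude):
            # Request a high-level action; the physics stays on the server
            payload = json.dumps({"plane": plane, "s": position, "amp": amplitude}).encode()
            with urlopen(f"{SERVICE_URL}/v1/control/orbit-bump", data=payload) as resp:
                return json.load(resp)["status"]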

  10. OS Friendly Microprocessor Architecture

    Science.gov (United States)

    2017-04-01

    We present an introduction to the patented Operating System Friendly Microprocessor Architecture (OSFA). The software framework to support the hardware-level security features is currently patent pending. (Jungwirth P, inventor; US Army, assignee. OS Friendly Microprocessor Architecture. United States Patent 9122610. 2015 Sep.)

  11. Genetic dissection of maize plant architecture with an ultra-high density bin map based on recombinant inbred lines.

    Science.gov (United States)

    Zhou, Zhiqiang; Zhang, Chaoshu; Zhou, Yu; Hao, Zhuanfang; Wang, Zhenhua; Zeng, Xing; Di, Hong; Li, Mingshun; Zhang, Degui; Yong, Hongjun; Zhang, Shihuang; Weng, Jianfeng; Li, Xinhai

    2016-03-03

    Plant architecture attributes, such as plant height, ear height, and internode number, have played an important role in the historical increases in grain yield, lodging resistance, and biomass in maize (Zea mays L.). Analyzing the genetic basis of variation in plant architecture using high density QTL mapping will be of benefit for the breeding of maize for many traits. However, the low density of molecular markers in existing genetic maps has limited the efficiency and accuracy of QTL mapping. Genotyping by sequencing (GBS) is an improved strategy for addressing a complex genome via next-generation sequencing technology. GBS has been a powerful tool for SNP discovery and high-density genetic map construction. The creation of ultra-high density genetic maps using large populations of advanced recombinant inbred lines (RILs) is an efficient way to identify QTL for complex agronomic traits. A set of 314 RILs derived from inbreds Ye478 and Qi319 were generated and subjected to GBS. A total of 137,699,000 reads with an average of 357,376 reads per individual RIL were generated, which is equivalent to approximately 0.07-fold coverage of the maize B73 RefGen_V3 genome for each individual RIL. A high-density genetic map was constructed using 4183 bin markers (100-Kb intervals with no recombination events). The total genetic distance covered by the linkage map was 1545.65 cM and the average distance between adjacent markers was 0.37 cM with a physical distance of about 0.51 Mb. Our results demonstrated a relatively high degree of collinearity between the genetic map and the B73 reference genome. The quality and accuracy of the bin map for QTL detection was verified by the mapping of a known gene, pericarp color 1 (P1), which controls the color of the cob, with a high LOD value of 80.78 on chromosome 1. Using this high-density bin map, 35 QTL affecting plant architecture, including 14 for plant height, 14 for ear height, and seven for internode number were detected
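
    The bin-marker construction can be sketched for a single RIL as follows. The positions, genotype calls and majority rule below are invented stand-ins; the study's pipeline works genome-wide across all 314 RILs and handles missing and heterozygous calls.

        # Collapse SNP calls into 100-kb bins, then merge adjacent bins with no
        # recombination event into "bin markers" like the 4183 used above.
        import numpy as np

        bin_size = 100_000
        positions = np.array([12_345, 98_765, 150_000, 180_500, 310_200])  # SNP bp positions
        genotypes = np.array([0, 0, 2, 2, 0])          # calls for one RIL (0/2 = parental alleles)

        bins = positions // bin_size
        bin_ids = np.unique(bins)
        bin_geno = np.array([np.round(genotypes[bins == b].mean()) for b in bin_ids])

        # Runs of identical consecutive bin genotypes carry no recombination breakpoint
        breakpoints = np.flatnonzero(np.diff(bin_geno)) + 1
        bin_markers = np.split(bin_geno, breakpoints)
        print(f"{len(bin_markers)} bin markers for this line")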

  12. A Coarse-Grained Reconfigurable Architecture with Compilation for High Performance

    Directory of Open Access Journals (Sweden)

    Lu Wan

    2012-01-01

    Full Text Available We propose a fast data relay (FDR) mechanism to enhance existing CGRAs (coarse-grained reconfigurable architectures). FDR can not only provide multicycle data transmission concurrently with computations but also convert resource-demanding inter-processing-element global data accesses into local data accesses to avoid communication congestion. We also propose the supporting compiler techniques that can efficiently utilize the FDR feature to achieve higher performance for a variety of applications. Our results on FDR-based CGRAs are compared with two other works in this field: ADRES and RCP. Experimental results for various multimedia applications show that FDR combined with the new compiler delivers up to 29% and 21% higher performance than ADRES and RCP, respectively.

  13. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  14. The ATLAS online High Level Trigger framework: Experience reusing offline software components in the ATLAS trigger

    International Nuclear Information System (INIS)

    Wiedenmann, Werner

    2010-01-01

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework, based on the GAUDI and ATLAS ATHENA frameworks, forms the interface layer which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments, and especially on large multi-node trigger farms, has been gained in several commissioning periods using preloaded Monte Carlo events, in data-taking periods with cosmic events, and in a short period with proton beams from the LHC. The contribution discusses the architectural aspects of the HLT framework, its performance and its software environment within the ATLAS computing, trigger and data flow projects. Emphasis is also put on the architectural implications for the software of the use of multi-core processors in the computing farms and the experience gained with multi-threading and multi-process technologies.

  15. The Hi-Ring architecture for datacentre networks

    DEFF Research Database (Denmark)

    Galili, Michael; Kamchevska, Valerija; Ding, Yunhong

    2016-01-01

    This paper summarizes recent work on a hierarchical ring-based network architecture (Hi-Ring) for datacentre and short-range applications. The architecture allows leveraging benefits of optical switching technologies while maintaining a high level of connection granularity. We discuss results...

  16. SecureCore Software Architecture: Trusted Path Application (TPA) Requirements

    National Research Council Canada - National Science Library

    Clark, Paul C; Irvine, Cynthia E; Levin, Timothy E; Nguyen, Thuy D; Vidas, Timothy M

    2007-01-01

    .... A high-level architecture is described to provide such features. In addition, a usage scenario is described for a potential use of the architecture, with emphasis on the trusted path, a non-spoofable user interface to the trusted components of the system. Detailed requirements for the trusted path are provided.

  17. Evaluation of radionuclide concentrations in high-level radioactive wastes

    International Nuclear Information System (INIS)

    Fehringer, D.J.

    1985-10-01

    This report describes a possible approach for development of a numerical definition of the term "high-level radioactive waste." Five wastes are identified which are recognized as being high-level wastes under current, non-numerical definitions. The constituents of these wastes are examined and the most hazardous component radionuclides are identified. This report suggests that other wastes with similar concentrations of these radionuclides could also be defined as high-level wastes. 15 refs., 9 figs., 4 tabs.

  18. Analysis of Cyberbullying Sensitivity Levels of High School Students and Their Perceived Social Support Levels

    Science.gov (United States)

    Akturk, Ahmet Oguz

    2015-01-01

    Purpose: The purpose of this paper is to determine the cyberbullying sensitivity levels of high school students and their perceived social supports levels, and analyze the variables that predict cyberbullying sensitivity. In addition, whether cyberbullying sensitivity levels and social support levels differed according to gender was also…

  19. Can You Hear Architecture

    DEFF Research Database (Denmark)

    Ryhl, Camilla

    2016-01-01

    Taking its point of departure in the understanding of architectural quality as based on multisensory architecture, the paper aims to discuss the current acoustic discourse in inclusive design and its implications for the integration of inclusive design in architectural discourse and practice as well...... as the understanding of user needs. The paper further points to the need to elaborate and nuance the discourse much more, in order to assure inclusion for the many users living with a hearing impairment or, for other reasons, with a high degree of auditory sensitivity. Using the authors’ own research on inclusive...... design and architectural quality for people with a hearing disability and a newly conducted qualitative evaluation research in Denmark, as well as architectural theories on multisensory aspects of architectural experiences, the paper uses examples of existing Nordic building cases to discuss the role

  20. Architecture & Environment

    Science.gov (United States)

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  1. A High-Level Symbolic Representation for Intelligent Agents Across Multiple Architectures

    Science.gov (United States)

    2004-07-01

    components of Soar that map to these concepts (instantiation support, selected operator).

  2. High Level Architecture Distributed Space System Simulation for Simulation Interoperability Standards Organization Simulation Smackdown

    Science.gov (United States)

    Li, Zuqun

    2011-01-01

    Modeling and Simulation plays a very important role in mission design. It not only reduces design cost, but also prepares astronauts for their mission tasks. The SISO Smackdown is a simulation event that facilitates modeling and simulation in academia. The scenario of this year's Smackdown was to simulate a lunar base supply mission. The mission objective was to transfer Earth supply cargo to a lunar base supply depot and retrieve He-3 to take back to Earth. Federates for this scenario include the environment federate, Earth-Moon transfer vehicle, lunar shuttle, lunar rover, supply depot, mobile ISRU plant, exploratory hopper, and communication satellite. These federates were built by teams from around the world, including teams from MIT, JSC, the University of Alabama in Huntsville, the University of Bordeaux from France, and the University of Genoa from Italy. This paper focuses on the lunar shuttle federate, which was programmed by the USRP intern team from NASA JSC. The shuttle was responsible for providing transportation between lunar orbit and the lunar surface. The lunar shuttle federate was built using the NASA standard simulation package called Trick, and it was extended with HLA functions using TrickHLA. HLA functions of the lunar shuttle federate include sending and receiving interactions, publishing and subscribing attributes, and packing and unpacking fixed record data. The dynamics model of the lunar shuttle had three degrees of freedom, and state propagation obeyed two-body dynamics. The descent trajectory of the lunar shuttle was designed by first defining a unique descending orbit in 2D space, and then defining a unique orbit in 3D space under the assumption of a non-rotating Moon. Finally, this assumption was removed to define the initial position of the lunar shuttle so that it would start descending one second after joining the execution. VPN software from SonicWall was used to connect federates with the RTI during testing and the Smackdown event. HLA software from Pitch Technology and MAK Technology was used to edit and extend the FOM and provide HLA services for federation execution. The SISO Smackdown event for 2011 was held in Boston, Massachusetts. The federation execution lasted for one hour, and the event was very successful in catching the attention of university students and faculty.
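
    The two-body, three-degree-of-freedom propagation described for the shuttle federate can be sketched as follows. This is illustrative Python with an assumed RK4 integrator and the standard lunar gravitational parameter; it is not the Trick/TrickHLA code the team used.

```python
# Sketch of 3-DOF two-body state propagation as described for the lunar
# shuttle federate. Illustrative Python with an assumed RK4 integrator
# and the standard lunar gravitational parameter; not Trick/TrickHLA code.
MU_MOON = 4.9048695e12   # m^3/s^2

def deriv(state):
    """Time derivative of (x, y, z, vx, vy, vz) under two-body gravity."""
    x, y, z, vx, vy, vz = state
    r = (x * x + y * y + z * z) ** 0.5
    a = -MU_MOON / r**3
    return (vx, vy, vz, a * x, a * y, a * z)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

r0 = 1.837e6                             # ~100 km circular lunar orbit radius
v0 = (MU_MOON / r0) ** 0.5               # circular orbital speed
state = (r0, 0.0, 0.0, 0.0, v0, 0.0)
for _ in range(60):                      # propagate 60 s at 1 s steps
    state = rk4_step(state, 1.0)
print(state)
```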

  3. Containerization of high level architecture-based simulations: A case study

    NARCIS (Netherlands)

    Berg, T. van den; Siegel, B.; Cramp, A.

    2017-01-01

    NATO and the nations use distributed simulation environments for various purposes, such as training, mission rehearsal, and decision support in acquisition processes. Consequently, modeling and simulation (M&S) has become a critical technology for the coalition and its nations. Achieving

  4. Condition-dependent transcriptome reveals high-level regulatory architecture in Bacillus subtilis

    DEFF Research Database (Denmark)

    Nicolas, Pierre; Mäder, Ulrike; Dervyn, Etienne

    2012-01-01

    Bacteria adapt to environmental stimuli by adjusting their transcriptomes in a complex manner, the full potential of which has yet to be established for any individual bacterial species. Here, we report the transcriptomes of Bacillus subtilis exposed to a wide range of environmental and nutrition...

  5. Condition-Dependent Transcriptome Reveals High-Level Regulatory Architecture in Bacillus subtilis

    NARCIS (Netherlands)

    Nicolas, Pierre; Maeder, Ulrike; Dervyn, Etienne; Rochat, Tatiana; Leduc, Aurelie; Pigeonneau, Nathalie; Bidnenko, Elena; Marchadier, Elodie; Hoebeke, Mark; Aymerich, Stephane; Becher, Doerte; Bisicchia, Paola; Botella, Eric; Delumeau, Olivier; Doherty, Geoff; Denham, Emma L.; Fogg, Mark J.; Fromion, Vincent; Goelzer, Anne; Hansen, Annette; Haertig, Elisabeth; Harwood, Colin R.; Homuth, Georg; Jarmer, Hanne; Jules, Matthieu; Klipp, Edda; Le Chat, Ludovic; Lecointe, Francois; Lewis, Peter; Liebermeister, Wolfram; March, Anika; Mars, Ruben A. T.; Nannapaneni, Priyanka; Noone, David; Pohl, Susanne; Rinn, Bernd; Ruegheimer, Frank; Sappa, Praveen K.; Samson, Franck; Schaffer, Marc; Schwikowski, Benno; Steil, Leif; Stuelke, Joerg; Wiegert, Thomas; Devine, Kevin M.; Wilkinson, Anthony J.; van Dijl, Jan Maarten; Hecker, Michael; Voelker, Uwe; Bessieres, Philippe; Noirot, Philippe

    2012-01-01

    Bacteria adapt to environmental stimuli by adjusting their transcriptomes in a complex manner, the full potential of which has yet to be established for any individual bacterial species. Here, we report the transcriptomes of Bacillus subtilis exposed to a wide range of environmental and nutritional

  6. The Notion of "High" and commitment to excellence in contemporary Russian architecture. History and project: looking into future

    Science.gov (United States)

    Volchok, Yuri

    2018-03-01

    The article covers the issue of high-rise building (skyscraper) construction in Russia as a dialogue of artistic image and intellectual idea. The study shows that the professional commitment to skyscraper construction brings to the foreground the comprehension of the magnitude of the notion of "contemporary" in terms of time. It is important from a methodological point of view to return to the initial meanings of the notions that add authentic meaning to the words "suprematism", "commitment", "excellence", "new", "high", and other determinants of creativity capable of going beyond the "flying geese" development pattern in architectural shaping. It is well known that V.G. Shukhov's patents of 1896 have been widely used in the contemporary morphology of shaping. The heritage of the Russian Avant-Garde of the 1910s-20s serves as a methodological inspiration (as is more and more evident from foreign masters' creative experience). This is why it is important to return, first of all, to the comprehension of the author's version of the notion "suprematism" ascending to Malevich, meaning commitment to excellence rather than an "emblem" of stylistic preferences. The article includes arguments for regarding the 2010s and, especially, the years 2015-17 as a period of critical change: Russian masters of architecture have entered, as equals, a stage of cooperative creative work with foreign architects erecting skyscrapers.

  7. Using DNase Hi-C techniques to map global and local three-dimensional genome architecture at high resolution.

    Science.gov (United States)

    Ma, Wenxiu; Ay, Ferhat; Lee, Choli; Gulsoy, Gunhan; Deng, Xinxian; Cook, Savannah; Hesson, Jennifer; Cavanaugh, Christopher; Ware, Carol B; Krumm, Anton; Shendure, Jay; Blau, C Anthony; Disteche, Christine M; Noble, William S; Duan, ZhiJun

    2018-06-01

    The folding and three-dimensional (3D) organization of chromatin in the nucleus critically impacts genome function. The past decade has witnessed rapid advances in genomic tools for delineating 3D genome architecture. Among them, chromosome conformation capture (3C)-based methods such as Hi-C are the most widely used techniques for mapping chromatin interactions. However, traditional Hi-C protocols rely on restriction enzymes (REs) to fragment chromatin and are therefore limited in resolution. We recently developed DNase Hi-C for mapping 3D genome organization, which uses DNase I for chromatin fragmentation. DNase Hi-C overcomes RE-related limitations associated with traditional Hi-C methods, leading to improved methodological resolution. Furthermore, combining this method with DNA capture technology provides a high-throughput approach (targeted DNase Hi-C) that allows for mapping fine-scale chromatin architecture at exceptionally high resolution. Hence, targeted DNase Hi-C will be valuable for delineating the physical landscapes of cis-regulatory networks that control gene expression and for characterizing phenotype-associated chromatin 3D signatures. Here, we provide a detailed description of method design and step-by-step working protocols for these two methods. Copyright © 2018 Elsevier Inc. All rights reserved.
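
    The notion of map resolution above comes down to the bin size used when aggregating mapped read pairs into a contact matrix. A minimal sketch of that binning step, with an assumed 10-kb resolution and toy read pairs (not part of the published protocol), might look like this:

```python
# Minimal sketch of binning mapped Hi-C read pairs into a fixed-resolution
# intra-chromosomal contact matrix; the 10-kb bin size and toy pairs are
# assumptions for illustration, not the published protocol's analysis.
from collections import defaultdict

RESOLUTION = 10_000   # bin size; DNase fragmentation permits finer bins than REs

def contact_matrix(read_pairs, resolution=RESOLUTION):
    """read_pairs: iterable of (pos1, pos2) coordinates on one chromosome."""
    counts = defaultdict(int)
    for p1, p2 in read_pairs:
        b1, b2 = p1 // resolution, p2 // resolution
        counts[(min(b1, b2), max(b1, b2))] += 1   # store upper triangle
    return counts

pairs = [(1_200, 18_400), (2_500, 19_900), (151_000, 154_000)]
print(dict(contact_matrix(pairs)))   # {(0, 1): 2, (15, 15): 1}
```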

  8. Ozone effects on radish (Raphanus sativus L. cv. Cherry Belle): foliar sensitivity as related to metabolite levels and cell architecture

    Energy Technology Data Exchange (ETDEWEB)

    Athanassious, R.

    1980-01-01

    The development of the first four leaves of radish (Raphanus sativus L. cv. Cherry Belle) was followed to determine the relationship of foliar sensitivity to ozone to selected soluble metabolites and leaf-cell arrangement. Although relatively high metabolite (protein, sugars, phenols) levels and compact cell arrangement may be advanced as factors contributing to the resistance of young leaves (L/sub 3,4/ of 21-day old plants), these same parameters do not explain the resistance of old leaves (L/sub 1,2/ of 30-day old plants). 16 references, 4 figures, 1 table.

  9. Architectures of electro-optical packet switched networks

    DEFF Research Database (Denmark)

    Berger, Michael Stubert

    2004-01-01

    and examines possible architectures for future high capacity networks with high capacity nodes. It is assumed that optics will play a key role in this scenario, and in this respect, the European IST research project DAVID aimed at proposing viable architectures for optical packet switching, exploiting the best...... from optics and electronics. An overview of the DAVID network architecture is given, focusing on the MAN and WAN architecture as well as the MPLS based network hierarchy. A statistical model of the optical slot generation process is presented and utilised to evaluate delay vs. efficiency. Furthermore...... architecture for a buffered crossbar switch is presented. The architecture uses two levels of backpressure (flow control) with different constraints on round trip time. No additional scheduling complexity is introduced, and for the actual example shown, a reduction in memory of 75% was obtained at the cost...

  10. Models for Evaluating and Improving Architecture Competence

    National Research Council Canada - National Science Library

    Bass, Len; Clements, Paul; Kazman, Rick; Klein, Mark

    2008-01-01

    ... producing high-quality architectures. This report lays out the basic concepts of software architecture competence and describes four models for explaining, measuring, and improving the architecture competence of an individual...

  11. Architecture-Level Exploration of Alternative Interconnection Schemes Targeting 3D FPGAs: A Software-Supported Methodology

    Directory of Open Access Journals (Sweden)

    Kostas Siozios

    2008-01-01

    Full Text Available In current reconfigurable architectures, the interconnection structures contribute an increasing share of the delay and power consumption. The demand for increased clock frequencies and logic density (smaller area footprint) makes the problem even more important. Three-dimensional (3D) architectures are able to alleviate this problem by accommodating a number of functional layers, each of which might be fabricated in a different technology. However, the benefits of such integration technology have not been sufficiently explored yet. In this paper, we propose a software-supported methodology for exploring and evaluating alternative interconnection schemes for 3D FPGAs. In order to support the proposed methodology, three new CAD tools were developed (part of the 3D MEANDER Design Framework). During our exploration, we study the impact of vertical interconnection between functional layers on a number of design parameters. More specifically, the average gains in operation frequency, power consumption, and wirelength are 35%, 32%, and 13%, respectively, compared to existing 2D FPGAs with identical logic resources. Also, we achieve an 8% higher utilization ratio for the vertical interconnections compared to existing approaches for designing 3D FPGAs, leading to cheaper and more reliable devices.

  12. High-performance 3D waveguide architecture for astronomical pupil-remapping interferometry.

    Science.gov (United States)

    Norris, Barnaby; Cvetojevic, Nick; Gross, Simon; Jovanovic, Nemanja; Stewart, Paul N; Charles, Ned; Lawrence, Jon S; Withford, Michael J; Tuthill, Peter

    2014-07-28

    The detection and characterization of extra-solar planets is a major theme driving modern astronomy. Direct imaging of exoplanets allows access to a parameter space complementary to other detection methods, and potentially the characterization of exoplanetary atmospheres and surfaces. However, achieving the required levels of performance with direct imaging from ground-based telescopes (subject to Earth's turbulent atmosphere) has been extremely challenging. Here we demonstrate a new generation of photonic pupil-remapping devices which build upon the Dragonfly instrument, a high-contrast waveguide-based interferometer. This new generation overcomes problems caused by interference from unguided light and low throughput. Closure phase measurement scatter of only ∼0.2° has been achieved, with waveguide throughputs of >70%. This translates to a maximum contrast-ratio sensitivity between star and planet at 1λ/D (1σ detection) of 5.3 × 10^(-4) (with a conventional adaptive-optics system) or 1.8 × 10^(-4) (with 'extreme-AO'), improving even further when random error is minimized by averaging over multiple exposures. This is an order of magnitude beyond conventional pupil-segmenting interferometry techniques (such as aperture masking), allowing a previously inaccessible part of the star-to-planet contrast-separation parameter space to be explored.
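
    The closure-phase quantity behind the quoted ∼0.2° scatter can be made concrete with a small sketch: summing the measured fringe phases around a triangle of baselines cancels the per-aperture atmospheric phase errors, leaving only the source's intrinsic phase signal. The numbers below are arbitrary illustrative values.

```python
# Sketch of the closure-phase computation behind the quoted ~0.2 degree
# scatter: summing fringe phases around a baseline triangle cancels the
# per-aperture atmospheric errors. All numbers are arbitrary toy values.
import cmath, math

def closure_phase(phi_12, phi_23, phi_31):
    """Closure phase in degrees from three baseline phases in radians."""
    total = phi_12 + phi_23 + phi_31
    return math.degrees(cmath.phase(cmath.exp(1j * total)))  # wrapped

e1, e2, e3 = 0.7, -1.1, 0.4              # per-aperture phase errors (rad)
src = (0.02, -0.05, 0.01)                # intrinsic source phases (rad)
phi_12 = src[0] + (e1 - e2)              # measured phases include errors
phi_23 = src[1] + (e2 - e3)
phi_31 = src[2] + (e3 - e1)
print(closure_phase(phi_12, phi_23, phi_31))  # ~ -1.15 deg: errors cancel
```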

  13. Discovery of high-level tasks in the operating room

    NARCIS (Netherlands)

    Bouarfa, L.; Jonker, P.P.; Dankelman, J.

    2010-01-01

    Recognizing and understanding surgical high-level tasks from sensor readings is important for surgical workflow analysis. Surgical high-level task recognition is also a challenging task in ubiquitous computing because of the inherent uncertainty of sensor data and the complexity of the operating

  14. Characteristics of solidified high-level waste products

    International Nuclear Information System (INIS)

    1979-01-01

    The object of the report is to contribute to the establishment of a data bank for future preparation of codes of practice and standards for the management of high-level wastes. The work currently in progress on measuring the properties of solidified high-level wastes is being studied

  15. Process for solidifying high-level nuclear waste

    Science.gov (United States)

    Ross, Wayne A.

    1978-01-01

    The addition of a small amount of reducing agent to a mixture of a high-level radioactive waste calcine and glass frit before the mixture is melted will produce a more homogeneous glass which is leach-resistant and suitable for long-term storage of high-level radioactive waste products.

  16. The Influence of Decreased Levels of High Density Lipoprotein ...

    African Journals Online (AJOL)

    Background: Changes in lipoprotein levels in sickle cell disease (SCD) patients are well-known, but the physiological ramifications of the low levels observed have not been entirely resolved. Aim: The aim of this study is to evaluate the impact of decreased levels of high density lipoprotein cholesterol (HDL-c) on ...

  17. An OFDM System Using Polyphase Filter and DFT Architecture for Very High Data Rate Applications

    Science.gov (United States)

    Kifle, Muli; Andro, Monty; Vanderaar, Mark J.

    2001-01-01

    This paper presents a conceptual architectural design of a four-channel Orthogonal Frequency Division Multiplexing (OFDM) system with an aggregate information throughput of 622 megabits per second (Mbps). Primary emphasis is placed on the generation and detection of the composite waveform using polyphase filter and Discrete Fourier Transform (DFT) approaches to digitally stack and bandlimit the individual carriers. The four-channel approach enables the implementation of a system that can be both power and bandwidth efficient, yet enough parallelism exists to meet higher data rate goals. It also enables a DC power efficient transmitter that is suitable for on-board satellite systems, and a moderately complex receiver that is suitable for low-cost ground terminals. The major advantage of the system as compared to a single channel system is lower complexity and DC power consumption. This is because the highest sample rate is half that of the single channel system and synchronization can occur at most, depending on the synchronization technique, a quarter of the rate of a single channel system. The major disadvantage is the increased peak-to-average power ratio over the single channel system. Simulation results in a form of bit-error-rate (BER) curves are presented in this paper.
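
    The polyphase-filter-plus-DFT channelization at the heart of the design can be sketched as follows: a prototype lowpass filter is split into M polyphase branches, each branch filters one decimated input phase, and a DFT across the branch outputs separates the stacked carriers. The four-channel count matches the paper, but the prototype filter and test tone are toy assumptions.

```python
# Sketch of a critically sampled polyphase-DFT channelizer: a prototype
# lowpass filter is split into M polyphase branches and a DFT across the
# branch outputs separates the stacked carriers. M = 4 matches the paper;
# the prototype filter and the test tone are toy assumptions.
import numpy as np

def channelize(x, h, M):
    """Split x into M frequency channels; len(h) must be a multiple of M."""
    L = len(h) // M
    n = len(x) // M
    poly_h = h.reshape(L, M)                  # branch m holds taps h[l*M + m]
    poly_x = x[:n * M].reshape(n, M)          # branch m holds samples x[k*M + m]
    branches = np.zeros((n, M), dtype=complex)
    for m in range(M):                        # per-branch FIR filtering
        branches[:, m] = np.convolve(poly_x[:, m], poly_h[:, m])[:n]
    return np.fft.fft(branches, axis=1)       # DFT across branches -> channels

M = 4
h = np.hanning(8 * M)
h /= h.sum()                                  # toy prototype lowpass filter
t = np.arange(1024)
x = np.exp(2j * np.pi * (1 / M) * t)          # tone centred on channel 1
y = channelize(x, h, M)
print(np.argmax(np.abs(y).mean(axis=0)))      # -> 1
```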

  18. Assembling high activity phosphotriesterase composites using hybrid nanoparticle peptide-DNA scaffolded architectures

    Science.gov (United States)

    Breger, Joyce C.; Buckhout-White, Susan; Walper, Scott A.; Oh, Eunkeu; Susumu, Kimihiro; Ancona, Mario G.; Medintz, Igor L.

    2017-06-01

    Nanoparticle (NP) display potentially offers a new way to both stabilize and, in many cases, enhance enzyme activity over that seen for the native protein in solution. However, the large, globular and sometimes multimeric nature of many enzymes limits their ability to attach directly to the surface of NPs, especially when the latter are colloidally stabilized with bulky PEGylated ligands. Engineering extended protein linkers into the enzymes to achieve direct attachment through the PEG surface often detrimentally alters the enzymes' catalytic ability. Here, we demonstrate an alternate, hybrid biomaterials-based approach to achieving directed enzyme assembly on PEGylated NPs. We self-assemble a unique architecture consisting of a central semiconductor quantum dot (QD) scaffold displaying controlled ratios of extended peptide-DNA linkers which penetrate through the PEG surface to directly couple enzymes to the QD surface. As a test case, we utilize phosphotriesterase (PTE), an enzyme of bio-defense interest due to its ability to hydrolyze organophosphate nerve agents. Moreover, this unique approach still allows PTE to maintain enhanced activity while also suggesting the ability of DNA to enhance enzyme activity in and of itself.

  19. Quantitative image analysis of vertebral body architecture - improved diagnosis in osteoporosis based on high-resolution computed tomography

    International Nuclear Information System (INIS)

    Mundinger, A.; Wiesmeier, B.; Dinkel, E.; Helwig, A.; Beck, A.; Schulte Moenting, J.

    1993-01-01

    71 women, 64 post-menopausal, were examined by single-energy quantitative computed tomography (SEQCT) and by high-resolution computed tomography (HRCT) scans through the middle of the lumbar vertebral bodies. Computer-assisted image analysis of the high-resolution images assessed trabecular morphometry of the vertebral spongiosa texture. Texture parameters differed in women with and without age-reduced bone density, and in the former group also in patients with and without vertebral fractures. Discriminating parameters were the total number, diameter and variance of trabecular and intertrabecular spaces as well as the trabecular surface (p < 0.05). A texture index based on these statistically selected morphometric parameters identified a subgroup of patients suffering from fractures due to abnormal spongiosal architecture but with a bone mineral content not indicative of increased fracture risk. The combination of osteodensitometry and trabecular morphometry improves the diagnosis of osteoporosis and may contribute to the prediction of individual fracture risk. (author)

  20. High-Throughput Phenotyping and QTL Mapping Reveals the Genetic Architecture of Maize Plant Growth

    Science.gov (United States)

    Huang, Chenglong; Wu, Di; Qiao, Feng; Li, Wenqiang; Duan, Lingfeng; Wang, Ke; Xiao, Yingjie; Chen, Guoxing; Liu, Qian; Yang, Wanneng

    2017-01-01

    With increasing demand for novel traits in crop breeding, the plant research community faces the challenge of quantitatively analyzing the structure and function of large numbers of plants. A clear goal of high-throughput phenotyping is to bridge the gap between genomics and phenomics. In this study, we quantified 106 traits from a maize (Zea mays) recombinant inbred line population (n = 167) across 16 developmental stages using the automatic phenotyping platform. Quantitative trait locus (QTL) mapping with a high-density genetic linkage map, including 2,496 recombinant bins, was used to uncover the genetic basis of these complex agronomic traits, and 988 QTLs have been identified for all investigated traits, including three QTL hotspots. Biomass accumulation and final yield were predicted using a combination of dissected traits in the early growth stage. These results reveal the dynamic genetic architecture of maize plant growth and enhance ideotype-based maize breeding and prediction. PMID:28153923

  1. The ATLAS High-Level Calorimeter Trigger in Run-2

    CERN Document Server

    Wiglesworth, Craig; The ATLAS collaboration

    2018-01-01

    The ATLAS Experiment uses a two-level triggering system to identify and record collision events containing a wide variety of physics signatures. It reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of 1 kHz, whilst maintaining high efficiency for interesting collision events. It is composed of an initial hardware-based level-1 trigger followed by a software-based high-level trigger. A central component of the high-level trigger is the calorimeter trigger. This is responsible for processing data from the electromagnetic and hadronic calorimeters in order to identify electrons, photons, taus, jets and missing transverse energy. In this talk I will present the performance of the high-level calorimeter trigger in Run-2, noting the improvements that have been made in response to the challenges of operating at high luminosity.

  2. Design requirements of communication architecture of SMART safety system

    International Nuclear Information System (INIS)

    Park, H. Y.; Kim, D. H.; Sin, Y. C.; Lee, J. Y.

    2001-01-01

    To develop the communication network architecture of the safety system of SMART, evaluation elements for reliability and performance are extracted from commercial networks and their required levels are classified by importance. Predictable determinacy, a static and fixed architecture, separation and isolation from other systems, high reliability, and verification and validation are introduced as the essential requirements of a safety system communication network. Based on the suggested requirements, optical cable, star topology, synchronous transmission, point-to-point physical links, connection-oriented logical links, and MAC (medium access control) with fixed allocation are selected as the design elements. The proposed architecture will be applied as the basic communication network architecture of the SMART safety system.

  3. Predefined three tier business intelligence architecture in healthcare enterprise.

    Science.gov (United States)

    Wang, Meimei

    2013-04-01

    Business Intelligence (BI) has attracted extensive concern and widespread use in gathering, processing and analyzing data and providing enterprise users with a methodology for making decisions. Departing from traditional BI architecture, this paper proposes a new BI architecture, a top-down scalable BI architecture with a defining mechanism for enterprise decision-making solutions, and aims at establishing a rapid, consistent, and scalable multiple-application, multiple-platform BI mechanism. The two opposite information flows in our BI architecture offer the merits of a high-level organizational perspective and full use of existing resources. We also introduce the avg-bed-waiting-time factor to evaluate hospital care capacity.

  4. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Science.gov (United States)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
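
    The two levels of parallelism described, dynamic task-based assignment of tracks to threads and a wide vectorizable inner loop, can be sketched in miniature. The attenuation update is the standard flat-source MOC segment formula; the cross sections, source and track data are toy assumptions, not the proxy applications' inputs.

```python
# Minimal sketch of the two parallelism levels: tracks are assigned to
# threads dynamically (task-based load balancing), and the inner loop
# over energy groups is one wide vectorized numpy operation per segment.
# Cross sections, source and track lengths are toy values.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

G = 64                                          # energy groups (vector width)
rng = np.random.default_rng(0)
sigma_t = rng.uniform(0.1, 2.0, G)              # total cross section per group
source = np.full(G, 1.0)                        # flat isotropic source term

def sweep_track(segments):
    """Attenuate angular flux along one characteristic track."""
    psi = np.zeros(G)                           # incoming angular flux
    tally = np.zeros(G)
    for s in segments:                          # flat-source MOC update
        delta = (psi - source / sigma_t) * (1.0 - np.exp(-sigma_t * s))
        tally += delta                          # segment's flux contribution
        psi -= delta
    return tally

tracks = [rng.uniform(0.01, 0.5, 40) for _ in range(1000)]
with ThreadPoolExecutor() as pool:              # dynamic track-to-thread mapping
    total = sum(pool.map(sweep_track, tracks))
print(total[:4])
```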

  5. Angio-Architectural Features of High-Grade Intracranial Dural Arteriovenous Fistulas: Correlation With Aggressive Clinical Presentation and Hemorrhagic Risk.

    Science.gov (United States)

    Della Pepa, Giuseppe Maria; Parente, Paolo; D'Argento, Francesco; Pedicelli, Alessandro; Sturiale, Carmelo Lucio; Sabatino, Giovanni; Albanese, Alessio; Puca, Alfredo; Fernandez, Eduardo; Olivi, Alessando; Marchese, Enrico

    2017-08-01

    High-grade dural arteriovenous fistulas (dAVFs) can present shunts with very different angio-architectural characteristics. Specific hemodynamic factors may affect clinical history and determine very different clinical courses. To evaluate the relationship between certain venous angio-architectural features in high-grade dAVFs and clinical presentation, specific indicators of moderate or severe venous hypertension were analyzed, such as altered configurations of the dural sinuses (by a single or a dual thrombosis) or overload of cortical vessels (restrictions of outflow, pseudophlebitic cortical vessels, and venous aneurysms). The institutional series was retrospectively reviewed (49 cases), and the pattern of venous drainage was analyzed in relation to clinical presentation (benign/aggressive/hemorrhage). Thirty-five of 49 cases displayed cortical reflux (high-grade dAVFs). This subgroup displayed a benign presentation in 31.42% of cases, an aggressive one in 31.42%, and hemorrhage in 37.14%. Our data confirm that within high-grade dAVFs, two distinct subpopulations exist according to severity of clinical presentation. Some of the indicators we examined correlated with aggressive nonhemorrhagic manifestations (outflow restriction and pseudophlebitic cortical vessels), while others correlated with hemorrhage (dual thrombosis and venous aneurysms). Current classifications appear insufficient to identify the wide range of conditions that ultimately determine the organization of the cortical venous drainage. Intermediate degrees of venous congestion correlate better with clinical risk than the simple definition of cortical reflux. The angiographic aspects of venous drainage presented in this study may prove useful for assessing dAVF hemodynamic characteristics and identifying conditions at higher clinical risk. Copyright © 2017 by the Congress of Neurological Surgeons

  6. The upgrade of the ATLAS High Level Trigger and Data Acquisition systems and their integration

    CERN Document Server

    Abreu, R; The ATLAS collaboration

    2014-01-01

    The Data Acquisition (DAQ) and High Level Trigger (HLT) systems that served the ATLAS experiment during LHC's first run are being upgraded in the first long LHC shutdown period, from 2013 to 2015. This contribution describes the elements that are vital for the new interaction between the two systems. The central architectural enhancement is the fusion of the once separate Level 2, Event Building (EB), and Event Filter steps. Through the factorization of previously disperse functionality and better exploitation of caching mechanisms, the inherent simplification carries with it an increase in performance. Flexibility to different running conditions is improved by an automatic balance of formerly separate tasks. Incremental EB is the principle of the new Data Collection, whereby the HLT farm avoids duplicate requests to the detector Read-Out System (ROS) by preserving and reusing previously obtained data. Moreover, requests are packed and fetched together to avoid redundant trips to the ROS. Anticipated EB is ac...
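
    The incremental event building idea, caching fragments already fetched from the ROS and packing outstanding requests into one trip, can be sketched as a small cache in front of a fetch call. All names here are illustrative, not the ATLAS data-flow API.

```python
# Sketch of incremental event building: cache detector fragments already
# fetched from the Read-Out System (ROS), and pack all outstanding
# fragment requests into a single trip. Names are illustrative only.
class ROSCache:
    def __init__(self, ros_fetch):
        self._fetch = ros_fetch               # callable: ids -> {id: fragment}
        self._cache = {}

    def get(self, rob_ids):
        missing = {r for r in rob_ids if r not in self._cache}
        if missing:                           # one packed request per miss set
            self._cache.update(self._fetch(missing))
        return {r: self._cache[r] for r in rob_ids}

trips = []
def fake_ros(ids):                            # stand-in for the real ROS
    trips.append(sorted(ids))
    return {i: f"fragment-{i}" for i in ids}

node = ROSCache(fake_ros)
node.get({1, 2, 3})                           # first selection step
node.get({2, 3, 4, 5})                        # later step reuses 2 and 3
print(trips)                                  # [[1, 2, 3], [4, 5]]
```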

  7. Geomorphology, facies architecture, and high-resolution, non-marine sequence stratigraphy in avulsion deposits, Cumberland Marshes, Saskatchewan

    Science.gov (United States)

    Farrell, K. M.

    2001-02-01

    This paper demonstrates field relationships between landforms, facies, and high-resolution sequences in avulsion deposits. It defines the building blocks of a prograding avulsion sequence from a high-resolution sequence stratigraphy perspective, proposes concepts in non-marine sequence stratigraphy and flood basin evolution, and defines the continental equivalent to a parasequence. The geomorphic features investigated include a distributary channel and its levee, the Stage I crevasse splay of Smith et al. (Sedimentology, vol. 36 (1989) 1), and the local backswamp. Levees and splays have been poorly studied in the past, and three-dimensional (3D) studies are rare. In this study, stratigraphy is defined from the finest scale upward and facies are mapped in 3D. Genetically related successions are identified by defining a hierarchy of bounding surfaces. The genesis, architecture, geometry, and connectivity of facies are explored in 3D. The approach used here reveals that avulsion deposits are comparable in process, landform, facies, bounding surfaces, and scale to interdistributary bayfill, i.e. delta lobe deposits. Even a simple Stage I splay is a complex landform, composed of several geomorphic components, several facies and many depositional events. As in bayfill, an alluvial ridge forms as the feeder crevasse and its levees advance basinward through their own distributary mouth bar deposits to form a Stage I splay. This produces a shoestring-shaped concentration of disconnected sandbodies that is flanked by wings of heterolithic strata that join beneath the terminal mouth bar. The proposed results challenge current paradigms. Defining a crevasse splay as a discrete sandbody potentially ignores 70% of the landform's volume. An individual sandbody is likely only a small part of a crevasse splay complex. The thickest sandbody is a terminal, channel-associated feature, not a sheet that thins in the direction of propagation. The three stage model of splay evolution

  8. Predictors of Placement in Lower Level versus Higher Level High School Mathematics

    Science.gov (United States)

    Archbald, Doug; Farley-Ripple, Elizabeth N.

    2012-01-01

    Educators and researchers have long been interested in determinants of access to honors level and college prep courses in high school. Factors influencing access to upper level mathematics courses are particularly important because of the hierarchical and sequential nature of this subject and because students who finish high school with only lower…

  9. Stable superconducting magnet. [high current levels below critical temperature

    Science.gov (United States)

    Boom, R. W. (Inventor)

    1967-01-01

    Operation of a superconducting magnet is considered. A method is described for: (1) obtaining a relatively high current in a superconducting magnet positioned in a bath of a gas refrigerant; (2) operating a superconducting magnet at a relatively high current level without training; and (3) operating a superconducting magnet containing a plurality of turns of a niobium-zirconium wire at a relatively high current level without training.

  10. Functional language and data flow architectures

    Science.gov (United States)

    Ercegovac, M. D.; Patel, D. R.; Lang, T.

    1983-01-01

    This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages, and sequencing models such as data-flow, demand-driven and reduction which are essential at the machine organization level. Several examples of highly concurrent machines are described.

  11. Experiences in messaging middle-ware for high-level control applications

    International Nuclear Information System (INIS)

    Wanga, N.; Shasharina, S.; Matykiewicz, J.; Rooparani Pundaleeka

    2012-01-01

    Existing high-level applications in accelerator control and modeling systems leverage many different languages, tools and frameworks that do not inter-operate with one another. As a result, the accelerator control community is moving toward the proven Service-Oriented Architecture (SOA) approach to address the inter-operability challenges among heterogeneous high-level application modules. Such an SOA approach enables developers to package various control subsystems and activities into 'services' with well-defined 'interfaces', making it possible to leverage heterogeneous high-level applications via flexible composition. Examples of such applications include presentation panel clients based on Control System Studio (CSS) and middle-layer applications such as model/data servers. This paper presents our experiences in developing a demonstrative high-level application environment using emerging messaging middle-ware standards. In particular, we utilize new features in EPICS v4 and other emerging standards such as Data Distribution Service (DDS) and Extensible Type Interface by the Object Management Group. We first briefly review examples we developed previously. We then present our current effort in integrating DDS into such an SOA environment for control systems. Specifically, we illustrate how we are integrating DDS into CSS and developing a Python DDS mapping. (authors)
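
    The topic-based publish/subscribe pattern that DDS supplies in such an SOA environment can be illustrated with a self-contained sketch; it is a conceptual stand-in, not the DDS API or the Python mapping mentioned in the abstract.

```python
# Self-contained sketch of topic-based publish/subscribe, the pattern a
# DDS layer provides between high-level applications; names like Bus and
# the "bpm/orbit" topic are illustrative, not a real DDS or EPICS API.
from collections import defaultdict

class Bus:
    """Toy in-process stand-in for a publish/subscribe middleware."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)       # register a data reader

    def publish(self, topic, sample):
        for cb in self._subs[topic]:             # deliver to every reader
            cb(sample)

bus = Bus()
bus.subscribe("bpm/orbit", lambda s: print("CSS panel got", s))
bus.subscribe("bpm/orbit", lambda s: print("model server got", s))
bus.publish("bpm/orbit", {"bpm": 17, "x_mm": 0.12})
```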

  12. High performance cellular level agent-based simulation with FLAME for the GPU.

    Science.gov (United States)

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
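
    The template-driven, message-passing agent update that makes FLAME-style models data-parallel can be sketched in plain Python: in each step, every agent first posts a message, then every agent reads the message board and applies one transition function. The toy flocking-like rule and all numbers are assumptions for illustration.

```python
# Minimal sketch of a FLAME-style synchronous agent step: agents first
# post messages to a board, then each applies one transition function
# that reads the board. The drift-toward-the-crowd rule and all numbers
# are toy assumptions for illustration.
import random

def output_location(agent, board):
    """Agent function 1: publish this agent's position as a message."""
    board.append((agent["id"], agent["x"]))

def move(agent, board):
    """Agent function 2: read all messages and drift toward the mean."""
    others = [x for i, x in board if i != agent["id"]]
    centre = sum(others) / len(others)
    agent["x"] += 0.1 * (centre - agent["x"])

random.seed(1)
agents = [{"id": i, "x": random.uniform(0.0, 10.0)} for i in range(100)]
for _ in range(50):                     # synchronous two-phase update
    board = []
    for a in agents:                    # phase 1: message output
        output_location(a, board)
    for a in agents:                    # phase 2: state transition
        move(a, board)
print(min(a["x"] for a in agents), max(a["x"] for a in agents))
```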

  13. Assemble: an interactive graphical tool to analyze and build RNA architectures at the 2D and 3D levels.

    Science.gov (United States)

    Jossinet, Fabrice; Ludwig, Thomas E; Westhof, Eric

    2010-08-15

    Assemble is an intuitive graphical interface to analyze, manipulate and build complex 3D RNA architectures. It provides several advanced and unique features within the framework of a semi-automated modeling process that can be performed by homology and ab initio with or without electron density maps. Those include the interactive editing of a secondary structure and a searchable, embedded library of annotated tertiary structures. Assemble helps users with performing recurrent and otherwise tedious tasks in structural RNA research. Assemble is released under an open-source license (MIT license) and is freely available at http://bioinformatics.org/assemble. It is implemented in the Java language and runs on MacOSX, Linux and Windows operating systems.

  14. User-Defined Data Distributions in High-Level Programming Languages

    Science.gov (United States)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
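
    The core idea, making the mapping from global index to owning locale a first-class, user-definable object rather than something implicit in message-passing code, can be sketched as follows. This is illustrative Python; Chapel expresses the same thing with distribution classes, and the block policy shown is one standard example, not a construct taken from the paper.

```python
# Sketch of a user-defined block distribution: the index-to-locale map is
# an explicit object the programmer can define and query. Illustrative
# Python; Chapel realizes this with distribution classes in the language.
class BlockDist:
    def __init__(self, n, locales):
        self.n = n
        self.block = -(-n // locales)     # ceil(n / locales) items per locale

    def owner(self, i):
        """Locale owning global index i."""
        return i // self.block

    def local_indices(self, loc):
        """Global indices stored on locale loc."""
        lo = loc * self.block
        return range(lo, min(lo + self.block, self.n))

d = BlockDist(n=10, locales=4)
print([d.owner(i) for i in range(10)])    # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
print(list(d.local_indices(2)))           # [6, 7, 8]
```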

  15. Knowledge and Architectural Practice

    DEFF Research Database (Denmark)

    Verbeke, Johan

    2017-01-01

    of the level of research methods and will explain that the research methods and processes in creative practice research are very similar to grounded theory which is an established research method in the social sciences. Finally, an argument will be made for a more explicit research attitude in architectural......This paper focuses on the specific knowledge residing in architectural practice. It is based on the research of 35 PhD fellows in the ADAPT-r (Architecture, Design and Art Practice Training-research) project. The ADAPT-r project innovates architectural research in combining expertise from academia...... and from practice in order to highlight and extract the specific kind of knowledge which resides and is developed in architectural practice (creative practice research). The paper will discuss three ongoing and completed PhD projects and focusses on the outcomes and their contribution to the field...

  16. MT-ADRES: multi-threading on coarse-grained reconfigurable architecture

    DEFF Research Database (Denmark)

    Wu, Kehuai; Kanstein, Andreas; Madsen, Jan

    2008-01-01

    The coarse-grained reconfigurable architecture ADRES (architecture for dynamically reconfigurable embedded systems) and its compiler offer high instruction-level parallelism (ILP) to applications by means of a sparsely interconnected array of functional units and register files. As high-ILP architectures achieve only low parallelism when executing partially sequential code segments, which is also known as Amdahl's law, this article proposes to extend ADRES to MT-ADRES (multi-threaded ADRES) to also exploit thread-level parallelism. On MT-ADRES architectures, the array can be partitioned...

  17. High-level waste immobilization program: an overview

    International Nuclear Information System (INIS)

    Bonner, W.R.

    1979-09-01

    The High-Level Waste Immobilization Program is providing technology to allow safe, affordable immobilization and disposal of nuclear waste. Waste forms and processes are being developed on a schedule consistent with national needs for immobilization of high-level wastes stored at Savannah River, Hanford, Idaho National Engineering Laboratory, and West Valley, New York. This technology is directly applicable to high-level wastes from potential reprocessing of spent nuclear fuel. The program is removing one more obstacle previously seen as a potential restriction on the use and further development of nuclear power, and is thus meeting a critical technological need within the national objective of energy independence

  18. National high-level waste systems analysis report

    Energy Technology Data Exchange (ETDEWEB)

    Kristofferson, K.; Oholleran, T.P.; Powell, R.H.

    1995-09-01

    This report documents the assessment of budgetary impacts, constraints, and repository availability on the storage and treatment of high-level waste and on both existing and pending negotiated milestones. The impacts of the availabilities of various treatment systems on schedule and throughput at four Department of Energy sites are compared to repository readiness in order to determine the prudent application of resources. The information modeled for each of these sites is integrated with a single national model. The report suggests a high-level-waste model that offers a national perspective on all high-level waste treatment and storage systems managed by the Department of Energy.

  19. National high-level waste systems analysis report

    International Nuclear Information System (INIS)

    Kristofferson, K.; Oholleran, T.P.; Powell, R.H.

    1995-09-01

    This report documents the assessment of budgetary impacts, constraints, and repository availability on the storage and treatment of high-level waste and on both existing and pending negotiated milestones. The impacts of the availabilities of various treatment systems on schedule and throughput at four Department of Energy sites are compared to repository readiness in order to determine the prudent application of resources. The information modeled for each of these sites is integrated with a single national model. The report suggests a high-level-waste model that offers a national perspective on all high-level waste treatment and storage systems managed by the Department of Energy

  20. Functional architecture and global properties of the Corynebacterium glutamicum regulatory network: Novel insights from a dataset with a high genomic coverage.

    Science.gov (United States)

    Freyre-González, Julio A; Tauch, Andreas

    2017-09-10

    Corynebacterium glutamicum is a Gram-positive, aerobic, rod-shaped soil bacterium able to grow on a diversity of carbon sources like sugars and organic acids. It is a biotechnologically relevant organism because of its highly efficient ability to biosynthesize amino acids, such as l-glutamic acid and l-lysine. Here, we reconstructed the most complete C. glutamicum regulatory network to date and comprehensively analyzed its global organizational properties, systems-level features and functional architecture. Our analyses show the tremendous power of Abasy Atlas to study the functional organization of regulatory networks. We created two models of the C. glutamicum regulatory network: all-evidences (containing both weakly and strongly supported interactions, genomic coverage=73%) and strongly-supported (only accounting for strongly supported evidence, genomic coverage=71%). Using state-of-the-art methodologies, we prove that power-law behaviors truly govern the connectivity and clustering coefficient distributions. We found a previously unreported circuit motif that we named the complex feed-forward motif. We highlighted the importance of feedback loops for the functional architecture, beyond whether they are statistically over-represented or not in the network. We show that the previously reported top-down approach is inadequate for inferring the hierarchy governing a regulatory network, because feedback bridges different hierarchical layers and the top-down approach disregards the presence of intermodular genes shaping the integration layer. Our findings altogether further support a diamond-shaped, three-layered hierarchy exhibiting some feedback between processing and coordination layers, which is shaped by four classes of systems-level elements: global regulators, locally autonomous modules, basal machinery and intermodular genes. Copyright © 2016 Elsevier B.V. All rights reserved.
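
    Two of the global properties analyzed, the degree and local clustering coefficient distributions, can be computed with a short sketch on an undirected view of a network. The toy edge list below uses real C. glutamicum regulator names but invented connections, purely for illustration.

```python
# Sketch of per-node degree and local clustering coefficient on an
# undirected view of a regulatory network. Gene names are real
# C. glutamicum regulators, but the edges are invented toy data.
from collections import defaultdict
from itertools import combinations

edges = [("glxR", "ramA"), ("glxR", "sugR"), ("ramA", "sugR"),
         ("ramA", "aceE"), ("sugR", "ptsG")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def clustering(node):
    """Fraction of the node's neighbour pairs that are themselves linked."""
    nbrs = adj[node]
    if len(nbrs) < 2:
        return 0.0
    linked = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return linked / (len(nbrs) * (len(nbrs) - 1) / 2)

for n in sorted(adj):
    print(n, "degree:", len(adj[n]), "clustering:", round(clustering(n), 2))
```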

  1. The performance of a new Geant4 Bertini intra-nuclear cascade model in high throughput computing (HTC) cluster architecture

    Energy Technology Data Exchange (ETDEWEB)

    Aatos, Heikkinen; Andi, Hektor; Veikko, Karimaki; Tomas, Linden [Helsinki Univ., Institute of Physics (Finland)

    2003-07-01

    We study the performance of a new Bertini intra-nuclear cascade model implemented in the general detector simulation tool-kit Geant4 with a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III openMosix cluster is used, with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)

  2. IAA-Ala Resistant3, an evolutionarily conserved target of miR167, mediates Arabidopsis root architecture changes during high osmotic stress

    KAUST Repository

    Kinoshita, Natsuko

    2012-09-01

    The functions of microRNAs and their target mRNAs in Arabidopsis thaliana development have been widely documented; however, roles of stress-responsive microRNAs and their targets are not as well understood. Using small RNA deep sequencing and ATH1 microarrays to profile mRNAs, we identified IAA-Ala Resistant3 (IAR3) as a new target of miR167a. As expected, IAR3 mRNA was cleaved at the miR167a complementary site and under high osmotic stress miR167a levels decreased, whereas IAR3 mRNA levels increased. IAR3 hydrolyzes an inactive form of auxin (indole-3-acetic acid [IAA]-alanine) and releases bioactive auxin (IAA), a central phytohormone for root development. In contrast with the wild type, iar3 mutants accumulated reduced IAA levels and did not display high osmotic stress-induced root architecture changes. Transgenic plants expressing a cleavage-resistant form of IAR3 mRNA accumulated high levels of IAR3 mRNAs and showed increased lateral root development compared with transgenic plants expressing wild-type IAR3. Expression of an inducible noncoding RNA to sequester miR167a by target mimicry led to an increase in IAR3 mRNA levels, further confirming the inverse relationship between the two partners. Sequence comparison revealed the miR167 target site on IAR3 mRNA is conserved in evolutionarily distant plant species. Finally, we showed that IAR3 is required for drought tolerance. © 2012 American Society of Plant Biologists. All rights reserved.

  3. IAA-Ala Resistant3, an evolutionarily conserved target of miR167, mediates Arabidopsis root architecture changes during high osmotic stress

    KAUST Repository

    Kinoshita, Natsuko; Wang, Huan; Kasahara, Hiroyuki; Liu, Jun; MacPherson, Cameron R.; Machida, Yasunori; Kamiya, Yuji; Hannah, Matthew A.; Chuaa, Nam Hai

    2012-01-01

    The functions of microRNAs and their target mRNAs in Arabidopsis thaliana development have been widely documented; however, roles of stress-responsive microRNAs and their targets are not as well understood. Using small RNA deep sequencing and ATH1 microarrays to profile mRNAs, we identified IAA-Ala Resistant3 (IAR3) as a new target of miR167a. As expected, IAR3 mRNA was cleaved at the miR167a complementary site and under high osmotic stress miR167a levels decreased, whereas IAR3 mRNA levels increased. IAR3 hydrolyzes an inactive form of auxin (indole-3-acetic acid [IAA]-alanine) and releases bioactive auxin (IAA), a central phytohormone for root development. In contrast with the wild type, iar3 mutants accumulated reduced IAA levels and did not display high osmotic stress-induced root architecture changes. Transgenic plants expressing a cleavage-resistant form of IAR3 mRNA accumulated high levels of IAR3 mRNAs and showed increased lateral root development compared with transgenic plants expressing wild-type IAR3. Expression of an inducible noncoding RNA to sequester miR167a by target mimicry led to an increase in IAR3 mRNA levels, further confirming the inverse relationship between the two partners. Sequence comparison revealed the miR167 target site on IAR3 mRNA is conserved in evolutionarily distant plant species. Finally, we showed that IAR3 is required for drought tolerance. © 2012 American Society of Plant Biologists. All rights reserved.

  4. A high-performance model for shallow-water simulations in distributed and heterogeneous architectures

    Science.gov (United States)

    Conde, Daniel; Canelas, Ricardo B.; Ferreira, Rui M. L.

    2017-04-01

    One of the most common challenges in hydrodynamic modelling is the trade-off one must make between highly resolved simulations and the time required for their computation. In the particular case of urban floods, modelers are often forced to simplify the complex geometries of the problem, or to implicitly include some of its hydrodynamic effects, due to the typically very large spatial scales involved and limited computational resources. At CEris - Instituto Superior Técnico, Universidade de Lisboa - the STAV-2D shallow-water model, particularly suited for strong transient flows in complex and dynamic geometries, has been under development in recent years (Canelas et al., 2013 & Conde et al., 2013). The model is based on an explicit, first-order 2DH finite-volume discretization scheme for unstructured triangular meshes, in which a flux-splitting technique is paired with a reviewed Roe-Riemann solver, yielding a model applicable to discontinuous flows over time-evolving geometries. STAV-2D features solid transport in both Eulerian and Lagrangian forms, the former aimed at describing the transport of fine natural sediments and the latter at large individual debris. The model has been validated with theoretical solutions and laboratory experiments (Canelas et al., 2013 & Conde et al., 2015). This work presents our most recent effort in STAV-2D: the re-design of the code in a modern Object-Oriented parallel framework for heterogeneous computations on CPUs and GPUs. The programming language of choice for this re-design was C++, due to its wide support of established and emerging parallel programming interfaces. The current implementation of STAV-2D provides two different levels of parallel granularity: inter-node and intra-node. Inter-node parallelism is achieved by distributing a simulation across a set of worker nodes, with communication between nodes being explicitly managed through MPI. At this level, the main difficulty is associated with the
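
    The explicit finite-volume update at the core of a shallow-water model like STAV-2D can be sketched in one dimension. This sketch uses a diffusive Lax-Friedrichs interface flux on a uniform grid for brevity; the actual model uses a reviewed Roe-Riemann solver on unstructured triangular meshes.

```python
# One-dimensional sketch of an explicit finite-volume shallow-water step
# with a Lax-Friedrichs interface flux; the real model uses a Roe-type
# solver on unstructured triangular meshes. Toy dam-break initial state.
import numpy as np

g, dx, dt = 9.81, 1.0, 0.01
h = np.where(np.arange(100) < 50, 2.0, 1.0)   # water depth
q = np.zeros(100)                              # discharge h*u

def flux(h, q):
    """Physical flux of the 1D shallow-water equations."""
    u = q / h
    return np.array([q, q * u + 0.5 * g * h * h])

for _ in range(200):
    U = np.array([h, q])
    F = flux(h, q)
    # Lax-Friedrichs numerical flux at each interior cell interface
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * (dx / dt) * (U[:, 1:] - U[:, :-1])
    h[1:-1] -= dt / dx * (Fi[0, 1:] - Fi[0, :-1])
    q[1:-1] -= dt / dx * (Fi[1, 1:] - Fi[1, :-1])
print(h.min(), h.max())
```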

  5. The ATLAS high level trigger region of interest builder

    International Nuclear Information System (INIS)

    Blair, R.; Dawson, J.; Drake, G.; Haberichter, W.; Schlereth, J.; Zhang, J.; Ermoline, Y.; Pope, B.; Aboline, M.; High Energy Physics; Michigan State Univ.

    2008-01-01

    This article describes the design, testing and production of the ATLAS Region of Interest Builder (RoIB). This device acts as an interface between the Level 1 trigger and the high level trigger (HLT) farm for the ATLAS LHC detector. It distributes all of the Level 1 data for a subset of events to a small number (16 or fewer) of individual commodity processors. These processors in turn provide this information to the HLT. This allows the HLT to use the Level 1 information to narrow data requests to areas of the detector where Level 1 has identified interesting objects.

  6. High performance image acquisition and processing architecture for fast plant system controllers based on FPGA and GPU

    International Nuclear Information System (INIS)

    Nieto, J.; Sanz, D.; Guillén, P.; Esquembri, S.; Arcas, G. de; Ruiz, M.; Vega, J.; Castro, R.

    2016-01-01

    Highlights: • To test an image acquisition and processing system for Camera Link devices based on an FPGA, compliant with ITER fast controllers. • To move data acquired from the set NI1483-NIPXIe7966R directly to a NVIDIA GPU using NVIDIA GPUDirect RDMA technology. • To obtain a methodology to include GPU processing in ITER Fast Plant Controllers, using EPICS integration through Nominal Device Support (NDS). - Abstract: The two dominant technologies in real-time image processing are the Field Programmable Gate Array (FPGA) and the Graphics Processing Unit (GPU), owing to their algorithm parallelization capabilities. However, not much work has been done to standardize how these technologies can be integrated in data acquisition systems where control and supervisory requirements are in place, such as ITER (International Thermonuclear Experimental Reactor). This work proposes an architecture, and a development methodology, for building image acquisition and processing systems based on FPGAs and GPUs that are compliant with ITER fast controller solutions. A use case based on a Camera Link device connected to an FPGA DAQ device (National Instruments FlexRIO technology) and an NVIDIA Tesla GPU series card has been developed and tested. The proposed architecture is designed to optimize system performance by minimizing data transfer operations and CPU intervention through the use of NVIDIA GPUDirect RDMA and DMA technologies. These allow data to move directly between the different hardware elements (FPGA DAQ-GPU-CPU), avoiding CPU intervention and therefore the use of intermediate CPU memory buffers. A special effort has been made to provide a development methodology that, while maintaining the highest possible abstraction from low-level implementation details, yields solutions that conform to CODAC Core System standards by providing EPICS and Nominal Device Support.
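
    The zero-copy principle at work here can be illustrated conceptually: ownership of a buffer is handed between pipeline stages instead of copying its contents. The C++ sketch below models that hand-off with threads and a queue; in the real system the same idea is realized in hardware via GPUDirect RDMA and DMA, and every name in this sketch is hypothetical.

```cpp
// Conceptual model of the FPGA -> GPU data path without intermediate CPU
// copies: buffers change owner (std::move) rather than being memcpy'd.
#include <condition_variable>
#include <cstdint>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame { std::vector<std::uint8_t> pixels; };  // stands in for a DMA buffer

class FrameQueue {
public:
    void push(std::unique_ptr<Frame> f) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(f)); }
        cv_.notify_one();
    }
    std::unique_ptr<Frame> pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        auto f = std::move(q_.front()); q_.pop(); return f;
    }
private:
    std::mutex m_; std::condition_variable cv_;
    std::queue<std::unique_ptr<Frame>> q_;
};

int main() {
    FrameQueue toGpu;
    // "FPGA" producer: acquires frames and hands each buffer over untouched.
    std::thread fpga([&] {
        for (int i = 0; i < 8; ++i)
            toGpu.push(std::make_unique<Frame>(Frame{std::vector<std::uint8_t>(1024)}));
    });
    // "GPU" consumer: processes each frame in place -- no staging copy.
    std::thread gpu([&] {
        for (int i = 0; i < 8; ++i) {
            auto f = toGpu.pop();
            for (auto& p : f->pixels) p = static_cast<std::uint8_t>(p + 1);
        }
    });
    fpga.join(); gpu.join();
}
```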

  7. High performance image acquisition and processing architecture for fast plant system controllers based on FPGA and GPU

    Energy Technology Data Exchange (ETDEWEB)

    Nieto, J., E-mail: jnieto@sec.upm.es [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid, Crta. Valencia Km-7, Madrid 28031 (Spain); Sanz, D.; Guillén, P.; Esquembri, S.; Arcas, G. de; Ruiz, M. [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid, Crta. Valencia Km-7, Madrid 28031 (Spain); Vega, J.; Castro, R. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain)

    2016-11-15

    Highlights: • To test an image acquisition and processing system for Camera Link devices based on an FPGA, compliant with ITER fast controllers. • To move data acquired from the set NI1483-NIPXIe7966R directly to a NVIDIA GPU using NVIDIA GPUDirect RDMA technology. • To obtain a methodology to include GPU processing in ITER Fast Plant Controllers, using EPICS integration through Nominal Device Support (NDS). - Abstract: The two dominant technologies in real-time image processing are the Field Programmable Gate Array (FPGA) and the Graphics Processing Unit (GPU), owing to their algorithm parallelization capabilities. However, not much work has been done to standardize how these technologies can be integrated in data acquisition systems where control and supervisory requirements are in place, such as ITER (International Thermonuclear Experimental Reactor). This work proposes an architecture, and a development methodology, for building image acquisition and processing systems based on FPGAs and GPUs that are compliant with ITER fast controller solutions. A use case based on a Camera Link device connected to an FPGA DAQ device (National Instruments FlexRIO technology) and an NVIDIA Tesla GPU series card has been developed and tested. The proposed architecture is designed to optimize system performance by minimizing data transfer operations and CPU intervention through the use of NVIDIA GPUDirect RDMA and DMA technologies. These allow data to move directly between the different hardware elements (FPGA DAQ-GPU-CPU), avoiding CPU intervention and therefore the use of intermediate CPU memory buffers. A special effort has been made to provide a development methodology that, while maintaining the highest possible abstraction from low-level implementation details, yields solutions that conform to CODAC Core System standards by providing EPICS and Nominal Device Support.

  8. Handling and storage of conditioned high-level wastes

    International Nuclear Information System (INIS)

    1983-01-01

    This report deals with certain aspects of the management of one of the most important wastes, i.e. the handling and storage of conditioned (immobilized and packaged) high-level waste from the reprocessing of spent nuclear fuel. Although much of the material presented here is based on information concerning high-level waste from reprocessing LWR fuel, the principles, as well as many of the details involved, are applicable to all fuel types. The report provides illustrative background material on the arisings and characteristics of high-level wastes and, qualitatively, their requirements for conditioning. The report introduces the principles important in conditioned high-level waste storage and describes the types of equipment and facilities, in use or under study, for handling and storage of such waste. Finally, it discusses the safety and economic aspects that are considered in the design and operation of handling and storage facilities.

  9. NOS CO-OPS Water Level Data, Verified, High Low

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset has verified (quality-controlled), daily, high low water level (tide) data from NOAA NOS Center for Operational Oceanographic Products and Services...

  10. Artificial Heads for High-Level Impulse Sound Measurement

    National Research Council Canada - National Science Library

    Buck, K

    1999-01-01

    If the Insertion Loss (IL) of hearing protectors has to be determined with very high impulse or continuous noise levels, the acoustic insulation of the Artificial Test Fixture has to exceed at least the Insertion Loss (IL...

  11. Technical career opportunities in high-level radioactive waste management

    International Nuclear Information System (INIS)

    1993-01-01

    Technical career opportunities in high-level radioactive waste management are briefly described in the areas of hydrology, geology, biological sciences, mathematics, engineering, heavy equipment operation, and skilled labor and crafts.

  12. Long-term high-level waste technology program

    International Nuclear Information System (INIS)

    1980-04-01

    The Department of Energy (DOE) is conducting a comprehensive program to isolate all US nuclear wastes from the human environment. The DOE Office of Nuclear Energy - Waste (NEW) has full responsibility for managing the high-level wastes resulting from defense activities and additional responsibility for providing the technology to manage existing commercial high-level wastes and any that may be generated in one of several alternative fuel cycles. The responsibilities of the three divisions of DOE-NEW are outlined. This strategy document presents the research and development plan of the Division of Waste Products for long-term immobilization of the high-level radioactive wastes resulting from chemical processing of nuclear reactor fuels and targets. These high-level wastes contain more than 99% of the residual radionuclides produced in the fuels and targets during reactor operations. They include essentially all the fission products and most of the actinides that were not recovered for use.

  13. Glasses used for the high level radioactive wastes storage

    International Nuclear Information System (INIS)

    Sombret, C.

    1983-06-01

    High-level radioactive waste generated by the reprocessing of spent fuel is an important concern in the conditioning of radioactive wastes. This paper reviews the state of knowledge about the glasses used for the treatment of these liquid wastes. (in French)

  14. Handling and storage of conditioned high-level wastes

    International Nuclear Information System (INIS)

    Heafield, W.

    1984-01-01

    This paper deals with certain aspects of the management of one of the most important radioactive wastes arising from the nuclear fuel cycle, i.e. the handling and storage of conditioned high-level wastes. The paper is based on an IAEA report of the same title published in 1983 in the Technical Reports Series. The paper provides illustrative background material on the characteristics of high-level wastes and, qualitatively, their requirements for conditioning. The principles important in the storage of high-level wastes are reviewed in conjunction with the radiological and socio-political considerations involved. Four fundamentally different storage concepts are described with reference to published information, and the safety aspects of particular storage concepts are discussed. Finally, overall conclusions are presented which confirm the availability of technology for constructing and operating conditioned high-level waste storage facilities for periods of at least several decades. (author)

  15. Development of melt compositions for sulphate bearing high level waste

    International Nuclear Information System (INIS)

    Jahagirdar, P.B.; Wattal, P.K.

    1997-09-01

    The report deals with the development and characterization of vitreous matrices for sulphate-bearing high-level waste. Studies were conducted in sodium borosilicate and lead borosilicate systems with the introduction of CaO, BaO, MgO etc. The lead borosilicate system was found to be compatible with sulphate-bearing high-level wastes. The detailed product evaluation carried out on selected formulations is also described. (author)

  16. Properties and characteristics of high-level waste glass

    International Nuclear Information System (INIS)

    Ross, W.A.

    1977-01-01

    This paper briefly reviews many of the characteristics and properties of high-level waste glasses. From this review, it can be noted that glass has many desirable properties for the solidification of high-level wastes. The most important of these include: (1) its low leach rate; (2) its ability to tolerate large changes in waste composition; (3) its tolerance of anticipated storage temperatures; and (4) its low surface area, even after thermal shock or impact.

  17. High-Level Waste System Process Interface Description

    International Nuclear Information System (INIS)

    D'Entremont, P.D.

    1999-01-01

    The High-Level Waste System is a set of six different processes interconnected by pipelines. These processes function as one large treatment plant that receives, stores, and treats high-level wastes from various generators at SRS and converts them into forms suitable for final disposal. The three major forms are borosilicate glass, which will eventually be disposed of in a Federal Repository; Saltstone, which will be buried on site; and treated water effluent, which is released to the environment.

  18. High level waste canister emplacement and retrieval concepts study

    International Nuclear Information System (INIS)

    1975-09-01

    Several concepts are described for the interim (20- to 30-year) storage of canisters containing high-level waste, cladding waste, and intermediate-level TRU wastes. The report includes requirements, ground rules, and assumptions for the entire storage pilot plant. The concepts are evaluated in general terms, and the most promising are selected for additional work. Recommendations for follow-on work are made.

  19. Operational experience with the ALICE High Level Trigger

    Science.gov (United States)

    Szostak, Artur

    2012-12-01

    The ALICE HLT is a dedicated real-time system for online event reconstruction and triggering. Its main goal is to reduce the raw data volume read from the detectors by an order of magnitude, to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When HLT is enabled, data is recorded only for events selected by HLT. The combination of both approaches allows for flexible data reduction strategies. Event reconstruction places a high computational load on HLT. Thus, a large dedicated computing cluster is required, comprising 248 machines, all interconnected with InfiniBand. Running a large system like HLT in production mode proves to be a challenge. During the 2010 pp and Pb-Pb data-taking period, many problems were experienced that led to a sub-optimal operational efficiency. Lessons were learned and certain crucial changes were made to the architecture and software in preparation for the 2011 Pb-Pb run, in which HLT had a vital role performing data compression for ALICE's largest detector, the TPC. An overview of the status of the HLT and experience from the 2010/2011 production runs are presented. Emphasis is given to the overall performance, showing an improved efficiency and stability in 2011 compared to 2010, attributed to the significant improvements made to the system. Further opportunities for improvement are identified and discussed.

  20. Operational experience with the ALICE High Level Trigger

    International Nuclear Information System (INIS)

    Szostak, Artur

    2012-01-01

    The ALICE HLT is a dedicated real-time system for online event reconstruction and triggering. Its main goal is to reduce the raw data volume read from the detectors by an order of magnitude, to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When HLT is enabled, data is recorded only for events selected by HLT. The combination of both approaches allows for flexible data reduction strategies. Event reconstruction places a high computational load on HLT. Thus, a large dedicated computing cluster is required, comprising 248 machines, all interconnected with InfiniBand. Running a large system like HLT in production mode proves to be a challenge. During the 2010 pp and Pb-Pb data-taking period, many problems were experienced that led to a sub-optimal operational efficiency. Lessons were learned and certain crucial changes were made to the architecture and software in preparation for the 2011 Pb-Pb run, in which HLT had a vital role performing data compression for ALICE's largest detector, the TPC. An overview of the status of the HLT and experience from the 2010/2011 production runs are presented. Emphasis is given to the overall performance, showing an improved efficiency and stability in 2011 compared to 2010, attributed to the significant improvements made to the system. Further opportunities for improvement are identified and discussed.

  1. Elevated level of polysaccharides in a high level UV-B tolerant cell ...

    African Journals Online (AJOL)

    Jane

    2011-04-26

    A cell line of Bupleurum scorzonerifolium Willd with high level ... mechanisms to repair UV-induced damages via repairing ... for treatment or prevention of solar radiation ... working as both UV-B absorbing compounds and ...

  2. High Productivity Programming of Dense Linear Algebra on Heterogeneous NUMA Architectures

    KAUST Repository

    Alomairy, Rabab M.

    2013-01-01

    compromising the overall performance. Our approach features separation of concerns by abstracting the complexity of the hardware from the end users so that high productivity can be achieved. The Cholesky factorization is used as a benchmark representative

  3. Architecture of high-rise buildings as a brand of the modern Kazakhstan

    Science.gov (United States)

    Abdrassilova, Gulnara; Kozbagarova, Nina; Tuyakayeva, Ainagul

    2018-03-01

    Using practical examples, the article reviews the urban-planning and space-planning features of the design and construction of high-rise buildings under the conditions of Kazakhstan, and identifies methods that provide structural stability against wind and seismic loads on the basis of innovative technical and technological solutions. The authors emphasize the image-making ("fashion") function of high-rise buildings in the new capital of Kazakhstan, the city of Astana.

  4. A High Speed Mobile Communication System implementing Bicasting Architecture on the IP Layer

    OpenAIRE

    Yamada, Kazuhiro

    2012-01-01

    Having a broadband connection on high-speed rail is something that business travelers want most, and an increasing number of passengers are requesting even higher access speeds. We propose the Media Convergence System as an ideal communication system for future high-speed mobile entities. The Media Convergence System recognizes plural wireless communication media between the ground network and each train; traffic is then load-balanced over the active media, which vary according to circumstanc...

  5. Architectural Contestation

    NARCIS (Netherlands)

    Merle, J.

    2012-01-01

    This dissertation addresses the reductive reading of Georges Bataille's work within the field of architectural criticism and theory, which tends to set aside the fundamental ‘broken’ totality of Bataille's oeuvre and to interpret it narrowly as a mere critique of architectural form,

  6. Architecture Sustainability

    NARCIS (Netherlands)

    Avgeriou, Paris; Stal, Michael; Hilliard, Rich

    2013-01-01

    Software architecture is the foundation of software system development, encompassing a system's architects' and stakeholders' strategic decisions. A special issue of IEEE Software is intended to raise awareness of architecture sustainability issues and increase interest and work in the area. The

  7. Memory architecture

    NARCIS (Netherlands)

    2012-01-01

    A memory architecture is presented. The memory architecture comprises a first memory and a second memory. The first memory has at least a bank with a first width addressable by a single address. The second memory has a plurality of banks of a second width, said banks being addressable by components

  8. High level of CA 125 due to large endometrioma.

    Science.gov (United States)

    Phupong, Vorapong; Chen, Orawan; Ultchaswadi, Pornthip

    2004-09-01

    CA 125 is a tumor-associated antigen. High levels are usually associated with ovarian malignancies, whereas smaller increases are associated with benign gynecologic conditions. The authors report a high level of CA 125 in a case of a large ovarian endometrioma. A 45-year-old nulliparous Thai woman presented with an increase in her abdominal girth over 7 months. A transabdominal ultrasonogram demonstrated a large ovarian cyst and multiple small uterine leiomyomas, and the serum CA 125 level was 1,006 U/ml. The preoperative diagnosis was ovarian cancer with leiomyoma uteri. Exploratory laparotomy was performed, revealing a large right ovarian endometrioma, a small left ovarian endometrioma, and multiple small leiomyomas. Total abdominal hysterectomy and bilateral salpingo-oophorectomy were performed, and histopathology confirmed the diagnosis of endometrioma and leiomyoma. The serum CA 125 level declined to non-detectable by the 4th week. She was well at discharge and throughout her 4-week follow-up period. Although a very high level of CA 125 is associated with a malignant process, it can also be found in benign conditions such as a large endometrioma. This case emphasizes the association of high levels of CA 125 with benign gynecologic conditions.

  9. Performance Analysis of Multiradio Transmitter with Polar or Cartesian Architectures Associated with High Efficiency Switched-Mode Power Amplifiers (invited paper)

    Directory of Open Access Journals (Sweden)

    F. Robert

    2010-12-01

    This paper deals with wireless multiradio transmitter architectures operating in the frequency band of 800 MHz – 6 GHz. As a consequence of the constant evolution of communication systems, mobile transmitters must be able to operate in different frequency bands and modes according to existing standards' specifications. The concept of a unique multiradio architecture is an evolution of the multistandard transceiver, which is characterized by a parallelization of circuits for each standard; the multiradio concept optimizes surface area and power consumption. Transmitter architectures using sampling techniques and baseband ΣΔ or PWM coding of signals before amplification appear to be good candidates for multiradio transmitters for several reasons: they allow the use of high-efficiency power amplifiers such as switched-mode PAs, and they are highly flexible and easy to integrate because of their digital nature. When overall transmitter efficiency is considered, however, many elements have to be taken into account: signal coding efficiency, PA efficiency, and the RF filter. This paper investigates the suitability of these architectures for a multiradio transmitter able to support existing wireless communication standards between 800 MHz and 6 GHz. It evaluates and compares the possible architectures for the WiMAX and LTE standards in terms of signal quality and transmitter power efficiency.
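
    As a rough illustration of the kind of baseband coding the abstract mentions, here is a minimal first-order ΣΔ modulator sketch; all names and parameters are ours, not the paper's. It converts a bounded input into a two-level ±1 stream whose local average tracks the input, which is what allows a switched-mode PA to amplify it. A real chain would use higher-order noise shaping and an RF stage.

```cpp
// First-order sigma-delta: u[n] = u[n-1] + x[n] - y[n-1], y[n] = sign(u[n]).
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<int> sigmaDelta(const std::vector<double>& x) {
    std::vector<int> bits;
    bits.reserve(x.size());
    double integrator = 0.0;
    for (double sample : x) {
        // Feedback of the previous two-level output (0 before the first bit).
        double prev = bits.empty() ? 0.0 : (bits.back() > 0 ? 1.0 : -1.0);
        integrator += sample - prev;
        bits.push_back(integrator >= 0.0 ? +1 : -1);  // 1-bit quantizer
    }
    return bits;
}

int main() {
    // Oversampled sine wave; the running mean of the bits tracks the input.
    const double kPi = 3.14159265358979323846;
    std::vector<double> x(256);
    for (std::size_t n = 0; n < x.size(); ++n)
        x[n] = 0.5 * std::sin(2.0 * kPi * n / 64.0);
    auto bits = sigmaDelta(x);
    std::printf("first bits: %d %d %d %d\n", bits[0], bits[1], bits[2], bits[3]);
}
```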

  10. Architectural Narratives

    DEFF Research Database (Denmark)

    Kiib, Hans

    2010-01-01

    In this essay, I focus on the combination of programs and the architecture of cultural projects that have emerged within the last few years. These projects are characterized as “hybrid cultural projects,” because they intend to combine experience with entertainment, play, and learning. This essay provides a functional framework for these concepts, but tries increasingly to endow the main idea of the cultural project with a spatially aesthetic expression - a shift towards “experience architecture.” A great number of these projects typically recycle and reinterpret narratives related to historical buildings and architectural heritage; another group tries to embed new performative technologies in expressive architectural representation. Finally, this essay provides a theoretical framework for the analysis of the political rationales of these projects and for the architectural representation that bridges the gap between

  11. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  12. Treatment of High-Level Waste Arising from Pyrochemical Processes

    International Nuclear Information System (INIS)

    Lizin, A.A.; Kormilitsyn, M.V.; Osipenko, A.G.; Tomilin, S.V.; Lavrinovich, Yu.G.

    2013-01-01

    JSC “SSC RIAR” has been performing research and development activities in support of the closed fuel cycle of fast reactors since the mid-1960s. The fuel cycle involves fabrication and reprocessing of spent nuclear fuel (SNF) using pyrochemical methods of reprocessing in molten alkali metal chlorides. At present, pyrochemical methods of SNF reprocessing in molten chlorides have reached such a level of development that their competitiveness can be compared with that of classic aqueous methods. Their comparative advantages lie in high safety, compactness, strong protection against proliferation of nuclear materials, and a reduction in the volume of high-level waste.

  13. High-level trigger system for the LHC ALICE experiment

    CERN Document Server

    Bramm, R; Lien, J A; Lindenstruth, V; Loizides, C; Röhrich, D; Skaali, B; Steinbeck, T M; Stock, Reinhard; Ullaland, K; Vestbø, A S; Wiebalck, A

    2003-01-01

    The central detectors of the ALICE experiment at the LHC will produce a data size of up to 75 MB/event at an event rate of up to approximately 200 Hz, resulting in a data rate of ~15 GB/s. Online processing of the data is necessary in order to select interesting (sub)events ("High Level Trigger"), or to compress data efficiently by modeling techniques. Processing this data requires a massive parallel computing system (the High Level Trigger System). The system will consist of a farm of clustered SMP nodes based on off-the-shelf PCs connected with a high-bandwidth, low-latency network.
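
    The quoted figures are mutually consistent: \(75\ \mathrm{MB/event} \times 200\ \mathrm{events/s} = 15\,000\ \mathrm{MB/s} = 15\ \mathrm{GB/s}\).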

  14. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    Energy Technology Data Exchange (ETDEWEB)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    2017-02-06

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Supercomputing, or SC). Created anew each year for the conference, SCinet brings to life a high-capacity network that supports the applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high-intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those vendors that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  15. Architectural switches in plant thylakoid membranes.

    Science.gov (United States)

    Kirchhoff, Helmut

    2013-10-01

    Recent progress in elucidating the structure of higher-plant photosynthetic membranes provides a wealth of information. It allows the generation of architectural models that reveal well-organized and complex arrangements, not only at the whole-membrane level but also at the supramolecular level. These arrangements are not static but highly responsive to the environment. Knowledge about the interdependency between the dynamic structural features of the photosynthetic machinery and the functionality of energy conversion is central to understanding the plasticity of photosynthesis in an ever-changing environment. This review summarizes the architectural switches that are realized in the thylakoid membranes of green plants.

  16. High-rise construction as a method for architectural development of megapolises

    Science.gov (United States)

    Kankhva, Vadim

    2018-03-01

    The article analyzes the current state of urban development in Moscow and offers insights into the pattern of large investment projects. The regulatory framework, as well as the state and forecast of the housing stock, is scrutinized. A number of problems related to the implementation of high-rise construction projects at all stages of the life cycle are highlighted, using the example of unique facilities that are under construction or have already been built. A substantiation of high-rise construction near transport hubs in megapolises is given. The main advantages of the Moscow renovation project, and the criticism raised against it, are also considered.

  17. Scalable optical packet switch architecture for low latency and high load computer communication networks

    NARCIS (Netherlands)

    Calabretta, N.; Di Lucente, S.; Nazarathy, Y.; Raz, O.; Dorren, H.J.S.

    2011-01-01

    High-performance computing and data centers require PetaFlop/s processing speed and Petabyte storage capacity, with thousands of low-latency short-link interconnections between compute nodes. Switch matrices that operate transparently in the optical domain are a potential way to efficiently

  18. Delivering stable high-quality video: an SDN architecture with DASH assisting network elements

    NARCIS (Netherlands)

    J.W.M. Kleinrouweler (Jan Willem); S. Cabrero Barros (Sergio); P.S. Cesar Garcia (Pablo Santiago)

    2016-01-01

    Dynamic adaptive streaming over HTTP (DASH) is a simple but effective technology for video streaming over the Internet. It provides adaptive streaming while being highly scalable on the side of the content providers. However, the mismatch between TCP and the adaptive bursty nature of
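
    For context on the client-side adaptation DASH provides, the following is a minimal throughput-based bitrate selector; a generic sketch with hypothetical names, not the paper's SDN-assisted mechanism, which additionally involves network elements.

```cpp
// Pick the highest advertised representation whose bitrate fits within a
// safety margin of the measured throughput; fall back to the lowest rung.
#include <cstdio>
#include <vector>

int selectBitrate(const std::vector<int>& ladder,  // bitrates from the manifest, bps
                  double throughputBps,
                  double safety = 0.8) {
    int best = ladder.front();  // lowest representation as fallback
    for (int rate : ladder)
        if (rate <= safety * throughputBps && rate > best) best = rate;
    return best;
}

int main() {
    std::vector<int> ladder = {500'000, 1'200'000, 2'500'000, 5'000'000};
    // 0.8 * 3 Mbps = 2.4 Mbps budget, so the 1.2 Mbps representation wins.
    std::printf("pick: %d bps\n", selectBitrate(ladder, 3'000'000.0));
}
```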

  19. Profiling high performance dense linear algebra algorithms on multicore architectures for power and energy efficiency

    KAUST Repository

    Ltaief, Hatem; Luszczek, Piotr R.; Dongarra, Jack

    2011-01-01

    This paper presents the power profiles of two high-performance dense linear algebra libraries, i.e., LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine

  20. Implementation of High-Order Multireference Coupled-Cluster Methods on Intel Many Integrated Core Architecture.

    Science.gov (United States)

    Aprà, E; Kowalski, K

    2016-03-08

    In this paper we discuss the implementation of the multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of parallel performance by task reordering, which has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects of efficient optimization and vectorization strategies.
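
    The task-reordering idea credited above can be sketched generically: sort independent work items by estimated cost, largest first, before handing them to a dynamic scheduler (the classic LPT heuristic), so the most expensive tasks cannot pile up at the end of the run. The sketch below is illustrative only, with a toy kernel standing in for the real MRCCSD(T) tiles; compile with -fopenmp.

```cpp
// Largest-first ordering plus dynamic scheduling for balanced makespan.
#include <algorithm>
#include <cstdio>
#include <vector>
#include <omp.h>

struct Task { int id; double cost; };   // cost: estimated work, e.g. tile size

double run(Task& t) {                   // stand-in for a (T) correction kernel
    double s = 0.0;
    for (long i = 0; i < static_cast<long>(t.cost * 1e6); ++i) s += 1e-9 * i;
    return s;
}

int main() {
    std::vector<Task> tasks;
    for (int i = 0; i < 64; ++i) tasks.push_back({i, (i % 8) + 1.0});

    // Reorder: most expensive tasks are dispatched first.
    std::sort(tasks.begin(), tasks.end(),
              [](const Task& a, const Task& b) { return a.cost > b.cost; });

    double total = 0.0;
    #pragma omp parallel for schedule(dynamic) reduction(+ : total)
    for (int i = 0; i < static_cast<int>(tasks.size()); ++i)
        total += run(tasks[i]);

    std::printf("checksum %.3f\n", total);
}
```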