WorldWideScience

Sample records for vlsi processing arrays

  1. Plasma processing for VLSI

    CERN Document Server

    Einspruch, Norman G

    1984-01-01

    VLSI Electronics: Microstructure Science, Volume 8: Plasma Processing for VLSI (Very Large Scale Integration) discusses the utilization of plasmas for general semiconductor processing. It also includes expositions on advanced deposition of materials for metallization, lithographic methods that use plasmas as exposure sources and for multiple resist patterning, and device structures made possible by anisotropic etching. This volume is divided into four sections. It begins with the history of plasma processing, a discussion of some of the early developments and trends for VLSI. The second section

  2. VLSI signal processing technology

    CERN Document Server

    Swartzlander, Earl

    1994-01-01

    This book is the first in a set of forthcoming books focussed on state-of-the-art development in the VLSI Signal Processing area. It is a response to the tremendous research activities taking place in that field. These activities have been driven by two factors: the dramatic increase in demand for high speed signal processing, especially in consumer electronics, and the evolving microelectronic technologies. The available technology has always been one of the main factors in determining algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on: • Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in...

  3. Design of two easily-testable VLSI array multipliers

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, J.; Shen, J.P.

    1983-01-01

    Array multipliers are well-suited to VLSI implementation because of the regularity in their iterative structure. However, most VLSI circuits are very difficult to test. This paper shows that, with appropriate cell design, array multipliers can be designed to be very easily testable. An array multiplier is called c-testable if all its adder cells can be exhaustively tested while requiring only a constant number of test patterns. The testability of two well-known array multiplier structures is studied. The conventional design of the carry-save array multiplier is shown not to be c-testable. However, a modified design, using a modified adder cell, is derived and shown to be c-testable, requiring only 16 test patterns. Similar results are obtained for the Baugh-Wooley two's complement array multiplier. A modified design of the Baugh-Wooley array multiplier is shown to be c-testable and requires 55 test patterns. The implementation of a practical c-testable 16×16 array multiplier is also presented. 10 references.
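
    The abstract does not give the modified cell design itself, so the sketch below only illustrates the idea of c-testability on a generic carry-save multiplier cell (a partial-product AND gate feeding a full adder). Such a cell has four binary inputs, so 16 patterns exercise it exhaustively, the same count quoted for the modified carry-save design; the cell structure and names here are illustrative assumptions, not the paper's circuit.

        # Illustrative sketch: a generic carry-save multiplier cell (assumed structure,
        # not the paper's modified cell) and an exhaustive test over its 2**4 = 16 inputs.
        from itertools import product

        def cs_cell(a, b, sum_in, carry_in):
            """Partial-product AND gate feeding a full adder."""
            pp = a & b
            total = pp + sum_in + carry_in
            return total & 1, total >> 1          # (sum_out, carry_out)

        def exhaustive_test(cell):
            patterns = list(product((0, 1), repeat=4))   # 16 test patterns
            for a, b, s, c in patterns:
                sum_out, carry_out = cell(a, b, s, c)
                assert (carry_out << 1) | sum_out == (a & b) + s + c
            return len(patterns)

        print("patterns applied:", exhaustive_test(cs_cell))   # -> 16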

  4. VLSI design

    CERN Document Server

    Basu, D K

    2014-01-01

    Very Large Scale Integrated Circuit (VLSI) design has moved from costly curiosity to an everyday necessity, especially with the proliferation of embedded computing devices in communications, entertainment and household gadgets. As a result, more and more knowledge of the various aspects of VLSI design technologies is becoming a necessity for engineering/technology students of various disciplines. With this goal in mind, the course material of this book has been designed to cover the fundamental aspects of VLSI design, such as: categorization of and comparison between the various technologies used for VLSI design; basic fabrication processes involved in VLSI design; design of MOS, CMOS and BiCMOS circuits used in VLSI; structured design of VLSI; introduction to VHDL for VLSI design; automated design for placement and routing of VLSI systems; and VLSI testing and testability. The various topics of the book have been discussed lucidly with analysis, when required, examples, figures and adequate analytical and the...

  5. VLSI design

    CERN Document Server

    Chandrasetty, Vikram Arkalgud

    2011-01-01

    This book provides insight into the practical design of VLSI circuits. It is aimed at novice VLSI designers and other enthusiasts who would like to understand VLSI design flows. Coverage includes key concepts in CMOS digital design, design of DSP and communication blocks on FPGAs, ASIC front end and physical design, and analog and mixed signal design. The approach is designed to focus on practical implementation of key elements of the VLSI design process, in order to make the topic accessible to novices. The design concepts are demonstrated using software from Mathworks, Xilinx, Mentor Graphic

  6. VLSI electronics microstructure science

    CERN Document Server

    1982-01-01

    VLSI Electronics: Microstructure Science, Volume 4 reviews trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development. This book discusses the silicon-on-insulator for VLSI and VHSIC, X-ray lithography, and transient response of electron transport in GaAs using the Monte Carlo method. The technology and manufacturing of high-density magnetic-bubble memories, metallic superlattices, challenge of education for VLSI, and impact of VLSI on medical signal processing are also elaborated. This text likewise covers the impact of VLSI t

  7. A novel configurable VLSI architecture design of window-based image processing method

    Science.gov (United States)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image processing architectures can realize only one specific algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume extra logic resources. To address these problems, this paper proposes a new VLSI architecture for window-based image processing operations that is configurable and accounts for the image boundary. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no new delay and reduces overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. Different scalar-function and reduction-function operations can be performed in a pipeline, supporting a variety of window-based image processing applications. The performance of the new structure is comparable to some reported structures and superior to others; in particular, compared with the systolic array processor CWP at the same frequency, this structure achieves a speed increase of approximately 12.9%. The proposed parallel VLSI architecture was implemented in SIMC 0.18-μm CMOS technology, and its maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively; furthermore, the processing time is independent of the particular window-based algorithm mapped to the structure.
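
    A software analogue may clarify the configurability the abstract describes: each output pixel is produced by applying a scalar function to the window and folding the results with a reduction function, so convolution-style filtering and non-linear filters share one datapath template. The function names and the edge-padding border policy below are illustrative assumptions, not the paper's design.

        # Hypothetical software analogue of a configurable window operator:
        # scalar_fn is applied element-wise over the window, reduce_fn folds the result.
        import numpy as np

        def window_op(image, kernel, scalar_fn, reduce_fn):
            kh, kw = kernel.shape
            padded = np.pad(image, ((kh // 2,), (kw // 2,)), mode="edge")  # border handling
            out = np.empty_like(image, dtype=float)
            for i in range(image.shape[0]):
                for j in range(image.shape[1]):
                    window = padded[i:i + kh, j:j + kw]
                    out[i, j] = reduce_fn(scalar_fn(window, kernel))
            return out

        img = np.arange(25, dtype=float).reshape(5, 5)
        k = np.ones((3, 3)) / 9.0
        blur = window_op(img, k, scalar_fn=np.multiply, reduce_fn=np.sum)       # averaging filter
        dilate = window_op(img, k, scalar_fn=lambda w, _: w, reduce_fn=np.max)  # max filter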

  8. VLSI design of an RSA encryption/decryption chip using systolic array based architecture

    Science.gov (United States)

    Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi

    2016-09-01

    This article presents the VLSI design of a configurable RSA public key cryptosystem supporting 512-bit, 1024-bit and 2048-bit keys, based on the Montgomery algorithm, achieving clock-cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify computational complexity, which, together with the systolic array concept for the circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely the input/output modules, registers module, arithmetic module and control module. We applied the systolic array concept to design the RSA encryption/decryption chip using the VHDL hardware description language and verified it using the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm2 (4.58 × 4.58 mm2 with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
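
    The two arithmetic building blocks named here, Montgomery modular multiplication and binary (square-and-multiply) exponentiation, can be sketched at word level in software; this is only an algorithmic illustration under the usual textbook formulation (R a power of two larger than the odd modulus), not the paper's systolic hardware, and the toy 61 × 53 modulus is an assumption for the self-check.

        # Word-level sketch of Montgomery multiplication and the binary method.
        def mont_params(n, bits):
            R = 1 << bits                        # R = 2^bits > n, n odd
            n_prime = -pow(n, -1, R) % R         # n' = -n^-1 mod R (Python 3.8+)
            return R, n_prime

        def mont_mul(a, b, n, R, n_prime):
            t = a * b
            m = ((t & (R - 1)) * n_prime) & (R - 1)       # m = (t mod R) * n' mod R
            u = (t + m * n) >> (R.bit_length() - 1)       # exact division by R
            return u - n if u >= n else u

        def mont_pow(x, e, n, bits):
            R, n_prime = mont_params(n, bits)
            x_bar = (x * R) % n                  # operand in the Montgomery domain
            acc = R % n                          # Montgomery form of 1
            for bit in bin(e)[2:]:               # binary method, MSB first
                acc = mont_mul(acc, acc, n, R, n_prime)
                if bit == "1":
                    acc = mont_mul(acc, x_bar, n, R, n_prime)
            return mont_mul(acc, 1, n, R, n_prime)   # back to the ordinary domain

        n, e, msg = 3233, 17, 65                 # toy modulus 61 * 53, not a real key size
        assert mont_pow(msg, e, n, n.bit_length()) == pow(msg, e, n)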

  9. A second generation 50 Mbps VLSI level zero processing system prototype

    Science.gov (United States)

    Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby

    1994-01-01

    Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 Megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-Specific Integrated Circuits (ASICs) and a Mass Storage System (MSS) based on the High-performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second generation LZPS prototype based upon VLSI technologies.

  10. Wavelength-encoded OCDMA system using opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

    We propose and experimentally demonstrate a 2.5 Gbit/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  11. Wavelength-encoded OCDMA system using opto-VLSI processors

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

    We propose and experimentally demonstrate a 2.5 Gbit/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  12. Implementation of a VLSI Level Zero Processing system utilizing the functional component approach

    Science.gov (United States)

    Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.

    1991-01-01

    A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.

  13. VLSI design

    CERN Document Server

    Einspruch, Norman G

    1986-01-01

    VLSI Electronics Microstructure Science, Volume 14: VLSI Design presents a comprehensive exposition and assessment of the developments and trends in VLSI (Very Large Scale Integration) electronics. This volume covers topics that range from microscopic aspects of materials behavior and device performance to the comprehension of VLSI in systems applications. Each article is prepared by a recognized authority. The subjects discussed in this book include VLSI processor design methodology; the RISC (Reduced Instruction Set Computer); the VLSI testing program; silicon compilers for VLSI; and special

  14. Improvement of CMOS VLSI rad tolerance by processing technics

    International Nuclear Information System (INIS)

    Guyomard, D.; Desoutter, I.

    1986-01-01

    This study concerns the development of integrated circuits for applications requiring only relatively low radiation tolerance levels, especially the civil space sector. Process modifications constitute our basic study; they have been carried into effect, and our work and main results are reported in this paper. Well-known 2.5 and 3 μm CMOS technologies are considered. A first set of modifications enables us to double the cumulative dose tolerance of a 4 Kbit SRAM while keeping the same kind of damage. We obtain memories which tolerate radiation doses as high as 16 krad(Si). Repeatability of the results, linked to the quality assurance of this specific circuit, is reported here. A second set of modifications concerns the processing of gate arrays. In particular, the choice of the silicon substrate type (epitaxial substrate) is under investigation. On the other hand, a complete study of a test vehicle allows us to accurately measure the rad tolerance of various components of the cell library.

  15. New domain for image analysis: VLSI circuits testing, with Romuald, specialized in parallel image processing

    Energy Technology Data Exchange (ETDEWEB)

    Rubat Du Merac, C; Jutier, P; Laurent, J; Courtois, B

    1983-07-01

    This paper describes some aspects of specifying, designing and evaluating a specialized machine, Romuald, for the capture, coding, and processing of video and scanning electron microscope (SEM) pictures. First the authors present the functional organization of the processing unit of Romuald and its hardware, giving details of its behaviour. Then they study the capture and display unit which, thanks to its flexibility, enables SEM image coding. Finally, they describe an application which is now being developed in their laboratory: testing VLSI circuits with new methods: SEM + voltage contrast and image processing. 15 references.

  16. Sensor array signal processing

    CERN Document Server

    Naidu, Prabhakar S

    2009-01-01

    Chapter One: An Overview of Wavefields 1.1 Types of Wavefields and the Governing Equations 1.2 Wavefield in open space 1.3 Wavefield in bounded space 1.4 Stochastic wavefield 1.5 Multipath propagation 1.6 Propagation through random medium 1.7 Exercises. Chapter Two: Sensor Array Systems 2.1 Uniform linear array (ULA) 2.2 Planar array 2.3 Distributed sensor array 2.4 Broadband sensor array 2.5 Source and sensor arrays 2.6 Multi-component sensor array 2.7 Exercises. Chapter Three: Frequency Wavenumber Processing 3.1 Digital filters in the w-k domain 3.2 Mapping of 1D into 2D filters 3.3 Multichannel Wiener filters 3.4 Wiener filters for ULA and UCA 3.5 Predictive noise cancellation 3.6 Exercises. Chapter Four: Source Localization: Frequency Wavenumber Spectrum 4.1 Frequency wavenumber spectrum 4.2 Beamformation 4.3 Capon's w-k spectrum 4.4 Maximum entropy w-k spectrum 4.5 Doppler-Azimuth Processing 4.6 Exercises. Chapter Five: Source Localization: Subspace Methods 5.1 Subspace methods (Narrowband) 5.2 Subspace methods (B...

  17. Prototype architecture for a VLSI level zero processing system. [Space Station Freedom

    Science.gov (United States)

    Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.

    1989-01-01

    The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Due to the new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, easy maintainability, and increased reliability. Though extensive control functions are implemented in hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with a combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream will be steered into different processing channels in accordance with their virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rates exceeding 150 Mbps will be available from commercial vendors due to advances in disk drive technology.

  18. Parallel VLSI Architecture

    Science.gov (United States)

    Truong, T. K.; Reed, I.; Yeh, C.; Shao, H.

    1985-01-01

    The Fermat number transform convolves two digital data sequences. In very-large-scale integration (VLSI) applications such as image and radar signal processing, X-ray reconstruction, and spectrum shaping, linear convolution of two digital data sequences of arbitrary lengths is accomplished using the Fermat number transform (FNT).
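
    A small software sketch of the underlying idea, not the VLSI architecture itself: over the Fermat prime F_4 = 2^16 + 1 the element 2 has multiplicative order 32, so a length-32 transform computes cyclic (and, with zero padding, linear) convolution exactly in integer arithmetic. The O(N^2) transform below is chosen for clarity; the length and example sequences are assumptions.

        # Cyclic convolution via a Fermat number transform over F_4 = 65537.
        P = (1 << 16) + 1          # Fermat prime F_4
        G, N = 2, 32               # 2 has multiplicative order 32 mod F_4

        def fnt(x, root):
            return [sum(v * pow(root, n * k, P) for n, v in enumerate(x)) % P
                    for k in range(N)]

        def fnt_cyclic_convolution(a, b):
            A, B = fnt(a, G), fnt(b, G)
            C = [(u * v) % P for u, v in zip(A, B)]
            inv_root, inv_n = pow(G, -1, P), pow(N, -1, P)   # modular inverses (Python 3.8+)
            return [(inv_n * c) % P for c in fnt(C, inv_root)]

        a = [1, 2, 3] + [0] * 29   # zero padding makes the cyclic result equal the linear one
        b = [4, 5, 6] + [0] * 29
        print(fnt_cyclic_convolution(a, b)[:5])   # -> [4, 13, 28, 27, 18]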

  19. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
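
    The hardware contribution is the systolic priority queue; as a purely software analogue, the stack algorithm itself can be sketched with an ordinary heap so that the role of the queue (always extending the best-metric path first) is visible. The branch metric below is a made-up stand-in, not the Fano metric of any particular code.

        # Stack-algorithm sketch: Python's heapq stands in for the systolic priority queue.
        import heapq

        def stack_decode(depth, branch_metric):
            """Repeatedly extend the best partial path until one reaches full depth."""
            heap = [(0.0, ())]                           # (negated metric, path bits)
            while heap:
                neg_metric, path = heapq.heappop(heap)   # best path sits at the queue head
                if len(path) == depth:
                    return path, -neg_metric
                for bit in (0, 1):                       # extend along both branches
                    new_path = path + (bit,)
                    heapq.heappush(heap, (neg_metric - branch_metric(new_path), new_path))

        # Toy metric that rewards matching a fixed target sequence:
        target = (1, 0, 1, 1)
        metric = lambda p: 1.0 if p[-1] == target[len(p) - 1] else -3.0
        print(stack_decode(len(target), metric))         # -> ((1, 0, 1, 1), 4.0)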

  20. UW VLSI chip tester

    Science.gov (United States)

    McKenzie, Neil

    1989-12-01

    We present a design for a low-cost, functional VLSI chip tester. It is based on the Apple Macintosh II personal computer. It tests chips that have up to 128 pins. All pin drivers of the tester are bidirectional; each pin is programmed independently as an input or an output. The tester can test both static and dynamic chips. Rudimentary speed testing is provided. Chips are tested by executing C programs written by the user. A software library is provided for program development. Tests run under both the Mac Operating System and A/UX. The design is implemented using Xilinx Logic Cell Arrays. Price/performance tradeoffs are discussed.

  1. VLSI implementations for image communications

    CERN Document Server

    Pirsch, P

    1993-01-01

    The past few years have seen a rapid growth in image processing and image communication technologies. New video services and multimedia applications are continuously being designed. Essential for all these applications are image and video compression techniques. The purpose of this book is to report on recent advances in VLSI architectures and their implementation for video signal processing applications with emphasis on video coding for bit rate reduction. Efficient VLSI implementation for video signal processing spans a broad range of disciplines involving algorithms, architectures, circuits

  2. VLSI in medicine

    CERN Document Server

    Einspruch, Norman G

    1989-01-01

    VLSI Electronics Microstructure Science, Volume 17: VLSI in Medicine deals with the more important applications of VLSI in medical devices and instruments. This volume comprises 11 chapters. It begins with an article about medical electronics. The following three chapters cover diagnostic imaging, focusing on such medical devices as magnetic resonance imaging, the neurometric analyzer, and ultrasound. Chapters 5, 6, and 7 present the impact of VLSI in cardiology. The electrocardiograph, implantable cardiac pacemaker, and the use of VLSI in Holter monitoring are detailed in these chapters. The

  3. VLSI electronics microstructure science

    CERN Document Server

    1981-01-01

    VLSI Electronics: Microstructure Science, Volume 3 evaluates trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development. This book discusses the impact of VLSI on computer architectures; VLSI design and design aid requirements; and design, fabrication, and performance of CCD imagers. The approaches, potential, and progress of ultra-high-speed GaAs VLSI; computer modeling of MOSFETs; and numerical physics of micron-length and submicron-length semiconductor devices are also elaborated. This text likewise covers the optical linewi

  4. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  5. The VLSI handbook

    CERN Document Server

    Chen, Wai-Kai

    2007-01-01

    Written by a stellar international panel of expert contributors, this handbook remains the most up-to-date, reliable, and comprehensive source for real answers to practical problems. In addition to updated information in most chapters, this edition features several heavily revised and completely rewritten chapters, new chapters on such topics as CMOS fabrication and high-speed circuit design, heavily revised sections on testing of digital systems and design languages, and two entirely new sections on low-power electronics and VLSI signal processing. An updated compendium of references and othe

  6. PLA realizations for VLSI state machines

    Science.gov (United States)

    Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.

    1990-01-01

    A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.

  7. VLSI structures for track finding

    International Nuclear Information System (INIS)

    Dell'Orso, M.

    1989-01-01

    We discuss the architecture of a device based on the concept of associative memory designed to solve the track finding problem, typical of high energy physics experiments, in a time span of a few microseconds even for very high multiplicity events. This "machine" is implemented as a large array of custom VLSI chips. All the chips are equal and each of them stores a number of "patterns". All the patterns in all the chips are compared in parallel to the data coming from the detector while the detector is being read out. (orig.)

  8. Fundamentals of spherical array processing

    CERN Document Server

    Rafaely, Boaz

    2015-01-01

    This book provides a comprehensive introduction to the theory and practice of spherical microphone arrays. It is written for graduate students, researchers and engineers who work with spherical microphone arrays in a wide range of applications.   The first two chapters provide the reader with the necessary mathematical and physical background, including an introduction to the spherical Fourier transform and the formulation of plane-wave sound fields in the spherical harmonic domain. The third chapter covers the theory of spatial sampling, employed when selecting the positions of microphones to sample sound pressure functions in space. Subsequent chapters present various spherical array configurations, including the popular rigid-sphere-based configuration. Beamforming (spatial filtering) in the spherical harmonics domain, including axis-symmetric beamforming, and the performance measures of directivity index and white noise gain are introduced, and a range of optimal beamformers for spherical arrays, includi...

  9. VLSI 'smart' I/O module development

    Science.gov (United States)

    Kirk, Dan

    The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.

  10. Integrating Scientific Array Processing into Standard SQL

    Science.gov (United States)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and in an extension also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  11. Lithography requirements in complex VLSI device fabrication

    International Nuclear Information System (INIS)

    Wilson, A.D.

    1985-01-01

    Fabrication of complex very large scale integration (VLSI) circuits requires continual advances in lithography to satisfy: decreasing minimum linewidths, larger chip sizes, tighter linewidth and overlay control, increasing topography to linewidth ratios, higher yield demands, increased throughput, harsher device processing, lower lithography cost, and a larger part number set with quick turn-around time. Where optical, electron beam, x-ray, and ion beam lithography can be applied to judiciously satisfy the complex VLSI circuit fabrication requirements is discussed and those areas that are in need of major further advances are addressed. Emphasis will be placed on advanced electron beam and storage ring x-ray lithography

  12. ALMA Array Operations Group process overview

    Science.gov (United States)

    Barrios, Emilio; Alarcon, Hector

    2016-07-01

    ALMA science operations activities in Chile are the responsibility of the Department of Science Operations, which consists of three groups: the Array Operations Group (AOG), the Program Management Group (PMG) and the Data Management Group (DMG). The AOG includes the Array Operators and has the mission of providing support for science observations, operating the array safely and efficiently. The poster describes the AOG process, management and operational tools.

  13. Lithography for VLSI

    CERN Document Server

    Einspruch, Norman G

    1987-01-01

    VLSI Electronics Microstructure Science, Volume 16: Lithography for VLSI treats special topics from each branch of lithography, and also contains general discussion of some lithographic methods. This volume contains 8 chapters that discuss the various aspects of lithography. Chapters 1 and 2 are devoted to optical lithography. Chapter 3 covers electron lithography in general, and Chapter 4 discusses electron resist exposure modeling. Chapter 5 presents the fundamentals of ion-beam lithography. Mask/wafer alignment for x-ray proximity printing and for optical lithography is tackled in Chapter 6.

  14. The test of VLSI circuits

    Science.gov (United States)

    Baviere, Ph.

    Tests which have proven effective for evaluating VLSI circuits for space applications are described. It is recommended that circuits be examined after each manufacturing step to gain fast feedback on inadequacies in the production system. Data from failure modes which occur during the operational lifetimes of circuits also permit redefinition of the manufacturing and quality control process to eliminate the defects identified. Other tests include determination of the operational envelope of the circuits, examination of the circuit response to controlled inputs, and the performance and functional speeds of ROM and RAM memories. Finally, it is desirable that all new circuits be designed with testing in mind.

  15. Principles of Adaptive Array Processing

    Science.gov (United States)

    2006-09-01

    ACE with and without tapering (homogeneous case). These analytical results are less suited to predict the detection performance of a real system ... Nickel: Adaptive Beamforming for Phased Array Radars. Proc. Int. Radar Symposium IRS'98 (Munich, Sept. 1998), DGON and VDE/ITG, pp. 897-906. (Reprint also ... strategies for airborne radar. Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, CA, 1998, IEEE Cat. Nr. 0-7803-5148-7/98, pp. 1327-1331.)

  16. Adaptive motion compensation in sonar array processing

    NARCIS (Netherlands)

    Groen, J.

    2006-01-01

    In recent years, sonar performance has mainly improved via a significant increase in array aperture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine

  17. Technology computer aided design simulation for VLSI MOSFET

    CERN Document Server

    Sarkar, Chandan Kumar

    2013-01-01

    Responding to recent developments and a growing VLSI circuit manufacturing market, Technology Computer Aided Design: Simulation for VLSI MOSFET examines advanced MOSFET processes and devices through TCAD numerical simulations. The book provides a balanced summary of TCAD and MOSFET basic concepts, equations, physics, and new technologies related to TCAD and MOSFET. A firm grasp of these concepts allows for the design of better models, thus streamlining the design process, saving time and money. This book places emphasis on the importance of modeling and simulations of VLSI MOS transistors and

  18. Macrocell Builder: IP-Block-Based Design Environment for High-Throughput VLSI Dedicated Digital Signal Processing Systems

    Directory of Open Access Journals (Sweden)

    Urard Pascal

    2006-01-01

    We propose an efficient IP-block-based design environment for high-throughput VLSI systems. The flow generates a SystemC register-transfer-level (RTL) architecture, starting from a Matlab functional model described as a netlist of functional IP. The refinement model automatically inserts control structures to manage delays induced by the use of RTL IPs. It also inserts a control structure to coordinate the execution of parallel clocked IP. The delays may be managed by registers or by counters included in the control structure. The flow has been used successfully in three real-world DSP systems. The experiments show that the approach can produce efficient RTL architectures and saves a large amount of design time.

  19. Opto-VLSI-based reconfigurable free-space optical interconnects architecture

    DEFF Research Database (Denmark)

    Aljada, Muhsen; Alameh, Kamal; Chung, Il-Sug

    2007-01-01

    is the Opto-VLSI processor which can be driven by digital phase steering and multicasting holograms that reconfigure the optical interconnects between the input and output ports. The optical interconnects architecture is experimentally demonstrated at 2.5 Gbps using a high-speed 1×3 VCSEL array and 1×3 photoreceiver array in conjunction with two 1×4096 pixel Opto-VLSI processors. The minimisation of the crosstalk between the output ports is achieved by appropriately aligning the VCSEL and PD elements with respect to the Opto-VLSI processors and driving the latter with optimal steering phase holograms.

  20. ORGANIZATION OF GRAPHIC INFORMATION FOR VIEWING THE MULTILAYER VLSI TOPOLOGY

    Directory of Open Access Journals (Sweden)

    V. I. Romanov

    2016-01-01

    One possible way to reorganize the graphical information describing the set of topology layers of a modern VLSI circuit is considered. The method is aimed at use under the constraint of a bounded video card memory size. An additional effect, providing high performance in forming the multi-image layout of a multilayer topology of a modern VLSI circuit, is achieved by preloading the required textures by means of an auxiliary background process.

  1. Positron emission tomographic images and expectation maximization: A VLSI architecture for multiple iterations per second

    International Nuclear Information System (INIS)

    Jones, W.F.; Byars, L.G.; Casey, M.E.

    1988-01-01

    A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for positron emission tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high resolution (256 × 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform forward and back projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. Projected cost of the system is estimated to be small in comparison to the cost of current PET scanners
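
    For reference, the per-iteration EM (MLEM) update that such hardware parallelizes can be written compactly; the sketch below uses a small random system matrix as a stand-in for the real scanner geometry, so the sizes and iteration count are assumptions, not values from the paper.

        # MLEM update: forward-project, compare with measured counts, back-project the
        # ratio, and apply a multiplicative correction.
        import numpy as np

        def mlem(A, y, n_iters=200):
            """A: (n_bins, n_pixels) system matrix, y: measured counts per bin."""
            x = np.ones(A.shape[1])                      # start from a flat image
            sensitivity = A.sum(axis=0)                  # sum_i A_ij
            for _ in range(n_iters):
                forward = A @ x                          # forward projection
                ratio = y / np.maximum(forward, 1e-12)   # guard against division by zero
                x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)   # back projection + update
            return x

        rng = np.random.default_rng(0)
        A = rng.random((64, 16))
        x_true = rng.random(16)
        y = A @ x_true                                   # noiseless "measurements"
        print(np.round(mlem(A, y), 2))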

  2. Applications of VLSI circuits to medical imaging

    International Nuclear Information System (INIS)

    O'Donnell, M.

    1988-01-01

    In this paper the application of advanced VLSI circuits to medical imaging is explored. The relationship of both general purpose signal processing chips and custom devices to medical imaging is discussed using examples of fabricated chips. In addition, advanced CAD tools for silicon compilation are presented. Devices built with these tools represent a possible alternative to custom devices and general purpose signal processors for the next generation of medical imaging systems

  3. Fast-prototyping of VLSI

    International Nuclear Information System (INIS)

    Saucier, G.; Read, E.

    1987-01-01

    Fast-prototyping will be a reality in the very near future if both straightforward design methods and fast manufacturing facilities are available. This book focuses, first, on the motivation for fast-prototyping. Economic aspects and market considerations are analysed by European and Japanese companies. In the second chapter, new design methods are identified, mainly for full custom circuits. Of course, silicon compilers play a key role and the introduction of artificial intelligence techniques sheds a new light on the subject. At present, fast-prototyping on gate arrays or on standard cells is the most conventional technique and the third chapter updates the state-of-the art in this area. The fourth chapter concentrates specifically on the e-beam direct-writing for submicron IC technologies. In the fifth chapter, a strategic point in fast-prototyping, namely the test problem is addressed. The design for testability and the interface to the test equipment are mandatory to fulfill the test requirement for fast-prototyping. Finally, the last chapter deals with the subject of education when many people complain about the lack of use of fast-prototyping in higher education for VLSI

  4. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    Science.gov (United States)

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  5. Nano lasers in photonic VLSI

    NARCIS (Netherlands)

    Hill, M.T.; Oei, Y.S.; Smit, M.K.

    2007-01-01

    We examine the use of micro and nano lasers to form digital photonic VLSI building blocks. Problems such as isolation and cascading of building blocks are addressed, and the potential of future nano lasers explored.

  6. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from the use of residue Fermat number systems. A system of finite arithmetic over residue Fermat number systems enables calculation of the discrete Fourier transform (DFT) of a series of complex numbers with a reduced number of multiplications. Computer architectures based on this approach are suitable for the design of very-large-scale integrated (VLSI) circuits for computing DFT's. The general approach is not limited to DFT's; it is applicable to decoding of error-correcting codes and other transform calculations. The system is readily implemented in VLSI.

  7. Array processing for seismic surface waves

    Energy Technology Data Exchange (ETDEWEB)

    Marano, S.

    2013-07-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.

  8. Array processing for seismic surface waves

    International Nuclear Information System (INIS)

    Marano, S.

    2013-01-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries

  9. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.

  10. Array signal processing in the NASA Deep Space Network

    Science.gov (United States)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we describe the benefits of arraying and its past as well as expected future use. The signal processing aspects of the array system are described. Field measurements from actual spacecraft tracking are also presented.

  11. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA), and Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface in verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500 hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science ready data products. The pipeline is developed as part of the CASA software package by an

  12. Processes and Materials for Flexible PV Arrays

    National Research Council Canada - National Science Library

    Gierow, Paul

    2002-01-01

    .... A parallel incentive for the development of flexible PV arrays is the possibility of synergistic advantages for certain types of spacecraft, in particular the Solar Thermal Propulsion (STP) Vehicle...

  13. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of the processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipeline interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support real-time processing of 4:2:0 format 7680 × 4320 @ 78 fps video sequences.
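
    The fractional interpolation being accelerated is a separable FIR filtering step; a minimal sketch of horizontal half-sample interpolation is given below, assuming the 8-tap half-sample luma coefficients of the HEVC standard ([-1, 4, -11, 40, 40, -11, 4, -1] with a 6-bit shift). The paper's 8-pixel interpolation unit, pipelining and memory organization are not modelled here.

        # One row of half-sample luma interpolation with the assumed HEVC 8-tap filter.
        import numpy as np

        HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

        def interpolate_half_pel_row(row):
            padded = np.pad(row.astype(np.int32), (3, 4), mode="edge")   # 8-tap support
            out = np.empty(len(row), dtype=np.int32)
            for i in range(len(row)):
                out[i] = np.dot(padded[i:i + 8], HALF_PEL_TAPS)
            return np.clip((out + 32) >> 6, 0, 255)                      # round, normalise, clip

        row = np.array([10, 20, 30, 40, 200, 210, 220, 230])
        print(interpolate_half_pel_row(row))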

  14. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  15. Theory and applications of spherical microphone array processing

    CERN Document Server

    Jarrett, Daniel P; Naylor, Patrick A

    2017-01-01

    This book presents the signal processing algorithms that have been developed to process the signals acquired by a spherical microphone array. Spherical microphone arrays can be used to capture the sound field in three dimensions and have received significant interest from researchers and audio engineers. Algorithms for spherical array processing are different to corresponding algorithms already known in the literature of linear and planar arrays because the spherical geometry can be exploited to great beneficial effect. The authors aim to advance the field of spherical array processing by helping those new to the field to study it efficiently and from a single source, as well as by offering a way for more experienced researchers and engineers to consolidate their understanding, adding either or both of breadth and depth. The level of the presentation corresponds to graduate studies at MSc and PhD level. This book begins with a presentation of some of the essential mathematical and physical theory relevant to ...

  16. Generic nano-imprint process for fabrication of nanowire arrays

    NARCIS (Netherlands)

    Pierret, A.; Hocevar, M.; Diedenhofen, S.L.; Algra, R.E.; Vlieg, E.; Timmering, E.C.; Verschuuren, M.A.; Immink, W.G.G.; Verheijen, M.A.; Bakkers, E.P.A.M.

    2010-01-01

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2-inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of

  17. Convolving optically addressed VLSI liquid crystal SLM

    Science.gov (United States)

    Jared, David A.; Stirk, Charles W.

    1994-03-01

    We designed, fabricated, and tested an optically addressed spatial light modulator (SLM) that performs a 3 × 3 kernel image convolution using ferroelectric liquid crystal on VLSI technology. The chip contains a 16 × 16 array of current-mirror-based convolvers with a fixed kernel for finding edges. The pixels are located on 75 micron centers, and the modulators are 20 microns on a side. The array successfully enhanced edges in illumination patterns. We developed a high-level simulation tool (CON) for analyzing the performance of convolving SLM designs. CON has a graphical interface and simulates SLM functions using SPICE-like device models. The user specifies the pixel function along with the device parameters and nonuniformities. We discovered through analysis, simulation and experiment that the operation of current-mirror-based convolver pixels is degraded at low light levels by the variation of transistor threshold voltages inherent to CMOS chips. To function acceptably, the test SLM required the input image to have a minimum irradiance of 10 μW/cm2. The minimum required irradiance can be further reduced by adding a photodarlington near the photodetector or by increasing the size of the transistors used to calculate the convolution.
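
    The abstract says the fixed 3 × 3 kernel finds edges but does not give its coefficients, so the sketch below assumes a standard Laplacian kernel purely to illustrate the per-pixel operation the convolver array performs in parallel.

        # 3x3 edge-enhancing convolution with an assumed Laplacian kernel.
        import numpy as np

        EDGE_KERNEL = np.array([[ 0, -1,  0],
                                [-1,  4, -1],
                                [ 0, -1,  0]], dtype=float)

        def convolve3x3(image, kernel=EDGE_KERNEL):
            padded = np.pad(image, 1, mode="edge")
            out = np.zeros_like(image, dtype=float)
            for di in range(3):
                for dj in range(3):
                    out += kernel[di, dj] * padded[di:di + image.shape[0],
                                                   dj:dj + image.shape[1]]
            return out

        step = np.zeros((16, 16))
        step[:, 8:] = 1.0                           # vertical step-edge test pattern
        print(np.abs(convolve3x3(step)).max())      # response concentrates at the edge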

  18. A VLSI image processor via pseudo-mersenne transforms

    International Nuclear Information System (INIS)

    Sei, W.J.; Jagadeesh, J.M.

    1986-01-01

    The computational burden of image processing in medical fields, where a large amount of information must be processed quickly and accurately, has led to consideration of special-purpose image processor chip design for some time. The very large scale integration (VLSI) revolution has made it cost-effective and feasible to consider the design of special purpose chips for medical imaging fields. This paper describes a VLSI CMOS chip suitable for parallel implementation of image processing algorithms and cyclic convolutions by using the Pseudo-Mersenne Number Transform (PMNT). The main advantages of the PMNT over the Fast Fourier Transform (FFT) are: (1) no multiplications are required; (2) integer arithmetic is used. The design and development of this processor, which operates on 32-point convolution or a 5 × 5 window image, are described

  19. Synthesis algorithm of VLSI multipliers for ASIC

    Science.gov (United States)

    Chua, O. H.; Eldin, A. G.

    1993-01-01

    Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.

  20. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

    Ultrasonic imaging is the most widely used modality among modern imaging devices for medical diagnosis, and system performance has improved dramatically since the early 1990s due to rapid advances in DSP performance and VLSI technology that made it possible to employ more sophisticated algorithms. This paper describes 'mainstream' digital signal processing functions along with the associated implementation considerations in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of applying DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing
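
    The array signal processing referred to at the end is, at its core, receive focusing by delay-and-sum beamformation; the sketch below computes one focused sample with illustrative geometry and sampling values (element pitch, sound speed, sample rate) that are assumptions, not figures from the article.

        # Delay-and-sum receive focusing for one image sample.
        import numpy as np

        def delay_and_sum(rf, pitch, fs, c, depth):
            """rf: (n_elements, n_samples) channel data; returns one focused sample."""
            n_elem = rf.shape[0]
            x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch   # element positions
            rx_path = np.sqrt(depth ** 2 + x ** 2)               # element-to-focus distance
            delays = (depth + rx_path) / c                       # transmit + receive path
            idx = np.clip(np.round(delays * fs).astype(int), 0, rf.shape[1] - 1)
            return rf[np.arange(n_elem), idx].sum()              # coherent (focused) sum

        rng = np.random.default_rng(1)
        channels = rng.standard_normal((64, 4096))               # stand-in channel data
        print(delay_and_sum(channels, pitch=0.3e-3, fs=40e6, c=1540.0, depth=30e-3))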

  1. Surface and interface effects in VLSI

    CERN Document Server

    Einspruch, Norman G

    1985-01-01

    VLSI Electronics Microstructure Science, Volume 10: Surface and Interface Effects in VLSI provides the advances made in the science of semiconductor surfaces and interfaces as they relate to electronics. This volume aims to provide a better understanding and control of surface- and interface-related properties. The book begins with an introductory chapter on the intimate link between interfaces and devices. The book is then divided into two parts. The first part covers the chemical and geometric structures of prototypical VLSI interfaces. Subjects detailed include the technologically most import

  2. VLSI Design with Alliance Free CAD Tools: an Implementation Example

    Directory of Open Access Journals (Sweden)

    Chávez-Bracamontes Ramón

    2015-07-01

    Full Text Available This paper presents the methodology used for a digital integrated circuit design that implements the communication protocol known as Serial Peripheral Interface, using the Alliance CAD System. The aim of this paper is to show how VLSI design work can be done by graduate and undergraduate students with minimal resources and experience. The physical design was sent for fabrication using the CMOS AMI C5 process, which features a 0.5 micrometer transistor size, sponsored by the MOSIS Educational Program. Tests were made on a platform that transfers data from inertial sensor measurements to the designed SPI chip, which in turn sends the data back on a parallel bus to a common microcontroller. The results show the efficiency of the employed VLSI design methodology, as well as the feasibility of manufacturing ICs from school projects that have little or no funding.

  3. Embedded Processor Based Automatic Temperature Control of VLSI Chips

    Directory of Open Access Journals (Sweden)

    Narasimha Murthy Yayavaram

    2009-01-01

    Full Text Available This paper presents embedded-processor-based automatic temperature control of VLSI chips, using the LM35 temperature sensor and the ARM processor LPC2378. Due to their very high packing density, VLSI chips heat up quickly, and if they are not cooled properly their performance is severely affected. In the present work, a sensor kept in close proximity to the IC senses its temperature, and the speed of a fan placed near the IC is controlled by a PWM signal generated by the ARM processor. A buzzer is also provided with the hardware to indicate either failure of the fan or overheating of the IC. The entire process is achieved by developing a suitable embedded C program.
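
    The control law itself is a simple mapping from the sensed temperature to a PWM duty cycle, with an alarm above a limit. The host-side sketch below shows such a mapping for an LM35 (10 mV/°C); the temperature thresholds and duty-cycle ramp are assumptions, and the actual LPC2378 firmware would additionally program the ADC and PWM peripherals.

    #include <algorithm>
    #include <cstdio>

    // Map an LM35 reading (10 mV per degree Celsius) to a fan PWM duty cycle,
    // with an alarm flag above a limit.  Thresholds are illustrative only.
    struct FanCommand { unsigned dutyPercent; bool alarm; };

    FanCommand controlFan(double adcVolts) {
        double tempC = adcVolts * 100.0;          // LM35: 10 mV/°C
        FanCommand cmd{0, false};
        if (tempC <= 35.0)       cmd.dutyPercent = 20;   // idle cooling
        else if (tempC >= 70.0) { cmd.dutyPercent = 100; cmd.alarm = true; }  // buzzer
        else                     // linear ramp between the two corner temperatures
            cmd.dutyPercent = static_cast<unsigned>(20 + (tempC - 35.0) * 80.0 / 35.0);
        cmd.dutyPercent = std::min(cmd.dutyPercent, 100u);
        return cmd;
    }

    int main() {
        for (double v : {0.30, 0.50, 0.72}) {     // 30 °C, 50 °C and 72 °C readings
            FanCommand c = controlFan(v);
            std::printf("V=%.2f -> duty=%u%% alarm=%d\n", v, c.dutyPercent, c.alarm);
        }
    }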

  4. Application of optical processing to adaptive phased array radar

    Science.gov (United States)

    Carroll, C. W.; Vijaya Kumar, B. V. K.

    1988-01-01

    The results of the investigation of the applicability of optical processing to Adaptive Phased Array Radar (APAR) data processing will be summarized. Subjects that are covered include: (1) new iterative Fourier transform based technique to determine the array antenna weight vector such that the resulting antenna pattern has nulls at desired locations; (2) obtaining the solution of the optimal Wiener weight vector by both iterative and direct methods on two laboratory Optical Linear Algebra Processing (OLAP) systems; and (3) an investigation of the effects of errors present in OLAP systems on the solution vectors.

  5. Experimental investigation of the ribbon-array ablation process

    International Nuclear Information System (INIS)

    Li Zhenghong; Xu Rongkun; Chu Yanyun; Yang Jianlun; Xu Zeping; Ye Fan; Chen Faxin; Xue Feibiao; Ning Jiamin; Qin Yi; Meng Shijian; Hu Qingyuan; Si Fenni; Feng Jinghua; Zhang Faqiang; Chen Jinchuan; Li Linbo; Chen Dingyang; Ding Ning; Zhou Xiuwen

    2013-01-01

    Ablation processes of ribbon-array loads, as well as wire-array loads for comparison, were investigated on the Qiangguang-1 accelerator. The ultraviolet framing images indicate that the ribbon-array loads have stable current paths, which produce axially uniform ablated plasma. The end-on x-ray framing camera observed the azimuthally modulated distribution of the early ablated ribbon-array plasma and the shrinking of the x-ray radiation region. Magnetic probes measured the total and precursor currents of ribbon-array and wire-array loads, and there is no evident difference between the precursor currents of the two types of loads. The proportion of the precursor current to the total current is 15% to 20%, and the start time of the precursor current is about 25 ns later than that of the total current. The melting time of the load material is about 16 ns, when the inward drift velocity of the ablated plasma is taken to be 1.5 × 10⁷ cm/s.

  6. Removing Background Noise with Phased Array Signal Processing

    Science.gov (United States)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by OptiNav combined with cross spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
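
    Cross spectral matrix (CSM) subtraction removes the background contribution before beamforming: the CSM measured with background only is subtracted from the CSM measured with the source present, and the beamformer power is evaluated on the difference. The sketch below does this for a conventional (not Functional) beamformer with tiny synthetic matrices; all values are assumptions for illustration.

    #include <complex>
    #include <cstdio>
    #include <vector>

    using cd = std::complex<double>;

    // Beamformer output power p = w^H (C_total - C_background) w for one steering
    // vector w.  Matrices are stored row-major; sizes here are tiny and synthetic.
    double subtractedPower(const std::vector<cd>& Ctot,
                           const std::vector<cd>& Cbg,
                           const std::vector<cd>& w) {
        const size_t n = w.size();
        cd p(0.0, 0.0);
        for (size_t i = 0; i < n; ++i)
            for (size_t j = 0; j < n; ++j) {
                cd d = Ctot[i * n + j] - Cbg[i * n + j];   // CSM subtraction
                p += std::conj(w[i]) * d * w[j];
            }
        return p.real();   // Hermitian form; imaginary part is ~0 up to rounding
    }

    int main() {
        // Two microphones: background is uncorrelated noise, the total adds a
        // coherent source, so subtraction recovers the source-only CSM.
        std::vector<cd> Cbg  = { {2,0},{0,0},
                                 {0,0},{2,0} };
        std::vector<cd> Ctot = { {3,0},{1,0},
                                 {1,0},{3,0} };
        std::vector<cd> w    = { {0.7071,0}, {0.7071,0} };   // broadside steering
        std::printf("beam power after subtraction = %.3f\n",
                    subtractedPower(Ctot, Cbg, w));
    }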

  7. VLSI scaling methods and low power CMOS buffer circuit

    International Nuclear Information System (INIS)

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

    Device scaling is an important part of very large scale integration (VLSI) design and has driven the success of the VLSI industry, resulting in denser and faster integration of devices. As the technology node moves into the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance as devices are scaled. In this paper, different scaling methods are studied first. These scaling methods are used to identify their effects on the power dissipation and propagation delay of a CMOS buffer circuit. To mitigate the power dissipation in scaled devices, we propose a reliable, leakage-reducing low-power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are obtained with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to prove the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which results in greater circuit reliability. (semiconductor integrated circuits)

  8. VLSI Technology for Cognitive Radio

    Science.gov (United States)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks in cognitive radio is achieving an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensor. The VLSI structure of the optimised flexible spectrum sensor is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model opens up a new scheme for optimised and effective spectrum sensing.
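
    Energy detection reduces to comparing the average signal power in a sensing window against a threshold. The floating-point sketch below shows that decision rule; the threshold and test data are assumptions, and the paper's fixed-point, filter-based VLSI datapath is considerably more elaborate.

    #include <cstdio>
    #include <vector>

    // Classic energy detector: average |x[n]|^2 over a sensing window and compare
    // with a threshold.  The threshold value here is illustrative only.
    bool channelOccupied(const std::vector<double>& x, double threshold) {
        double energy = 0.0;
        for (double s : x) energy += s * s;        // accumulate instantaneous power
        double testStatistic = energy / x.size();  // average power in the window
        return testStatistic > threshold;          // decide primary user present
    }

    int main() {
        std::vector<double> noiseOnly(1024, 0.0), withSignal(1024, 0.0);
        for (size_t n = 0; n < 1024; ++n) {
            double noise = ((n * 2654435761u) % 1000) / 1000.0 - 0.5;  // crude noise
            noiseOnly[n]  = noise;
            withSignal[n] = noise + 0.8;           // add a strong signal component
        }
        std::printf("noise only : %s\n", channelOccupied(noiseOnly, 0.3) ? "busy" : "free");
        std::printf("with signal: %s\n", channelOccupied(withSignal, 0.3) ? "busy" : "free");
    }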

  9. Generic nano-imprint process for fabrication of nanowire arrays

    Energy Technology Data Exchange (ETDEWEB)

    Pierret, Aurelie; Hocevar, Moira; Algra, Rienk E; Timmering, Eugene C; Verschuuren, Marc A; Immink, George W G; Verheijen, Marcel A; Bakkers, Erik P A M [Philips Research Laboratories Eindhoven, High Tech Campus 11, 5656 AE Eindhoven (Netherlands); Diedenhofen, Silke L [FOM Institute for Atomic and Molecular Physics c/o Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Vlieg, E, E-mail: e.p.a.m.bakkers@tue.nl [IMM, Solid State Chemistry, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2010-02-10

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of additional undesired nanowires. We show that cleaning the samples before growth with piranha solution, in combination with a thermal anneal at 550 °C for InP and 700 °C for GaP, results in uniform nanowire arrays with 1% variation in nanowire length and without undesired extra nanowires. Our chemical cleaning procedure is applicable to other lithographic techniques such as e-beam lithography, and therefore represents a generic process.

  10. A Versatile Multichannel Digital Signal Processing Module for Microcalorimeter Arrays

    Science.gov (United States)

    Tan, H.; Collins, J. W.; Walby, M.; Hennig, W.; Warburton, W. K.; Grudberg, P.

    2012-06-01

    Different techniques have been developed for reading out microcalorimeter sensor arrays: individual outputs for small arrays, and time-division or frequency-division or code-division multiplexing for large arrays. Typically, raw waveform data are first read out from the arrays using one of these techniques and then stored on computer hard drives for offline optimum filtering, leading not only to requirements for large storage space but also limitations on achievable count rate. Thus, a read-out module that is capable of processing microcalorimeter signals in real time will be highly desirable. We have developed multichannel digital signal processing electronics that are capable of on-board, real time processing of microcalorimeter sensor signals from multiplexed or individual pixel arrays. It is a 3U PXI module consisting of a standardized core processor board and a set of daughter boards. Each daughter board is designed to interface a specific type of microcalorimeter array to the core processor. The combination of the standardized core plus this set of easily designed and modified daughter boards results in a versatile data acquisition module that not only can easily expand to future detector systems, but is also low cost. In this paper, we first present the core processor/daughter board architecture, and then report the performance of an 8-channel daughter board, which digitizes individual pixel outputs at 1 MSPS with 16-bit precision. We will also introduce a time-division multiplexing type daughter board, which takes in time-division multiplexing signals through fiber-optic cables and then processes the digital signals to generate energy spectra in real time.

  11. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    Science.gov (United States)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea

  12. Design and demonstration of a multitechnology FPGA for photonic information processing

    Science.gov (United States)

    Mal, Prosenjit; Hawk, Chris; Toshniwal, Kavita; Beyette, Fred R., Jr.

    2003-11-01

    We present here a novel architecture for a multi-technology field programmable gate array (MT-FPGA). Implemented in a conventional CMOS VLSI technology, the architecture is suitable for prototyping photonic information processing systems. We report that this new FPGA architecture will enable the design of reconfigurable systems that incorporate technologies outside the traditional electronic domain.

  13. Total focusing method with correlation processing of antenna array signals

    Science.gov (United States)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors, with and without correlation processing, are presented in the article. The software 'IDealSystem3D' by IDeal-Technologies was used for the experiments. Copper wires of different diameters located in a water bath were used as reflectors. The use of correlation processing makes it possible to obtain a more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program. This program allows varying the parameters of the antenna array and the sampling frequency.

  14. Model-based processing for underwater acoustic arrays

    CERN Document Server

    Sullivan, Edmund J

    2015-01-01

    This monograph presents a unified approach to model-based processing for underwater acoustic arrays. The use of physical models in passive array processing is not a new idea, but it has been used on a case-by-case basis, and as such, lacks any unifying structure. This work views all such processing methods as estimation procedures, which then can be unified by treating them all as a form of joint estimation based on a Kalman-type recursive processor, which can be recursive either in space or time, depending on the application. This is done for three reasons. First, the Kalman filter provides a natural framework for the inclusion of physical models in a processing scheme. Second, it allows poorly known model parameters to be jointly estimated along with the quantities of interest. This is important, since in certain areas of array processing already in use, such as those based on matched-field processing, the so-called mismatch problem either degrades performance or, indeed, prevents any solution at all. Third...
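
    The unifying element is a Kalman-type recursive processor. Reduced to a scalar random-walk state observed in white noise, the predict/correct cycle it builds on looks like the sketch below; the noise variances and measurements are illustrative, not the book's array formulation.

    #include <cstdio>

    // One-dimensional Kalman filter for a random-walk state observed in white
    // noise: predict, then correct with the Kalman gain.  Values are illustrative.
    struct Kalman1D {
        double x = 0.0;   // state estimate
        double P = 1.0;   // estimate variance
        double Q = 0.01;  // process-noise variance
        double R = 0.5;   // measurement-noise variance

        double update(double z) {
            // Prediction step (random-walk model: state unchanged, variance grows).
            P += Q;
            // Correction step.
            double K = P / (P + R);       // Kalman gain
            x += K * (z - x);             // innovation-weighted update
            P *= (1.0 - K);               // posterior variance
            return x;
        }
    };

    int main() {
        Kalman1D kf;
        double meas[] = {1.2, 0.9, 1.1, 1.3, 0.95, 1.05};   // noisy observations of ~1.0
        for (double z : meas)
            std::printf("z=%.2f -> x_hat=%.3f\n", z, kf.update(z));
    }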

  15. Design of a Low-Power VLSI Macrocell for Nonlinear Adaptive Video Noise Reduction

    Directory of Open Access Journals (Sweden)

    Sergio Saponara

    2004-09-01

    Full Text Available A VLSI macrocell for edge-preserving video noise reduction is proposed in the paper. It is based on a nonlinear rational filter enhanced by a noise estimator for blind and dynamic adaptation of the filtering parameters to the input signal statistics. The VLSI filter features a modular architecture allowing the extension of both mask size and filtering directions. Both spatial and spatiotemporal algorithms are supported. Simulation results with monochrome test videos prove its efficiency for many noise distributions, with PSNR improvements of up to 3.8 dB with respect to a nonadaptive solution. The VLSI macrocell has been realized in a 0.18 μm CMOS technology using a standard-cell library; it allows real-time processing of the main video formats, up to 30 fps (frames per second) 4CIF, with a power consumption on the order of a few mW.

  16. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    Science.gov (United States)

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings, there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
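
    Of the performance-critical pre-processing steps mentioned, spike detection is often implemented as a threshold crossing on the filtered trace. The single-channel, serial sketch below illustrates the idea (the tool parallelizes this across channels); the noise estimator, threshold factor and refractory period are assumptions.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Threshold-crossing spike detector with a refractory period, using a
    // threshold of k times an estimate of the noise standard deviation.
    std::vector<size_t> detectSpikes(const std::vector<double>& x,
                                     double k = 5.0, size_t refractory = 30) {
        // Noise sigma from the mean absolute deviation (simple estimate; many
        // MEA toolchains use a median-based estimator instead).
        double meanAbs = 0.0;
        for (double v : x) meanAbs += std::fabs(v);
        double sigma = (meanAbs / x.size()) * 1.2533;   // mean-|x|-to-sigma, Gaussian
        double thr = k * sigma;

        std::vector<size_t> spikeIdx;
        size_t lastSpike = 0; bool any = false;
        for (size_t n = 0; n < x.size(); ++n) {
            if (std::fabs(x[n]) > thr && (!any || n - lastSpike > refractory)) {
                spikeIdx.push_back(n);
                lastSpike = n; any = true;
            }
        }
        return spikeIdx;
    }

    int main() {
        std::vector<double> trace(1000, 0.0);
        for (size_t n = 0; n < trace.size(); ++n)
            trace[n] = 0.05 * std::sin(0.3 * n);   // low-amplitude background
        trace[200] = -1.0; trace[600] = 1.2;       // two artificial spikes
        auto spikes = detectSpikes(trace);
        std::printf("detected %zu spikes\n", spikes.size());
    }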

  17. Signal Processing for a Lunar Array: Minimizing Power Consumption

    Science.gov (United States)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    The motivation for the study is: (1) a Lunar Radio Array for low-frequency, high-redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high-precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the intergalactic medium and the evolution of large-scale structures; and testing whether the current cosmological model accurately describes the Universe before reionization. The Lunar Radio Array is a radio interferometer based on the far side of the Moon, which is necessary for precision measurements, is shielded from earth-based and solar RFI, and has no permanent ionosphere; it requires a minimum collecting area of approximately 1 square km and a brightness sensitivity of 10 mK, and several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  18. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify, with 98.2% accuracy, the origins of 549 explosions conducted by closely spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES, many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.

  19. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  20. Compact MOSFET models for VLSI design

    CERN Document Server

    Bhattacharyya, A B

    2009-01-01

    Practicing designers, students, and educators in the semiconductor field face an ever expanding portfolio of MOSFET models. In Compact MOSFET Models for VLSI Design , A.B. Bhattacharyya presents a unified perspective on the topic, allowing the practitioner to view and interpret device phenomena concurrently using different modeling strategies. Readers will learn to link device physics with model parameters, helping to close the gap between device understanding and its use for optimal circuit performance. Bhattacharyya also lays bare the core physical concepts that will drive the future of VLSI.

  1. Formal verification an essential toolkit for modern VLSI design

    CERN Document Server

    Seligman, Erik; Kumar, M V Achutha Kiran

    2015-01-01

    Formal Verification: An Essential Toolkit for Modern VLSI Design presents practical approaches for design and validation, with hands-on advice for working engineers integrating these techniques into their work. Building on a basic knowledge of System Verilog, this book demystifies FV and presents the practical applications that are bringing it into mainstream design and validation processes at Intel and other companies. The text prepares readers to effectively introduce FV in their organization and deploy FV techniques to increase design and validation productivity. Presents formal verific

  2. A Knowledge Based Approach to VLSI CAD

    Science.gov (United States)

    1983-09-01

    A Knowledge Based Approach to VLSI CAD, Louis Steinberg and ... major issues lies in building up and managing the knowledge base of design expertise. We expect that, as with many recent expert systems, in order to

  3. Electro-optic techniques for VLSI interconnect

    Science.gov (United States)

    Neff, J. A.

    1985-03-01

    A major limitation to achieving significant speed increases in very large scale integration (VLSI) lies in the metallic interconnects. They are costly not only from the charge transport standpoint but also from capacitive loading effects. The Defense Advanced Research Projects Agency, in pursuit of the fifth generation supercomputer, is investigating alternatives to the VLSI metallic interconnects, especially the use of optical techniques to transport the information either inter or intrachip. As the on chip performance of VLSI continues to improve via the scale down of the logic elements, the problems associated with transferring data off and onto the chip become more severe. The use of optical carriers to transfer the information within the computer is very appealing from several viewpoints. Besides the potential for gigabit propagation rates, the conversion from electronics to optics conveniently provides a decoupling of the various circuits from one another. Significant gains will also be realized in reducing cross talk between the metallic routings, and the interconnects need no longer be constrained to the plane of a thin film on the VLSI chip. In addition, optics can offer an increased programming flexibility for restructuring the interconnect network.

  4. High performance VLSI telemetry data systems

    Science.gov (United States)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground based telemetry acquisition systems well above current system capabilities. Adaptation of space telemetry data transport and processing standards such as those specified by the Consultative Committee for Space Data Systems (CCSDS) standards and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements, over the last five years, has resulted in significant solutions to these problems. This solution, referred to as the functional components approach includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASIC's) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence, and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board level functional component, to integrated telemetry data system.

  5. SAR processing with stepped chirps and phased array antennas.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas. Wideband radar signals can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with their own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.

  6. VLSI Design of SVM-Based Seizure Detection System With On-Chip Learning Capability.

    Science.gov (United States)

    Feng, Lichen; Li, Zunchao; Wang, Yuanfa

    2018-02-01

    A portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and to attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time-frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with a table-driven Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.
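
    At inference time, a Gaussian-kernel SVM evaluates a weighted kernel sum over its support vectors. The sketch below shows that decision function with toy support vectors and weights; it is not the trained seizure model, and the hardware uses a table-driven kernel rather than calls to exp().

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Decision function of a Gaussian-kernel SVM:
    //   f(x) = sum_i alpha_i * y_i * exp(-gamma * ||x - sv_i||^2) + b
    // Support vectors, weights and bias below are toy values, not a trained model.
    double svmDecision(const std::vector<std::vector<double>>& sv,
                       const std::vector<double>& alphaY,   // alpha_i * y_i
                       double b, double gamma,
                       const std::vector<double>& x) {
        double f = b;
        for (size_t i = 0; i < sv.size(); ++i) {
            double d2 = 0.0;
            for (size_t k = 0; k < x.size(); ++k) {
                double diff = x[k] - sv[i][k];
                d2 += diff * diff;                 // squared Euclidean distance
            }
            f += alphaY[i] * std::exp(-gamma * d2);   // Gaussian kernel term
        }
        return f;                                   // sign(f): seizure vs. non-seizure
    }

    int main() {
        std::vector<std::vector<double>> sv = { {1.0, 2.0}, {-1.0, -1.5} };
        std::vector<double> alphaY = { 0.8, -0.8 };
        std::vector<double> x = { 0.9, 1.8 };       // feature vector from the FE stage
        double f = svmDecision(sv, alphaY, -0.05, 0.5, x);
        std::printf("decision value = %.3f -> %s\n", f, f > 0 ? "class +1" : "class -1");
    }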

  7. Harnessing VLSI System Design with EDA Tools

    CERN Document Server

    Kamat, Rajanish K; Gaikwad, Pawan K; Guhilot, Hansraj

    2012-01-01

    This book explores various dimensions of EDA technologies for achieving different goals in VLSI system design. Although the scope of EDA is very broad and comprises diversified hardware and software tools to accomplish different phases of VLSI system design, such as design, layout, simulation, testability, prototyping and implementation, this book focuses only on demystifying the code, a.k.a. firmware development and its implementation with FPGAs. Since there are a variety of languages for system design, this book covers various issues related to VHDL, Verilog and System C synergized with EDA tools, using a variety of case studies such as testability, verification and power consumption. * Covers aspects of VHDL, Verilog and Handel C in one text; * Enables designers to judge the appropriateness of each EDA tool for relevant applications; * Omits discussion of design platforms and focuses on design case studies; * Uses design case studies from diversified application domains such as network on chip, hospital on...

  8. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. Extensive study of the properties of these new pseudorandom number generators is made using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design for testability circuitry.
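
    The generators described are one-dimensional cellular automata in which every cell updates in parallel on each clock. The sketch below steps such an automaton in software; rule 30 and the cyclic boundary are used purely as an example of an elementary rule, and the specific rules and boundary conditions analysed in the thesis may differ.

    #include <bitset>
    #include <cstdio>

    // One-dimensional elementary cellular automaton used as a parallel pseudorandom
    // bit source: every cell updates simultaneously from itself and its two
    // neighbours, with cyclic boundary conditions.
    constexpr int N = 64;

    std::bitset<N> step(const std::bitset<N>& s, unsigned rule) {
        std::bitset<N> next;
        for (int i = 0; i < N; ++i) {
            int left  = s[(i + N - 1) % N];   // cyclic boundary
            int self  = s[i];
            int right = s[(i + 1) % N];
            int idx = (left << 2) | (self << 1) | right;   // 3-bit neighbourhood
            next[i] = (rule >> idx) & 1u;                  // table lookup into the rule
        }
        return next;
    }

    int main() {
        std::bitset<N> state;
        state[N / 2] = 1;                 // single seed cell
        for (int t = 0; t < 8; ++t) {
            state = step(state, 30);      // rule 30, as an example
            // One pseudorandom bit per cell is available in parallel each step;
            // here we just print the centre cell's bit stream.
            std::printf("%d", static_cast<int>(state[N / 2]));
        }
        std::printf("\n");
    }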

  9. Heavy ion tests on programmable VLSI

    International Nuclear Information System (INIS)

    Provost-Grellier, A.

    1989-11-01

    Radiation from the space environment induces operational faults in onboard computer systems. A strategy for the qualification and selection of very large scale integrated (VLSI) circuits is needed. The 'upset' phenomenon is known to be the most critical radiation effect on integrated circuits. Strategies for testing integrated circuits are reviewed. A method and a test device were developed and applied to circuits that are candidates for space applications. Cyclotron, synchrotron and californium-source experiments were carried out [fr

  10. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  11. Mosaic Process for the Fabrication of an Acoustic Transducer Array

    National Research Council Canada - National Science Library

    2005-01-01

    .... Deriving a geometric shape for the array based on the established performance level. Selecting piezoceramic materials based on considerations related to the performance level and derived geometry...

  12. Blind Time-Frequency Analysis for Source Discrimination in Multisensor Array Processing

    National Research Council Canada - National Science Library

    Amin, Moeness

    1999-01-01

    .... We have clearly demonstrated, through analysis and simulations, the offerings of time-frequency distributions in solving key problems in sensor array processing, including direction finding, source...

  13. Point DCT VLSI Architecture for Emerging HEVC Standard

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmed

    2012-01-01

    Full Text Available This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4×4 up to 32×32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to gain speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into sparse submatrices in order to reduce the multiplications. Finally, multiplications are completely eliminated using the lifting scheme. The proposed architecture sustains real-time processing of a 1080p HD video codec running at 150 MHz.
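
    As a purely functional reference for such a transform, the floating-point N-point DCT-II can be written directly as below. HEVC itself specifies a scaled integer approximation of this matrix, and the architecture in the paper replaces multiplications with a lifting scheme, so this sketch is only useful as a golden model for comparison.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Direct floating-point N-point DCT-II with orthonormal scaling, usable as a
    // reference model when verifying a hardware transform.
    std::vector<double> dct(const std::vector<double>& x) {
        const double PI = std::acos(-1.0);
        const size_t N = x.size();
        std::vector<double> X(N, 0.0);
        for (size_t k = 0; k < N; ++k) {
            double acc = 0.0;
            for (size_t n = 0; n < N; ++n)
                acc += x[n] * std::cos(PI * (2.0 * n + 1.0) * k / (2.0 * N));
            double scale = (k == 0) ? std::sqrt(1.0 / N) : std::sqrt(2.0 / N);
            X[k] = scale * acc;           // orthonormal scaling
        }
        return X;
    }

    int main() {
        std::vector<double> block = {1, 2, 3, 4, 5, 6, 7, 8};   // one 8-point row
        auto X = dct(block);
        for (double v : X) std::printf("%7.3f ", v);
        std::printf("\n");
    }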

  14. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...

  15. Sampling phased array a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

    Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to the wide application of phased array systems. A new phased array technique, called the "Sampling Phased Array", has been developed at the Fraunhofer Institute for Non-Destructive Testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling

  16. VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2016-06-01

    Full Text Available Convolutional encoding and data decoding are fundamental processes in convolutional error correction. One of the most popular error-correction methods in decoding is the Viterbi algorithm. It is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity and configurability. In this research, we specifically propose a VLSI architecture for a configurable and low-complexity design of a hard-decision Viterbi decoding algorithm. The configurable and low-complexity design is achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs, only by changing the trace-back parameter value. Its computational process needs a latency of only N + 2 clock cycles, where N is the number of trace-backs. Its configurability has been proven for N = 8, N = 16, N = 32 and N = 64. Furthermore, the proposed design was synthesized and evaluated on Xilinx and Altera FPGA target boards for area consumption and speed performance.
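
    The decoder reverses a convolutional encoder, so the trace-back discussion is easier to follow with an encoder in mind. The sketch below implements a rate-1/2, constraint-length-3 encoder with generator polynomials 7 and 5 (octal); these generators and the test message are textbook assumptions, not taken from the paper.

    #include <cstdio>
    #include <vector>

    // Rate-1/2, constraint-length-3 convolutional encoder with generator
    // polynomials g0 = 111b (7 octal) and g1 = 101b (5 octal).
    std::vector<int> convEncode(const std::vector<int>& bits) {
        unsigned state = 0;                      // two memory bits [s1, s0]
        std::vector<int> out;
        for (int b : bits) {
            unsigned reg = (static_cast<unsigned>(b) << 2) | state;  // [b, s1, s0]
            int c0 = ((reg >> 2) ^ (reg >> 1) ^ reg) & 1;   // g0 = 1 1 1
            int c1 = ((reg >> 2) ^ reg) & 1;                // g1 = 1 0 1
            out.push_back(c0);
            out.push_back(c1);
            state = (reg >> 1) & 0x3;            // shift the new bit into the memory
        }
        return out;
    }

    int main() {
        std::vector<int> msg = {1, 0, 1, 1, 0, 0};   // includes two flush zeros
        auto coded = convEncode(msg);
        for (int c : coded) std::printf("%d", c);
        std::printf("\n");
    }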

  17. CAPCAL, 3-D Capacitance Calculator for VLSI Purposes

    International Nuclear Information System (INIS)

    Seidl, Albert; Klose, Helmut; Svoboda, Mildos

    2004-01-01

    1 - Description of program or function: CAPCAL is devoted to the calculation of capacitances of three-dimensional wiring configurations as typically used in VLSI circuits. Due to analogies in the mathematical description, conductance and heat-transport problems can also be treated by CAPCAL. To handle the problem using CAPCAL, some approximations have to be applied to the structure under investigation: - the overall geometry has to be confined to a finite domain by using symmetry properties of the problem - non-rectangular structures have to be simplified into an artwork of multiple boxes. 2 - Method of solution: The electrical field is described by the Laplace equation. The differential equation is discretized by using the finite-difference method. NEA-1327/01: The linear equation system is solved by using a combined ADI-multigrid method. NEA-1327/04: The linear equation system is solved by using a conjugate gradient method for CAPCAL V1.3. NEA-1327/05: The linear equation system is solved by using a conjugate gradient method for CAPCAL V1.3. 3 - Restrictions on the complexity of the problem: NEA-1327/01: Certain restrictions of use may arise from the dimensioning of arrays. Field lengths are defined via PARAMETER statements, which can easily be modified. If the geometry of the problem is defined such that Neumann boundaries dominate, the convergence of the iterative equation-system solver is affected
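
    The numerical core is a finite-difference discretization of the Laplace equation solved iteratively. The two-dimensional Gauss-Seidel relaxation sketch below shows the idea on a toy geometry; it is only an illustration, as CAPCAL solves the three-dimensional problem with ADI-multigrid or conjugate-gradient solvers, and the grid, electrode placement and iteration count here are assumptions.

    #include <cstdio>
    #include <vector>

    // Gauss-Seidel relaxation of the Laplace equation on a 2-D grid with fixed
    // (Dirichlet) potentials: 1 V on an inner "electrode" block, 0 V on the
    // grounded outer boundary.
    int main() {
        const int N = 33;
        std::vector<std::vector<double>> phi(N, std::vector<double>(N, 0.0));
        std::vector<std::vector<bool>> fixed(N, std::vector<bool>(N, false));

        for (int i = 0; i < N; ++i)                         // grounded outer boundary
            fixed[0][i] = fixed[N - 1][i] = fixed[i][0] = fixed[i][N - 1] = true;
        for (int i = 14; i <= 18; ++i)                      // small inner electrode
            for (int j = 14; j <= 18; ++j) { phi[i][j] = 1.0; fixed[i][j] = true; }

        for (int it = 0; it < 2000; ++it) {                 // fixed iteration count
            for (int i = 1; i < N - 1; ++i)
                for (int j = 1; j < N - 1; ++j)
                    if (!fixed[i][j])
                        phi[i][j] = 0.25 * (phi[i + 1][j] + phi[i - 1][j] +
                                            phi[i][j + 1] + phi[i][j - 1]);
        }
        std::printf("potential midway to the boundary: %.4f V\n", phi[16][7]);
    }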

  18. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    Science.gov (United States)

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those produced by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted into a skin simulant. In both cases the insertion depth was approximately 60% of the needle length, and the height reduction after insertion was in both cases approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. VLSI Based Multiprocessor Communications Networks.

    Science.gov (United States)

    1982-09-01

    Networks". The contract began on September 1,1980 and was approved on scientific /technical grounds for a duration of three years. Incremental funding was...values for the individual delays will vary from comunicating modules (ij) are shown in Figure 4 module to module due to processing and fabrication

  20. CMOS VLSI Active-Pixel Sensor for Tracking

    Science.gov (United States)

    Pain, Bedabrata; Sun, Chao; Yang, Guang; Heynssens, Julie

    2004-01-01

    An architecture for a proposed active-pixel sensor (APS) and a design to implement the architecture in a complementary metal oxide semiconductor (CMOS) very-large-scale integrated (VLSI) circuit provide for some advanced features that are expected to be especially desirable for tracking pointlike features of stars. The architecture would also make this APS suitable for robotic-vision and general pointing and tracking applications. CMOS imagers in general are well suited for pointing and tracking because they can be configured for random access to selected pixels and to provide readout from windows of interest within their fields of view. However, until now, the architectures of CMOS imagers have not supported multiwindow operation or low-noise data collection. Moreover, smearing and motion artifacts in collected images have made prior CMOS imagers unsuitable for tracking applications. The proposed CMOS imager (see figure) would include an array of 1,024 by 1,024 pixels containing high-performance photodiode-based APS circuitry. The pixel pitch would be 9 μm. The operations of the pixel circuits would be sequenced and otherwise controlled by an on-chip timing and control block, which would enable the collection of image data, during a single frame period, from either the full frame (that is, all 1,024 by 1,024 pixels) or from within as many as 8 different arbitrarily placed windows as large as 8 by 8 pixels each. A typical prior CMOS APS operates in a row-at-a-time ("rolling-shutter") readout mode, which gives rise to exposure skew. In contrast, the proposed APS would operate in a sample-first/read-later mode, suppressing rolling-shutter effects. In this mode, the analog readout signals from the pixels corresponding to the windows of interest (which windows, in the star-tracking application, would presumably contain guide stars) would be sampled rapidly by routing them through a programmable diagonal switch array to an on-chip parallel analog memory array. The

  1. Application of Seismic Array Processing to Tsunami Early Warning

    Science.gov (United States)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for the near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implement this method in a simulated real-time environment, and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the Earthscope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 min, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800

  2. Oxide nano-rod array structure via a simple metallurgical process

    International Nuclear Information System (INIS)

    Nanko, M; Do, D T M

    2011-01-01

    A simple method for fabricating an oxide nano-rod array structure via a metallurgical process is reported. Some dilute alloys such as Ni(Al) solid solution show internal oxidation with rod-like oxide precipitates during high-temperature oxidation at low oxygen partial pressure. By removing the metal part of the internal oxidation zone, an oxide nano-rod array structure can be developed on the surface of metallic components. In this report, Al2O3 or NiAl2O4 nano-rod array structures were prepared by using Ni(Al) solid solution. Effects of Cr addition to the Ni(Al) solid solution on internal oxidation are also reported. A pack cementation process for aluminizing the Ni surface was applied to prepare nano-rod array components with a desired shape. Near-net-shape Ni components with an oxide nano-rod array structure on their surface can be prepared by using the pack cementation process and internal oxidation.

  3. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  4. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  5. Multi-net optimization of VLSI interconnect

    CERN Document Server

    Moiseev, Konstantin; Wimer, Shmuel

    2015-01-01

    This book covers layout design and layout migration methodologies for optimizing multi-net wire structures in advanced VLSI interconnects. Scaling-dependent models for interconnect power, interconnect delay and crosstalk noise are covered in depth, and several design optimization problems are addressed, such as minimization of interconnect power under delay constraints, or design for minimal delay in wire bundles within a given routing area. A handy reference or a guide for design methodologies and layout automation techniques, this book provides a foundation for physical design challenges of interconnect in advanced integrated circuits.  • Describes the evolution of interconnect scaling and provides new techniques for layout migration and optimization, focusing on multi-net optimization; • Presents research results that provide a level of design optimization which does not exist in commercially-available design automation software tools; • Includes mathematical properties and conditions for optimal...

  6. DPL/Daedalus design environment (for VLSI)

    Energy Technology Data Exchange (ETDEWEB)

    Batali, J; Mayle, N; Shrobe, H; Sussman, G; Weise, D

    1981-01-01

    The DPL/Daedalus design environment is an interactive VLSI design system implemented at the MIT Artificial Intelligence Laboratory. The system consists of several components: a layout language called DPL (for design procedure language); an interactive graphics facility (Daedalus); and several special purpose design procedures for constructing complex artifacts such as PLAs and microprocessor data paths. Coordinating all of these is a generalized property list data base which contains both the data representing circuits and the procedures for constructing them. The authors first review the nature of the data base and then turn to DPL and Daedalus, the two most common ways of entering information into the data base. The next two sections review the specialized procedures for constructing PLAs and data paths; the final section describes a tool for hierarchical node extraction. 5 references.

  7. Development methods for VLSI-processors

    International Nuclear Information System (INIS)

    Horninger, K.; Sandweg, G.

    1982-01-01

    The aim of this project, which was originally planned for 3 years, was the development of modern system and circuit concepts for VLSI processors having a 32-bit-wide data path. The result of this first year's work is the concept of a general-purpose processor. This processor is not only logically but also physically (on the chip) divided into four functional units: a microprogrammable instruction unit, an execution unit in slice technique, a fully associative cache memory and an I/O unit. For the ALU of the execution unit, circuits in PLA and slice techniques have been realized. On the basis of regularity, area consumption and achievable performance, the slice technique has been preferred. The designs utilize self-testing circuitry. (orig.) [de

  8. VLSI Design of Trusted Virtual Sensors

    Directory of Open Access Journals (Sweden)

    Macarena C. Martínez-Rodríguez

    2018-01-01

    Full Text Available This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).

  9. VLSI Design of Trusted Virtual Sensors.

    Science.gov (United States)

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).

  10. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    Science.gov (United States)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at a European longitude, each with 400 12-m downlink antennas, plus a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: provide a means to evaluate the performance of the Breadboard Array's antenna subsystem; design and build prototype hardware; demonstrate and evaluate proposed signal processing techniques; and gain experience with various technologies that may be used in the Large Array. Results are summarized.

  11. Efficient processing of two-dimensional arrays with C or C++

    Science.gov (United States)

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
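    The study compares specific source-code techniques under controlled conditions; the snippet below is not that test suite, only a minimal, self-contained C++ sketch of two common ways to hold and traverse a dense two-dimensional array, one of which keeps memory accesses contiguous. Array sizes and values are arbitrary.

```cpp
// Minimal sketch (not the study's test code): two common C++ techniques for a
// dense 2-D array. A single contiguous block indexed in row-major order keeps
// accesses sequential, which typically helps cache behavior.
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t rows = 2048, cols = 2048;

    // Technique A: one contiguous allocation, manual row-major indexing.
    std::vector<double> a(rows * cols, 1.0);
    double sumA = 0.0;
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            sumA += a[i * cols + j];          // element (i, j)

    // Technique B: vector of row vectors (extra indirection per row).
    std::vector<std::vector<double>> b(rows, std::vector<double>(cols, 1.0));
    double sumB = 0.0;
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            sumB += b[i][j];

    std::printf("sumA=%.0f sumB=%.0f\n", sumA, sumB);
    return 0;
}
```

    Loop order is the kind of programmer choice the study quantifies: iterating the column index in the inner loop matches row-major layout, whereas swapping the loops strides through memory and usually costs considerably more, even after compiler optimization.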

  12. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    National Research Council Canada - National Science Library

    Horiuchi, Timothy K; Krishnaprasad, P. S

    2007-01-01

    .... This includes multiple efforts related to a VLSI-based echolocation system being developed in one of our laboratories from algorithm development, bat flight data analysis, to VLSI circuit design...

  13. Sampling phased array - a new technique for ultrasonic signal processing and imaging

    OpenAIRE

    Verkooijen, J.; Boulavinov, A.

    2008-01-01

    Over the past 10 years, the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called 'Sampling Phased Array', has been developed in the Fraunhofer Institute for Non-Destructive Testing [1]. It realises a unique approach of measurement and processing of ultrasonic signals. Th...

  14. Sampling phased array, a new technique for ultrasonic signal processing and imaging now available to industry

    OpenAIRE

    Verkooijen, J.; Bulavinov, A.

    2008-01-01

    Over the past 10 years the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called "Sampling Phased Array" has been developed in the Fraunhofer Institute for non-destructive testing [1]. It realizes a unique approach of measurement and processing of ultrasonic signals. The s...

  15. Studies of implosion processes of nested tungsten wire-array Z-pinch

    International Nuclear Information System (INIS)

    Ning Cheng; Ding Ning; Liu Quan; Yang Zhenhua

    2006-01-01

    The nested wire-array is a promising kind of structured load because it can improve the quality of the Z-pinch plasma and enhance the radiation power of the X-ray source. Based on the zero-dimensional model, the assumption of wire-array collision, and the criterion of optimized load (maximal load kinetic energy), optimization of the typical nested wire-array as a load of the Z machine at Sandia Laboratory was carried out. It was shown that the load has been basically optimized. The Z-pinch process of the typical load was numerically studied by means of a one-dimensional three-temperature radiation magneto-hydrodynamics (RMHD) code. The obtained results reproduce the dynamic process of the Z-pinch and show the implosion trajectory of the nested wire-array and the transfer process of drive current between the inner and outer array. The experimental and computational X-ray pulses were compared, and it was suggested that the assumption of wire-array collision is reasonable in nested wire-array Z-pinches, at least for the current level of the Z machine. (authors)
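    As a purely illustrative sketch of the zero-dimensional starting point such load optimizations use (not the paper's model, and not the Z-machine parameters), the following snippet integrates the thin-shell equation of motion, m dv/dt = -μ0 I(t)²/(4πr) per unit length, under an assumed sin² current rise and reports the kinetic energy reached when the shell has imploded to one tenth of its initial radius. All parameter values are assumptions.

```cpp
// Hedged sketch of a zero-dimensional (thin-shell) implosion estimate.
// Per unit length the shell obeys m dv/dt = -mu0 * I(t)^2 / (4 pi r).
// Current waveform and load parameters are illustrative assumptions.
#include <cmath>
#include <cstdio>

int main() {
    const double PI  = 3.14159265358979323846;
    const double mu0 = 4.0e-7 * PI;
    const double Ip  = 20.0e6;     // peak current [A] (assumed)
    const double tau = 100.0e-9;   // current rise time [s] (assumed)
    const double r0  = 20.0e-3;    // initial array radius [m] (assumed)
    const double m   = 6.0e-4;     // load mass per unit length [kg/m] (assumed)

    double r = r0, v = 0.0, t = 0.0;
    const double dt = 1.0e-11;
    while (r > 0.1 * r0 && t < 2.0 * tau) {
        const double s = std::sin(PI * t / (2.0 * tau));
        const double I = (t < tau) ? Ip * s * s : Ip;        // sin^2 current rise
        const double a = -mu0 * I * I / (4.0 * PI * r * m);  // radial acceleration
        v += a * dt;
        r += v * dt;
        t += dt;
    }
    const double kePerLength = 0.5 * m * v * v;   // kinetic energy [J/m]
    std::printf("implosion to 0.1*r0 at t = %.1f ns, v = %.2e m/s, KE = %.2e J/m\n",
                t * 1e9, std::fabs(v), kePerLength);
    return 0;
}
```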

  16. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly to 1/3 of that of a conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  17. Signal and array processing techniques for RFID readers

    Science.gov (United States)

    Wang, Jing; Amin, Moeness; Zhang, Yimin

    2006-05-01

    Radio Frequency Identification (RFID) has recently attracted much attention in both the technical and business communities. It has found wide applications in, for example, toll collection, supply-chain management, access control, localization tracking, real-time monitoring, and object identification. Situations may arise where the movement directions of tagged RFID items through a portal are of interest and must be determined. Doppler estimation may prove complicated or impractical to perform by RFID readers. Several alternative approaches, including the use of an array of sensors with arbitrary geometry, can be applied. In this paper, we consider direction-of-arrival (DOA) estimation techniques for application to near-field narrowband RFID problems. Particularly, we examine the use of a pair of RFID antennas to track moving RFID tagged items through a portal. With two antennas, the near-field DOA estimation problem can be simplified to a far-field problem, yielding a simple way of identifying the direction of the tag movement, where only one parameter, the angle, needs to be considered. In this case, tracking the moving direction of the tag simply amounts to computing the spatial cross-correlation between the data samples received at the two antennas. It is pointed out that the radiation patterns of the reader and tag antennas, particularly their phase characteristics, have a significant effect on the performance of DOA estimation. Indoor experiments were conducted in the Radar Imaging and RFID Labs at Villanova University to validate the proposed technique for target movement direction estimation.
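    As a hedged illustration of the two-antenna idea described above (synthetic data, assumed element spacing and wavelength, not the authors' experimental setup), the sketch below estimates a far-field angle from the phase of the spatial cross-correlation between two receive channels.

```cpp
// Hedged sketch: estimate the angle of a narrowband source from the phase of
// the spatial cross-correlation between two antennas spaced d apart.
// Synthetic data; d, lambda and the sign convention are illustrative choices.
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const double PI = 3.14159265358979323846;
    const double lambda = 0.327;          // ~915 MHz UHF wavelength [m] (assumed)
    const double d = lambda / 2.0;        // element spacing (assumed)
    const double trueTheta = 25.0 * PI / 180.0;
    const int N = 1000;

    // Antenna 2 sees the same signal with a geometric phase shift.
    const double phi = 2.0 * PI * d * std::sin(trueTheta) / lambda;
    std::vector<std::complex<double>> x1(N), x2(N);
    for (int n = 0; n < N; ++n) {
        std::complex<double> s = std::polar(1.0, 0.01 * n); // arbitrary narrowband signal
        x1[n] = s;
        x2[n] = s * std::polar(1.0, phi);
    }

    // Spatial cross-correlation between the two channels.
    std::complex<double> R(0.0, 0.0);
    for (int n = 0; n < N; ++n) R += x2[n] * std::conj(x1[n]);
    R /= static_cast<double>(N);

    const double thetaHat = std::asin(std::arg(R) * lambda / (2.0 * PI * d));
    std::printf("estimated DOA: %.1f deg\n", thetaHat * 180.0 / PI);
    return 0;
}
```

    Watching the sign of the estimated angle over successive snapshots then indicates which way the tag is moving through the portal, which is the quantity of interest in the paper.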

  18. Built-in self-repair of VLSI memories employing neural nets

    Science.gov (United States)

    Mazumder, Pinaki

    1998-10-01

    The decades of the Eighties and the Nineties have witnessed the spectacular growth of VLSI technology, when the chip size has increased from a few hundred devices to a staggering multi-million transistors. This trend is expected to continue as the CMOS feature size progresses towards the nanometric dimension of 100 nm and less. The SIA roadmap projects that, whereas DRAM chips will integrate over 20 billion devices in the next millennium, future microprocessors may incorporate over 100 million transistors on a single chip. As the VLSI chip size increases, the limited accessibility of circuit components poses great difficulty for external diagnosis and replacement in the presence of faulty components. For this reason, extensive work has been done on built-in self-test techniques, but little research is known concerning built-in self-repair. Moreover, the extra hardware introduced by conventional fault-tolerance techniques is also likely to become faulty, therefore causing the circuit to be useless. This research demonstrates the feasibility of implementing electronic neural networks as intelligent hardware for memory array repair. Most importantly, we show that the neural network control possesses a robust and degradable computing capability under various fault conditions. Overall, a yield analysis performed on 64K DRAMs shows that the yield can be improved from as low as 20 percent to near 99 percent due to the self-repair design, with an overhead of no more than 7 percent.

  19. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g., high resolution, realistic animation, real-time display, etc. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e., object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  20. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    Science.gov (United States)

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
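    The rate-based limit described above can be pictured with a classical rate-driven Hebbian update. The sketch below uses Oja's rule, a standard bounded Hebbian variant, purely for illustration; it is not the CAVIAR chip's spike-based rule, and the rates, learning rate, and iteration count are arbitrary assumptions.

```cpp
// Illustrative rate-based Hebbian learning (Oja's rule); not the chip's
// spike-based rule. With a single repeated input pattern, the weight vector
// converges toward the direction of that pattern, which is the classical
// Hebbian behavior the spike-based rule reproduces in its rate-based regime.
#include <cstdio>
#include <vector>

int main() {
    const double eta = 0.05;                        // learning rate (assumed)
    std::vector<double> x = {0.9, 0.1, 0.5};        // normalized input rates (assumed)
    std::vector<double> w = {0.3, 0.3, 0.3};        // initial synaptic weights

    for (int step = 0; step < 500; ++step) {
        double y = 0.0;                              // output rate (linear neuron)
        for (std::size_t i = 0; i < w.size(); ++i) y += w[i] * x[i];
        for (std::size_t i = 0; i < w.size(); ++i)
            w[i] += eta * y * (x[i] - y * w[i]);     // Oja's rule: Hebb term + normalization
    }
    std::printf("final weights: %.3f %.3f %.3f\n", w[0], w[1], w[2]);
    return 0;
}
```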

  1. Monolithic active pixel sensors (MAPS) in a VLSI CMOS technology

    CERN Document Server

    Turchetta, R; Manolopoulos, S; Tyndel, M; Allport, P P; Bates, R; O'Shea, V; Hall, G; Raymond, M

    2003-01-01

    Monolithic Active Pixel Sensors (MAPS) designed in a standard VLSI CMOS technology have recently been proposed as a compact pixel detector for the detection of high-energy charged particles in vertex/tracking applications. MAPS, also named CMOS sensors, are already extensively used in visible light applications. With respect to other competing imaging technologies, CMOS sensors have several potential advantages in terms of low cost, low power, lower noise at higher speed, random access of pixels, which allows windowing of regions of interest, and the ability to integrate several functions on the same chip. Altogether, this leads to the concept of a 'camera-on-a-chip'. In this paper, we review the use of CMOS sensors for particle physics and we analyse their performance in terms of efficiency (fill factor), signal generation, noise, readout speed and sensor area. In most high-energy physics applications, data reduction is needed in the sensor at an early stage of the data processing before transfer of the data to ta...

  2. An electron undulating ring for VLSI lithography

    International Nuclear Information System (INIS)

    Tomimasu, T.; Mikado, T.; Noguchi, T.; Sugiyama, S.; Yamazaki, T.

    1985-01-01

    The development of the ETL storage ring ''TERAS'' as an undulating ring has been continued to achieve a wide area exposure of synchrotron radiation (SR) in VLSI lithography. Stable vertical and horizontal undulating motions of stored beams are demonstrated around a horizontal design orbit of TERAS, using two small steering magnets, of which one is used for vertical undulating and the other for horizontal undulating. Each steering magnet is inserted into one of the periodic configuration of guide field elements. As one of the useful applications of undulating electron beams, a vertically wide exposure of SR has been demonstrated in SR lithography. The maximum vertical deviation from the design orbit occurs near the steering magnet. The maximum vertical tilt angle of the undulating beam near the nodes is about ±2 mrad for a steering magnetic field of 50 gauss. Another proposal is for high-intensity, uniform and wide exposure of SR from a wiggler installed in TERAS, using vertical and horizontal undulating motions of stored beams. A 1.4 m long permanent magnet wiggler has been installed for this purpose this April

  3. Multi-valued LSI/VLSI logic design

    Science.gov (United States)

    Santrakul, K.

    A procedure for synthesizing any large complex logic system, such as LSI and VLSI integrated circuits, is described. This scheme uses Multi-Valued Multiplexers (MVMUX) as the basic building blocks and the tree as the structure of the circuit realization. Simple built-in test circuits included in the network (the main circuit) provide thorough functional checking of the network at any time. In brief, four major contributions are made: (1) a multi-valued Algorithmic State Machine (ASM) chart for describing LSI/VLSI behavior; (2) a tree-structured multi-valued multiplexer network which can be obtained directly from an ASM chart; (3) a heuristic tree-structured synthesis method for realizing any combinational logic with minimal or nearly-minimal MVMUX; and (4) a hierarchical design of LSI/VLSI with built-in parallel testing capability.

  4. APD arrays and large-area APDs via a new planar process

    CERN Document Server

    Farrell, R; Vanderpuye, K; Grazioso, R; Myers, R; Entine, G

    2000-01-01

    A fabrication process has been developed which allows the beveled-edge type of avalanche photodiode (APD) to be made without the need for the artful bevel formation steps. This new process, applicable to both APD arrays and to discrete detectors, greatly simplifies manufacture and should lead to significant cost reduction for such photodetectors. This is achieved through a simple innovation that allows isolation around the device or array pixel to be brought into the plane of the surface of the silicon wafer, hence a planar process. A description of the new process is presented along with performance data for a variety of APD device and array configurations. APD array pixel gains in excess of 10 000 have been measured. Array pixel coincidence timing resolution of less than 5 ns has been demonstrated. An energy resolution of 6% for 662 keV gamma-rays using a CsI(Tl) scintillator on a planar processed large-area APD has been recorded. Discrete APDs with active areas up to 13 cm² have been operated.

  5. Assembly and Integration Process of the First High Density Detector Array for the Atacama Cosmology Telescope

    Science.gov (United States)

    Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Wollack, Edward J.

    2016-01-01

    The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer, leading to both a much denser detector density and readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for the next generation of CMB experiments, which will continue to increase the pixel number and density. We present the detailed assembly process of the first AdvACT detector array.

  6. Using adaptive antenna array in LTE with MIMO for space-time processing

    Directory of Open Access Journals (Sweden)

    Abdourahamane Ahmed Ali

    2015-04-01

    Full Text Available Methods for improving existing wireless transmission systems are proposed. The mathematical apparatus is presented and verified with models, graphs of which are shown, for the use of an adaptive antenna array in LTE with MIMO for space-time processing. The results show that the improvements associated with space-time processing have a positive effect on LTE cell size and throughput

  7. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductor (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  8. Handbook of VLSI chip design and expert systems

    CERN Document Server

    Schwarz, A F

    1993-01-01

    Handbook of VLSI Chip Design and Expert Systems provides information pertinent to the fundamental aspects of expert systems, which provides a knowledge-based approach to problem solving. This book discusses the use of expert systems in every possible subtask of VLSI chip design as well as in the interrelations between the subtasks.Organized into nine chapters, this book begins with an overview of design automation, which can be identified as Computer-Aided Design of Circuits and Systems (CADCAS). This text then presents the progress in artificial intelligence, with emphasis on expert systems.

  9. VLSI micro- and nanophotonics science, technology, and applications

    CERN Document Server

    Lee, El-Hang; Razeghi, Manijeh; Jagadish, Chennupati

    2011-01-01

    Addressing the growing demand for larger capacity in information technology, VLSI Micro- and Nanophotonics: Science, Technology, and Applications explores issues of science and technology of micro/nano-scale photonics and integration for broad-scale and chip-scale Very Large Scale Integration photonics. This book is a game-changer in the sense that it is quite possibly the first to focus on "VLSI Photonics". Very little effort has been made to develop integration technologies for micro/nanoscale photonic devices and applications, so this reference is an important and necessary early-stage pe

  10. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented to measure the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge sensitive amplifiers. The prototype of the processing unit is implemented using commercial field programmable gate array circuit boards. The FPGA based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicated that the FPGA-based signal processing unit has good linearity and is flexible for parameter adjustment under various experimental conditions (authors)

  11. FILTRES: a 128 channels VLSI mixed front-end readout electronic development for microstrip detectors

    International Nuclear Information System (INIS)

    Anstotz, F.; Hu, Y.; Michel, J.; Sohler, J.L.; Lachartre, D.

    1998-01-01

    We present a VLSI digital-analog readout electronic chain for silicon microstrip detectors. The characteristics of this circuit have been optimized for the high resolution tracker of the CERN CMS experiment. This chip consists of 128 channels at 50 μm pitch. Each channel is composed of a charge amplifier, a CR-RC shaper, an analog memory, an analog processor, and an output FIFO read out serially by a multiplexer. This chip has been processed in the radiation hard technology DMILL. This paper describes the architecture of the circuit and presents test results of the 128 channel full chain chip. (orig.)

  12. Characterization of diffusivity based on spherical array processing

    DEFF Research Database (Denmark)

    Nolan, Melanie; Fernandez Grande, Efren; Jeong, Cheol-Ho

    2015-01-01

    -dimensional domain and consequently examine some of its fundamental properties: spatial distribution of sound pressure levels, particle velocity and sound intensity. The study allows for visualization of the intensity field inside a reverberant space, and successfully illustrates the behavior of the sound field... in such an environment. This initial investigation shows the validity of the suggested processing and reveals interesting perspectives for future work. Ultimately, the aim is to define a proper and reliable measure of the diffuse sound field conditions in a reverberation chamber, with the prospect of improving...

  13. A multi-step electrochemical etching process for a three-dimensional micro probe array

    International Nuclear Information System (INIS)

    Kim, Yoonji; Youn, Sechan; Cho, Young-Ho; Park, HoJoon; Chang, Byeung Gyu; Oh, Yong Soo

    2011-01-01

    We present a simple, fast, and cost-effective process for three-dimensional (3D) micro probe array fabrication using multi-step electrochemical metal foil etching. Compared to the previous electroplating (add-on) process, the present electrochemical (subtractive) process results in well-controlled material properties of the metallic microstructures. In the experimental study, we describe the single-step and multi-step electrochemical aluminum foil etching processes. In the single-step process, the depth etch rate and the bias etch rate of an aluminum foil have been measured as 1.50 ± 0.10 and 0.77 ± 0.03 µm min⁻¹, respectively. On the basis of the single-step process results, we have designed and performed the two-step electrochemical etching process for the 3D micro probe array fabrication. The fabricated 3D micro probe array shows vertical and lateral fabrication errors of 15.5 ± 5.8% and 3.3 ± 0.9%, respectively, with a surface roughness of 37.4 ± 9.6 nm. The contact force and the contact resistance of the 3D micro probe array have been measured to be 24.30 ± 0.98 mN and 2.27 ± 0.11 Ω, respectively, for an overdrive of 49.12 ± 1.25 µm.

  14. DBPM signal processing with field programmable gate arrays

    International Nuclear Information System (INIS)

    Lai Longwei; Yi Xing; Zhang Ning; Yang Guisen; Wang Baopeng; Xiong Yun; Leng Yongbin; Yan Yingbing

    2011-01-01

    DBPM system performance is determined by the design and implementation of the beam position signal processing algorithm. In order to develop the system, a beam position signal processing algorithm was implemented on an FPGA. The hardware is a PMC board ICS-1554A-002 (GE Corp.) with an FPGA chip XC5VSX95T. This paper adopts quadrature frequency mixing to down-convert the high frequency signal to baseband. Different from the conventional method, the mixing is implemented by the CORDIC algorithm. The algorithm theory and implementation details are discussed in this paper. As the board contains no front-end gain controller, this paper introduces a published patent-pending technique that has been adopted to realize this function in digital logic. The whole design is implemented in the VHDL language. An on-line evaluation has been carried out on the SSRF (Shanghai Synchrotron Radiation Facility) storage ring. Results indicate that the system turn-by-turn data can measure the real beam movement accurately, and the system resolution is 1.1 μm. (authors)
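    As a hedged, floating-point illustration of CORDIC-based quadrature mixing (the paper's implementation is fixed-point VHDL on the FPGA and is not reproduced here), the sketch below rotates each input sample by the local-oscillator phase in CORDIC rotation mode, producing baseband I/Q without explicit sine/cosine multiplications. The carrier frequency and iteration count are arbitrary assumptions.

```cpp
// Hedged sketch: CORDIC in rotation mode used as a quadrature mixer. Each
// input sample is rotated by the LO phase -w*n, yielding baseband I/Q without
// sin/cos multipliers, which is why CORDIC suits FPGA logic.
#include <cmath>
#include <cstdio>

// Rotate (x, y) by angle z (radians) with an iterative CORDIC; |z| <= pi/2.
static void cordicRotate(double& x, double& y, double z, int iters = 16) {
    for (int k = 0; k < iters; ++k) {
        const double sigma = (z >= 0.0) ? 1.0 : -1.0;
        const double factor = std::ldexp(1.0, -k);          // 2^-k
        const double xn = x - sigma * y * factor;
        const double yn = y + sigma * x * factor;
        z -= sigma * std::atan(factor);
        x = xn; y = yn;
    }
    const double K = 0.6072529350088813;  // gain compensation for the rotations
    x *= K; y *= K;
}

int main() {
    const double PI = 3.14159265358979323846;
    const double wCarrier = 2.0 * PI * 0.23;   // carrier in rad/sample (assumed)
    for (int n = 0; n < 8; ++n) {
        double i = std::cos(wCarrier * n);     // real input sample
        double q = 0.0;
        double z = -wCarrier * n;              // LO phase: multiply by exp(-j*w*n)

        // Wrap z into (-pi, pi], then pre-rotate by +/-pi/2 so CORDIC converges.
        z = std::remainder(z, 2.0 * PI);
        while (z > PI / 2.0)  { const double t = i; i = -q; q = t;  z -= PI / 2.0; }
        while (z < -PI / 2.0) { const double t = i; i = q;  q = -t; z += PI / 2.0; }

        cordicRotate(i, q, z);
        std::printf("n=%d  I=% .3f  Q=% .3f\n", n, i, q);
    }
    return 0;
}
```

    In a real DBPM chain, the residual double-frequency term left by the mixer would be removed by the subsequent low-pass/decimation filtering before the position calculation.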

  15. High-resolution imaging methods in array signal processing

    DEFF Research Database (Denmark)

    Xenaki, Angeliki

    in active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak. The submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values, which cause weak scattering... of the incident acoustic energy. A high-frequency active sonar is selected to insonify the medium and receive the backscattered waves. High-frequency acoustic methods can both overcome the optical opacity of water (unlike methods based on electromagnetic waves) and resolve the small-scale structure... of the submerged oil field (unlike low-frequency acoustic methods). The study shows that high-frequency acoustic methods are suitable not only for large-scale localization of the oil contamination in the water column but also for statistical characterization of the submerged oil field through inference

  16. Data array acquisition and joint processing in local plasma spectroscopy

    International Nuclear Information System (INIS)

    Ekimov, K.; Luizova, L.; Soloviev, A.; Khakhaev, A.

    2005-01-01

    The setup and software for optical emission spectroscopy with spatial and temporal resolution were developed. The automated installation includes LabView compatible instrument interfaces. The algorithm of joint data processing is based on the principal component method; it improves the stability of the results of the radial transform and eliminates instrument distortion in the presence of noise. The system is applied to diagnostics of the arc discharge in mercury vapors with the addition of thallium. The distributions of ground state and excited mercury atoms, excited thallium atoms and electron density over the arc cross section have been measured on the basis of analysis of spectral line shapes. The Saha balance between electron and high-lying excited state densities was checked. An unexpected broadening of some thallium spectral lines was found

  17. Numerical Simulation of the Diffusion Processes in Nanoelectrode Arrays Using an Axial Neighbor Symmetry Approximation.

    Science.gov (United States)

    Peinetti, Ana Sol; Gilardoni, Rodrigo S; Mizrahi, Martín; Requejo, Felix G; González, Graciela A; Battaglini, Fernando

    2016-06-07

    Nanoelectrode arrays have introduced a complete new battery of devices with fascinating electrocatalytic, sensitivity, and selectivity properties. To understand and predict the electrochemical response of these arrays, a theoretical framework is needed. Cyclic voltammetry is a well-fitted experimental technique to understand the diffusion and kinetic processes involved. Previous works describing microelectrode arrays have exploited the interelectrode distance to simulate their behavior as the summation of individual electrodes. This approach becomes limited when the size of the electrodes decreases to the nanometer scale, due to their strong radial effect with the consequent overlapping of the diffusional fields. In this work, we present a computational model able to simulate the electrochemical behavior of arrays working either as the summation of individual electrodes or being affected by the overlapping of the diffusional fields, without previous considerations. Our computational model relies on dividing a regular electrode array into cells. In each of them, there is a central electrode surrounded by neighbor electrodes; these neighbor electrodes are transformed into a ring maintaining the same active electrode area as the summation of the closest neighbor electrodes. Using this axial neighbor symmetry approximation, the problem acquires cylindrical symmetry, being applicable to any diffusion pattern. The model is validated against micro- and nanoelectrode arrays, showing its ability to predict their behavior and therefore to be used as a designing tool.
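    The neighbor-to-ring step can be illustrated with a small geometric calculation. The sketch below (illustrative values only; the paper may fix the ring's position or width differently) replaces m neighbor discs of radius rE, located at distance L from the central electrode, with a concentric ring centered at L whose area equals the summed disc area.

```cpp
// Hedged sketch of the axial-neighbor-symmetry idea: replace the m nearest
// neighbor electrodes (discs of radius rE at distance L from the central one)
// by a concentric ring of equal total active area. Geometry values are
// illustrative, not taken from the paper.
#include <cstdio>

int main() {
    const double PI = 3.141592653589793;
    const int    m  = 6;        // nearest neighbors in a hexagonal array (assumed)
    const double rE = 50e-9;    // electrode radius [m] (assumed)
    const double L  = 400e-9;   // center-to-neighbor distance [m] (assumed)

    // Ring centered at radius L with half-width w chosen so that
    // area(ring) = pi*(L+w)^2 - pi*(L-w)^2 = 4*pi*L*w equals m*pi*rE^2.
    const double w = m * rE * rE / (4.0 * L);
    const double rInner = L - w, rOuter = L + w;
    const double ringArea = PI * (rOuter * rOuter - rInner * rInner);

    std::printf("ring: inner %.1f nm, outer %.1f nm\n", rInner * 1e9, rOuter * 1e9);
    std::printf("area check: ring %.3e m^2 vs discs %.3e m^2\n",
                ringArea, m * PI * rE * rE);
    return 0;
}
```

    With the equal-area constraint satisfied, the cell can be treated with cylindrical symmetry, which is what lets the same mesh handle both the isolated-electrode and overlapping-diffusion regimes.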

  18. Artificial immune system algorithm in VLSI circuit configuration

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving many constraint optimization problems, anomaly detection, and pattern recognition. This paper discusses the implementation and performance of the artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in the VLSI circuit configuration. In this paper, we compare the artificial immune system (AIS) algorithm (HNN-3SATAIS) with the brute force algorithm incorporated with a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as a platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.

  19. Numerical analysis of electromigration in thin film VLSI interconnections

    NARCIS (Netherlands)

    Petrescu, V.; Mouthaan, A.J.; Schoenmaker, W.; Angelescu, S.; Vissarion, R.; Dima, G.; Wallinga, Hans; Profirescu, M.D.

    1995-01-01

    Due to the continuing downscaling of the dimensions in VLSI circuits, electromigration is becoming a serious reliability hazard. A software tool based on finite element analysis has been developed to solve the two partial differential equations of the two particle vacancy/imperfection model.

  20. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855) Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  1. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    Science.gov (United States)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  2. Assessment of Measurement Distortions in GNSS Antenna Array Space-Time Processing

    Directory of Open Access Journals (Sweden)

    Thyagaraja Marathe

    2016-01-01

    Full Text Available Antenna array processing techniques are studied in GNSS as effective tools to mitigate interference in spatial and spatiotemporal domains. However, without specific considerations, the array processing results in biases and distortions in the cross-ambiguity function (CAF of the ranging codes. In space-time processing (STP the CAF misshaping can happen due to the combined effect of space-time processing and the unintentional signal attenuation by filtering. This paper focuses on characterizing these degradations for different controlled signal scenarios and for live data from an antenna array. The antenna array simulation method introduced in this paper enables one to perform accurate analyses in the field of STP. The effects of relative placement of the interference source with respect to the desired signal direction are shown using overall measurement errors and profile of the signal strength. Analyses of contributions from each source of distortion are conducted individually and collectively. Effects of distortions on GNSS pseudorange errors and position errors are compared for blind, semi-distortionless, and distortionless beamforming methods. The results from characterization can be useful for designing low distortion filters that are especially important for high accuracy GNSS applications in challenging environments.

  3. Solution processed bismuth sulfide nanowire array core/silver shuffle shell solar cells

    NARCIS (Netherlands)

    Cao, Y.; Bernechea, M.; Maclachlan, A.; Zardetto, V.; Creatore, M.; Haque, S.A.; Konstantatos, G.

    2015-01-01

    Low bandgap inorganic semiconductor nanowires have served as building blocks in solution processed solar cells to improve their power conversion capacity and reduce fabrication cost. In this work, we first reported bismuth sulfide nanowire arrays grown from colloidal seeds on a transparent

  4. Increasing the specificity and function of DNA microarrays by processing arrays at different stringencies

    DEFF Research Database (Denmark)

    Dufva, Martin; Petersen, Jesper; Poulsen, Lena

    2009-01-01

    DNA microarrays have for a decade been the only platform for genome-wide analysis and have provided a wealth of information about living organisms. DNA microarrays are processed today under one condition only, which puts large demands on assay development because all probes on the array need to f...

  5. Solution-processed single-wall carbon nanotube transistor arrays for wearable display backplanes

    Directory of Open Access Journals (Sweden)

    Byeong-Cheol Kang

    2018-01-01

    Full Text Available In this paper, we demonstrate solution-processed single-wall carbon nanotube thin-film transistor (SWCNT-TFT) arrays with polymeric gate dielectrics on polymeric substrates for wearable display backplanes, which can be directly attached to the human body. The optimized SWCNT-TFTs without any buffer layer on flexible substrates exhibit a linear field-effect mobility of 1.5 cm²/V·s and a threshold voltage of around 0 V. The statistical plot of the key device metrics extracted from 35 SWCNT-TFTs, which were fabricated in different batches at different times, conclusively supports that we successfully demonstrated high-performance solution-processed SWCNT-TFT arrays, which demand excellent uniformity in device performance. We also investigate the operational stability of wearable SWCNT-TFT arrays against an applied strain of up to 40%, which is essential given the harsh degree of strain on the human body. We believe that the demonstration of flexible SWCNT-TFT arrays, fabricated by an all-solution process except for the deposition of the metal electrodes, at a process temperature below 130 °C, can open up new routes for wearable display backplanes.

  6. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    Science.gov (United States)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced 'cycle') is a novel SQL-based array query language for scientific applications with both tables and arrays as first-class citizens. SciQL lowers the entrance fee of adopting a relational DBMS (RDBMS) in scientific domains, because it includes functionality often only found in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope; in particular, we show how the LOFAR light-curve database of all detected sources can be constructed by correlating sources across the spatial, frequency, time and polarisation domains.

  7. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. One should first design a computational structure which is well suited for a wide range of vision tasks and then develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis, the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  8. Frequency Diverse Array Radar Signal Processing via Space-Range-Doppler Focus (SRDF) Method

    Directory of Open Access Journals (Sweden)

    Chen Xiaolong

    2018-04-01

    Full Text Available To meet the urgent demand for low-observable moving target detection in complex environments, a novel Frequency Diverse Array (FDA) radar signal processing method based on Space-Range-Doppler Focusing (SRDF) is proposed in this paper. The current development status of FDA radar, the design of the array structure, beamforming, and joint estimation of distance and angle are systematically reviewed. The extra degrees of freedom provided by FDA radar are fully utilized, which include the Degrees Of Freedom (DOFs) of the transmitted waveform, the location of array elements, the correlation of beam azimuth and distance, and the long dwell time, which are also the DOFs in the joint spatial (angle, distance) and frequency (Doppler) dimensions. Simulation results show that the proposed method has the potential to improve target detection and parameter estimation for weak moving targets in complex environments and has broad application prospects in clutter and interference suppression, moving target refinement, etc.
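    The range-angle coupling that gives the FDA its extra degrees of freedom can be seen from the first-order transmit array factor. The sketch below evaluates |AF| at a few (angle, range) pairs for an assumed 16-element array with a small per-element frequency increment; all parameter values are illustrative and higher-order phase terms are neglected.

```cpp
// Hedged sketch of the basic FDA property: with a small frequency increment df
// across elements, the transmit array factor depends on range as well as angle
// (a conventional phased array's does not). Parameters are illustrative.
#include <cmath>
#include <complex>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double c  = 3.0e8;
    const double f0 = 10.0e9;          // carrier [Hz] (assumed)
    const double df = 30.0e3;          // frequency increment per element [Hz] (assumed)
    const int    N  = 16;              // number of elements
    const double d  = c / f0 / 2.0;    // half-wavelength spacing

    const double thetaDeg[] = {0.0, 20.0};
    const double rangeKm[]  = {20.0, 21.0, 22.5};

    for (double th : thetaDeg) {
        for (double Rkm : rangeKm) {
            const double theta = th * PI / 180.0;
            const double R     = Rkm * 1e3;
            std::complex<double> af(0.0, 0.0);
            for (int n = 0; n < N; ++n) {
                // First-order FDA phase: angle term (f0, d) plus range term (n*df).
                const double phase =
                    2.0 * PI * n * (f0 * d * std::sin(theta) / c - df * R / c);
                af += std::polar(1.0, phase);
            }
            std::printf("theta=%5.1f deg  R=%6.1f km  |AF|=%6.2f\n",
                        th, Rkm, std::abs(af));
        }
    }
    return 0;
}
```

    At a fixed angle the magnitude changes with range, which is the coupling the SRDF method exploits when focusing jointly in the spatial and Doppler dimensions.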

  9. Processing and display of three-dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Antoine, M.; Darier, P.

    1986-04-01

    The analysis of three-dimensional (3-D) arrays of numerical data from medical, industrial or scientific imaging, by synthetic generation of realistic images, has been widely developed. Octree encoding, which organizes the volume data in a hierarchical tree structure, has some interesting features for the processing of 3-D arrays of data. The octree encoding method, based on the recursive subdivision of a 3-D array, is an extension of quadtree encoding in the two-dimensional plane. We have developed a software package to validate the basic octree encoding methodology for some manipulation and display operations on volume data. This contribution introduces the technique we have used (called the ''overlay technique'') to perform the projection of an octree onto a quadtree-encoded image plane. The application of this technique to hidden surface display is presented [fr
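    As a hedged illustration of the recursive subdivision that octree encoding is based on (not the authors' software, and using an arbitrary toy volume), the sketch below builds an octree from a small cubic array of voxel values: homogeneous octants become leaves, mixed octants are split into eight children.

```cpp
// Hedged sketch of octree encoding by recursive subdivision: uniform regions
// of a cubic 3-D array become leaves, mixed regions get eight children.
#include <cstdio>
#include <memory>
#include <vector>

struct Node {
    bool leaf = false;
    unsigned char value = 0;                 // meaningful only for leaves
    std::unique_ptr<Node> child[8];
};

// Volume stored as a flat vector; side must be a power of two.
static unsigned char at(const std::vector<unsigned char>& v, int side,
                        int x, int y, int z) {
    return v[(z * side + y) * side + x];
}

static std::unique_ptr<Node> build(const std::vector<unsigned char>& v, int side,
                                   int x0, int y0, int z0, int size) {
    auto node = std::make_unique<Node>();
    const unsigned char first = at(v, side, x0, y0, z0);
    bool uniform = true;
    for (int z = z0; z < z0 + size && uniform; ++z)
        for (int y = y0; y < y0 + size && uniform; ++y)
            for (int x = x0; x < x0 + size && uniform; ++x)
                uniform = (at(v, side, x, y, z) == first);

    if (uniform || size == 1) {              // homogeneous octant -> leaf
        node->leaf = true;
        node->value = first;
        return node;
    }
    const int h = size / 2;                  // subdivide into eight octants
    int i = 0;
    for (int dz = 0; dz < 2; ++dz)
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx)
                node->child[i++] = build(v, side, x0 + dx * h, y0 + dy * h,
                                         z0 + dz * h, h);
    return node;
}

static int countNodes(const Node* n) {
    if (!n) return 0;
    int total = 1;
    for (const auto& c : n->child) total += countNodes(c.get());
    return total;
}

int main() {
    const int side = 16;                       // 16^3 toy volume
    std::vector<unsigned char> vol(side * side * side, 0);
    // Mark a small 6x6x6 "object" inside the otherwise empty volume.
    for (int z = 2; z < 8; ++z)
        for (int y = 2; y < 8; ++y)
            for (int x = 2; x < 8; ++x)
                vol[(z * side + y) * side + x] = 1;

    auto root = build(vol, side, 0, 0, 0, side);
    std::printf("octree nodes: %d (vs %d voxels)\n", countNodes(root.get()),
                side * side * side);
    return 0;
}
```

    For volumes dominated by large homogeneous regions, the node count is typically far smaller than the voxel count, which is what makes the structure attractive for the manipulation, projection and display operations discussed above.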

  10. Periodic Application of Concurrent Error Detection in Processor Array Architectures. PhD. Thesis -

    Science.gov (United States)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.

  11. Seismic array processing and computational infrastructure for improved monitoring of Alaskan and Aleutian seismicity and volcanoes

    Science.gov (United States)

    Lindquist, Kent Gordon

    We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield M_L calibration curves (log A_0) that differ from the Californian Richter magnitude. The new curve is 0.2 M_L units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 M_L. A curve for deep events differs by 0.6 M_L at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period, Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an M_L 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion

  12. Pursuit, Avoidance, and Cohesion in Flight: Multi-Purpose Control Laws and Neuromorphic VLSI

    Science.gov (United States)

    2010-10-01

    spatial navigation in mammals. We have designed, fabricated, and are now testing a neuromorphic VLSI chip that implements a spike-based, attractor... implementations (custom neuromorphic VLSI and robotics) we will apply important practical constraints that can lead to deeper insight into how and why efficient

  13. Structural control of ultra-fine CoPt nanodot arrays via electrodeposition process

    Energy Technology Data Exchange (ETDEWEB)

    Wodarz, Siggi [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan); Hasegawa, Takashi; Ishio, Shunji [Department of Materials Science, Akita University, Akita City 010-8502 (Japan); Homma, Takayuki, E-mail: t.homma@waseda.jp [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan)

    2017-05-15

    CoPt nanodot arrays were fabricated by combining electrodeposition and electron beam lithography (EBL) for the use of bit-patterned media (BPM). To achieve precise control of deposition uniformity and coercivity of the CoPt nanodot arrays, their crystal structure and magnetic properties were controlled by controlling the diffusion state of metal ions from the initial deposition stage with the application of bath agitation. Following bath agitation, the composition gradient of the CoPt alloy with thickness was mitigated to have a near-ideal alloy composition of Co:Pt =80:20, which induces epitaxial-like growth from Ru substrate, thus resulting in the improvement of the crystal orientation of the hcp (002) structure from its initial deposition stages. Furthermore, the cross-sectional transmission electron microscope (TEM) analysis of the nanodots deposited with bath agitation showed CoPt growth along its c-axis oriented in the perpendicular direction, having uniform lattice fringes on the hcp (002) plane from the Ru underlayer interface, which is a significant factor to induce perpendicular magnetic anisotropy. Magnetic characterization of the CoPt nanodot arrays showed increase in the perpendicular coercivity and squareness of the hysteresis loops from 2.0 kOe and 0.64 (without agitation) to 4.0 kOe and 0.87 with bath agitation. Based on the detailed characterization of nanodot arrays, the precise crystal structure control of the nanodot arrays with ultra-high recording density by electrochemical process was successfully demonstrated. - Highlights: • Ultra-fine CoPt nanodot arrays were fabricated by electrodeposition. • Crystallinity of hcp (002) was improved with uniform composition formation. • Uniform formation of hcp lattices leads to an increase in the coercivity.

  14. An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits

    Science.gov (United States)

    Corliss, Walter F., II

    1989-03-01

    The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, that was previously designed, was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented and the chip was fabricated by MOSIS. This fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full custom VLSI chip, designed at NPS, to be tested with the NPS digital analysis system, the Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipment, is included as an appendix.

  15. Processing and display of medical three dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Darier, P.

    1985-01-01

    Imaging modalities such as X-ray Computerized Tomography (CT), Nuclear Medicine and Nuclear Magnetic Resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging. We are currently developing experimental software that allows the analysis, processing and display of 3-D arrays of numerical data organized in a hierarchical data structure using the OCTREE (octal-tree) encoding technique, based on a recursive subdivision of the data volume. The OCTREE encoding structure is an extension of the two-dimensional tree structure, the quadtree, developed for image processing applications. Before any operations, the 3-D array of data is OCTREE encoded; thereafter all processing is concerned with the encoded object. The elementary process for the elaboration of a synthetic image includes conditioning the volume (volume partition, i.e. numerical and spatial segmentation, choice of the view-point...) and two-dimensional display, either by spatial integration (radiography) or by shaded surface representation. This paper introduces these different concepts and specifies the advantages of OCTREE encoding techniques in realizing these operations. Furthermore, the application of the OCTREE encoding scheme to the display of 3-D medical volumes generated from multiple CT scans is presented

  16. NeuroSeek dual-color image processing infrared focal plane array

    Science.gov (United States)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including dual-color affordable focal planes, on-focal-plane-array biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor Midwave IR/Longwave IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing integrated circuit, a very large scale integration chip developed under this effort, will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by the application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  17. Processors and systems (picture processing)

    Energy Technology Data Exchange (ETDEWEB)

    Gemmar, P

    1983-01-01

    Automatic picture processing requires high performance computers and high transmission capacities in the processor units. The author examines the possibilities of operating processors in parallel in order to accelerate the processing of pictures. He therefore discusses a number of available processors and systems for picture processing and illustrates their capacities for special types of picture processing. He stresses the fact that the amount of storage required for picture processing is exceptionally high. The author concludes that it is as yet difficult to decide whether very large groups of simple processors or highly complex multiprocessor systems will provide the best solution. Both methods will be aided by the development of VLSI. New solutions have already been offered (systolic arrays and 3-d processing structures) but they also are subject to losses caused by inherently parallel algorithms. Greater efforts must be made to produce suitable software for multiprocessor systems. Some possibilities for future picture processing systems are discussed. 33 references.

  18. Effect of Source, Surfactant, and Deposition Process on Electronic Properties of Nanotube Arrays

    Directory of Open Access Journals (Sweden)

    Dheeraj Jain

    2011-01-01

    Full Text Available The electronic properties of arrays of carbon nanotubes from several different sources, differing in the manufacturing process used and in average properties such as length, diameter, and chirality, are studied. We used several common surfactants to disperse each of these nanotubes and then deposited them on Si wafers from their aqueous solutions using dielectrophoresis. Transport measurements were performed to compare and determine the effect of different surfactants, deposition processes, and synthesis processes on nanotubes synthesized using CVD, CoMoCAT, laser ablation, and HiPCO.

  19. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers: when monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for

  20. Lightweight solar array blanket tooling, laser welding and cover process technology

    Science.gov (United States)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  1. Advanced symbolic analysis for VLSI systems methods and applications

    CERN Document Server

    Shi, Guoyong; Tlelo Cuautle, Esteban

    2014-01-01

    This book provides comprehensive coverage of the recent advances in symbolic analysis techniques for design automation of nanometer VLSI systems. The presentation is organized in parts of fundamentals, basic implementation methods and applications for VLSI design. Topics emphasized include statistical timing and crosstalk analysis, statistical and parallel analysis, performance bound analysis and behavioral modeling for analog integrated circuits. Among the recent advances, the Binary Decision Diagram (BDD) based approaches are studied in depth. The BDD-based hierarchical symbolic analysis approaches have essentially broken the analog circuit size barrier. In particular, this book • Provides an overview of classical symbolic analysis methods and a comprehensive presentation on the modern BDD-based symbolic analysis techniques; • Describes detailed implementation strategies for BDD-based algorithms, including the principles of zero-suppression, variable ordering and canonical reduction; • Int...

  2. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  3. Emerging Applications for High K Materials in VLSI Technology

    Science.gov (United States)

    Clark, Robert D.

    2014-01-01

    The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized, along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are also described, for implementation at the 10 nm node and beyond. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing. PMID:28788599

  4. Emerging Applications for High K Materials in VLSI Technology

    Directory of Open Access Journals (Sweden)

    Robert D. Clark

    2014-04-01

    Full Text Available The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized, along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are also described, for implementation at the 10 nm node and beyond. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing.

  5. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    Science.gov (United States)

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of must of the Savatiano grape variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring, both because of different storage conditions of the musts and because of a deliberate increase of the acetic acid content of one of the musts during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained for standard ethanol solutions, either pure or contaminated with controlled concentrations of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could potentially be applied during wine production as an auxiliary qualitative control instrument. PMID:25184490
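    As a rough illustration of the PCA step, the sketch below projects synthetic array-response vectors onto two principal components; the channel count, response model and contamination effect are invented stand-ins for the daily eight-channel chemocapacitor readings, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for daily responses of an 8-channel chemocapacitor array:
# "normal" fermentation days versus days where an acetic-acid spike shifts the
# response of two (assumed) sensitive channels.
rng = np.random.default_rng(0)
normal = rng.normal(loc=1.0, scale=0.05, size=(15, 8))
spiked = rng.normal(loc=1.0, scale=0.05, size=(15, 8))
spiked[:, [2, 5]] += 0.4

X = np.vstack([normal, spiked])
scores = PCA(n_components=2).fit_transform(X)
# The two groups separate along the first principal component.
print("PC1 mean, normal :", scores[:15, 0].mean())
print("PC1 mean, spiked :", scores[15:, 0].mean())
```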

  6. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  7. The AMchip: A VLSI associative memory for track finding

    International Nuclear Information System (INIS)

    Morsani, F.; Galeotti, S.; Passuello, D.; Amendolia, S.R.; Ristori, L.; Turini, N.

    1992-01-01

    An associative memory to be used for super-fast track finding in future high energy physics experiments has been implemented on silicon as a full-custom CMOS VLSI chip (the AMchip). The first prototype has been designed and successfully tested at INFN in Pisa. It is implemented in 1.6 μm, double metal, silicon gate CMOS technology and contains about 140 000 MOS transistors on a 1 × 1 cm² silicon chip. (orig.)
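    To make the matching principle concrete, the following behavioral sketch (a software illustration only, not the AMchip circuit; the pattern format and road values are invented for the example) stores precomputed "roads" of one coarse hit position per detector layer and reports which roads have received hits on every layer, in whatever order the hits arrive.

```python
# Behavioral sketch of associative-memory track finding: each stored pattern
# lists one coarse hit position ("superstrip") per detector layer, and a
# pattern "fires" once every layer has seen a matching hit.
class AssociativeMemory:
    def __init__(self, patterns):
        self.patterns = patterns                      # list of per-layer tuples
        self.matched = [set() for _ in patterns]      # layers matched so far

    def feed_hit(self, layer, superstrip):
        # In silicon all patterns compare the hit simultaneously; here we loop.
        for i, pat in enumerate(self.patterns):
            if pat[layer] == superstrip:
                self.matched[i].add(layer)

    def fired(self, n_layers):
        return [i for i, m in enumerate(self.matched) if len(m) == n_layers]

patterns = [(3, 5, 8, 12), (3, 6, 9, 13), (4, 5, 8, 11)]   # illustrative roads
am = AssociativeMemory(patterns)
for layer, strip in [(0, 3), (2, 8), (1, 5), (3, 12), (1, 6)]:
    am.feed_hit(layer, strip)
print(am.fired(n_layers=4))    # -> [0]: the first road saw hits on all 4 layers
```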

  8. Drift chamber tracking with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-10-01

    We have tested a commercial analog VLSI neural network chip for finding in real time the intercept and slope of charged particles traversing a drift chamber. Voltages proportional to the drift times were input to the Intel ETANN chip and the outputs were recorded and later compared off line to conventional track fits. We discuss the chamber and test setup, the chip specifications, and results of recent tests, and briefly discuss possible applications in high energy physics detector triggers.

  9. Using Software Technology to Specify Abstract Interfaces in VLSI Design.

    Science.gov (United States)

    1985-01-01

    with the complexity levels inherent in VLSI design, in that they can capitalize on their foundations in discrete mathematics and the theory of...basis, rather than globally. Such a partitioning of module semantics makes the specification easier to construct and verify intellectually; it also...access function definitions. A standard language improves executability characteristics by capitalizing on portable, optimized system software developed

  10. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    Energy Technology Data Exchange (ETDEWEB)

    Warburton, W.K., E-mail: bill@xia.com [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Harris, J.T. [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Friedrich, S. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100–2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays – currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I–V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.
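    The operating-point scan mentioned above can be summarized in a few lines of control logic. The sketch below is a generic illustration of that idea; the instrument-access functions, voltage range and step count are placeholders, not XIA's electronics or API: sweep the offset voltage, record current and noise at each point, and pick the bias with the lowest noise.

```python
import numpy as np

def scan_operating_point(set_bias, measure_current, measure_noise,
                         v_min=-0.4e-3, v_max=0.4e-3, n_steps=81):
    """Sweep the STJ offset voltage, record the I-V curve and the noise at
    each point, and return the bias with the lowest measured noise.
    `set_bias`, `measure_current` and `measure_noise` stand in for whatever
    instrument interface is actually available."""
    biases = np.linspace(v_min, v_max, n_steps)
    iv, noise = [], []
    for v in biases:
        set_bias(v)
        iv.append(measure_current())
        noise.append(measure_noise())
    best = int(np.argmin(noise))
    return biases[best], np.array(iv), np.array(noise)

# Usage with purely synthetic stand-in measurement functions:
state = {"v": 0.0}
set_bias = lambda v: state.update(v=v)
measure_current = lambda: state["v"] / 50.0                       # ohmic stand-in
measure_noise = lambda: 1e-9 + 5e-9 * abs(state["v"] - 0.1e-3)    # minimum near 0.1 mV
v_opt, _, _ = scan_operating_point(set_bias, measure_current, measure_noise)
print(f"selected operating point: {v_opt * 1e3:.3f} mV")
```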

  11. vPELS: An E-Learning Social Environment for VLSI Design with Content Security Using DRM

    Science.gov (United States)

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2014-01-01

    This article provides a proposal for a personal e-learning system (vPELS [where "v" stands for VLSI: very large scale integrated circuit]) architecture in the context of a social network environment for VLSI Design. The main objective of vPELS is to develop individual skills on a specific subject--say, VLSI--and share resources with peers.…

  12. Phased arrays techniques and split spectrum processing for inspection of thick titanium casting components

    International Nuclear Information System (INIS)

    Banchet, J.; Chahbaz, A.; Sicard, R.; Zellouf, D.E.

    2003-01-01

    In aircraft structures, titanium parts and engine members are critical structural components, and their inspection is crucial. However, these structures are very difficult to inspect ultrasonically because of their large grain structure, which increases noise drastically. In this work, phased array inspection setups were developed to detect small defects such as simulated inclusions and porosity contained in thick titanium casting blocks, which are frequently used in the aerospace industry. A Cut Spectrum Processing (CSP)-based algorithm was then implemented on the acquired data by employing a set of parallel bandpass filters with different center frequencies. This process led to a substantial improvement of the signal-to-noise ratio and thus of detectability.

  13. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single photon detection have been restricted due to the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free, ready to use, and their randomness is verified by using the National Institute of Standards and Technology statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbits/s. (paper)
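    The comparison-based extraction can be illustrated with a toy model. The sketch below assumes a von Neumann-style pairing of each pixel's click/no-click response to consecutive pulses; this is an interpretation chosen for the example rather than necessarily the authors' exact extraction rule, and the array size and click probability are invented.

```python
import numpy as np

def extract_bits(clicks):
    """clicks: boolean array of shape (n_pulses, n_pixels), True if an APD
    pixel fired on a given optical pulse.  For every pixel, compare its
    response on consecutive pulse pairs: (no-click, click) -> 1,
    (click, no-click) -> 0, identical responses are discarded.  This
    von Neumann-style pairing removes bias from unequal click probabilities."""
    a, b = clicks[0::2], clicks[1::2]           # responses to pulses 2k and 2k+1
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    keep = a != b                                # drop (0,0) and (1,1) pairs
    return b[keep].astype(np.uint8)              # bit = response to the later pulse

# Synthetic demonstration: a 32x32 APD array with 30% click probability per pulse
rng = np.random.default_rng(1)
clicks = rng.random((10_000, 32 * 32)) < 0.3
bits = extract_bits(clicks)
print(len(bits), "bits, mean =", bits.mean())    # mean should be close to 0.5
```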

  14. Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes

    Science.gov (United States)

    Huang, Shaoming

    2003-06-01

    An effective way to fabricate large-area three-dimensional (3D) aligned CNT patterns based on pyrolysis of iron(II) phthalocyanine (FePc) by a two-step process is reported. The controllable generation of different lengths and the selective growth of the aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the basis for generating such 3D aligned CNT architectures. By controlling experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated on a large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied to developing novel nanotube-based devices.

  15. Fabrication of Aligned Polyaniline Nanofiber Array via a Facile Wet Chemical Process.

    Science.gov (United States)

    Sun, Qunhui; Bi, Wu; Fuller, Thomas F; Ding, Yong; Deng, Yulin

    2009-06-17

    In this work, we demonstrate for the first time a template-free approach to synthesize an aligned polyaniline nanofiber (PN) array on a passivated gold (Au) substrate via a facile wet chemical process. The Au surface was first modified using 4-aminothiophenol (4-ATP) to afford the surface functionality, followed by an oxidation polymerization of aniline (AN) monomer in an aqueous medium using ammonium persulfate as the oxidant and tartaric acid as the doping agent. The results show that a vertically aligned PANI nanofiber array with individual fiber diameters of ca. 100 nm, heights of ca. 600 nm and a packing density of ca. 40 pieces·µm⁻² was synthesized. Copyright © 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Arrays of surface-normal electroabsorption modulators for the generation and signal processing of microwave photonics signals

    NARCIS (Netherlands)

    Noharet, Bertrand; Wang, Qin; Platt, Duncan; Junique, Stéphane; Marpaung, D.A.I.; Roeloffzen, C.G.H.

    2011-01-01

    The development of an array of 16 surface-normal electroabsorption modulators operating at 1550 nm is presented. The modulator array is dedicated to the generation and processing of microwave photonics signals, targeting a modulation bandwidth in excess of 5 GHz. The hybrid integration of the

  17. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    International Nuclear Information System (INIS)

    Tu, K T; Chung, C K

    2016-01-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold. (paper)

  18. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    Science.gov (United States)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  19. Las Vegas is better than determinism in VLSI and distributed computing

    DEFF Research Database (Denmark)

    Mehlhorn, Kurt; Schmidt, Erik Meineche

    1982-01-01

    In this paper we describe a new method for proving lower bounds on the complexity of VLSI computations and, more generally, distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (ac...

  20. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of these interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement to the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation more difficult via satellite; there have also been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and in calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  1. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2011-09-01

    Full Text Available Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve the generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves the generalization performance of SVDD. The improvement is shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.
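    The core construction, a base kernel rescaled by a quasiconformal factor, K̃(x, x') = D(x)D(x')K(x, x'), can be sketched with a precomputed-kernel one-class SVM standing in for SVDD (the two are closely related for RBF-type kernels). Everything below, including the choice of conformal factor, the synthetic "patch features" and the parameter values, is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def rbf_kernel(X, Y, gamma=1.0 / 16):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def conformal_factor(X, centers, tau=2.0):
    """Quasiconformal factor D(x).  The particular form (a sum of Gaussian
    bumps around chosen centers, plus 1) is a design choice for this sketch."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return 1.0 + np.exp(-d2 / (2.0 * tau**2)).sum(axis=1)

def qk_gram(X, Y, centers):
    # Quasiconformal kernel transformation: K~(x, y) = D(x) * D(y) * K(x, y)
    dx = conformal_factor(X, centers)
    dy = conformal_factor(Y, centers)
    return dx[:, None] * dy[None, :] * rbf_kernel(X, Y)

# Train a one-class model on defect-free "patch features" only, test on a mix.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(200, 8))             # normal patches
X_test = np.vstack([rng.normal(0.0, 1.0, size=(50, 8)),   # normal
                    rng.normal(4.0, 1.0, size=(50, 8))])  # defective
centers = X_train[rng.choice(len(X_train), size=10, replace=False)]

clf = OneClassSVM(kernel="precomputed", nu=0.05)
clf.fit(qk_gram(X_train, X_train, centers))
pred = clf.predict(qk_gram(X_test, X_train, centers))     # +1 normal, -1 defect
print("flagged as defective:", int((pred == -1).sum()), "of", len(X_test))
```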

  2. An SEU analysis approach for error propagation in digital VLSI CMOS ASICs

    International Nuclear Information System (INIS)

    Baze, M.P.; Bartholet, W.G.; Dao, T.A.; Buchner, S.

    1995-01-01

    A critical issue in the development of ASIC designs is the ability to achieve first-pass fabrication success. Unsuccessful fabrication runs have a serious impact on ASIC costs and schedules. The ability to predict an ASIC's radiation response prior to fabrication is therefore a key issue when designing ASICs for military and aerospace systems. This paper describes an analysis approach for calculating static bit error propagation in synchronous VLSI CMOS circuits, developed as an aid for predicting the SEU response of ASICs. The technique is intended for eventual application as an ASIC development simulation tool which can be used by circuit design engineers for performance evaluation during the pre-fabrication design process, in much the same way that logic and timing simulators are used

  3. Radiation hardness tests with a demonstrator preamplifier circuit manufactured in silicon on sapphire (SOS) VLSI technology

    International Nuclear Information System (INIS)

    Bingefors, N.; Ekeloef, T.; Eriksson, C.; Paulsson, M.; Moerk, G.; Sjoelund, A.

    1992-01-01

    Samples of the preamplifier circuit, as well as of separate n and p channel transistors of the type contained in the circuit, were irradiated with gammas from a ⁶⁰Co source up to an integrated dose of 3 Mrad (30 kGy). The VLSI manufacturing technology used is the SOS4 process of ABB Hafo. A first analysis of the tests shows that the performance of the amplifier remains practically unaffected by the radiation for total doses up to 1 Mrad. At higher doses up to 3 Mrad the circuit amplification factor decreases by a factor between 4 and 5, whereas the output noise level remains unchanged. It is argued that it may be possible to reduce the decrease in amplification factor in future by optimizing the amplifier circuit design further. (orig.)

  4. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    Science.gov (United States)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  5. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    International Nuclear Information System (INIS)

    Koohbor, M.; Soltanian, S.; Najafi, M.; Servati, P.

    2012-01-01

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co₁₋ₓZnₓ (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ∼35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays were investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of NWs can be significantly improved by appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on the magnetic properties of the NW arrays. The changes in magnetic property of NWs are rooted in a competition between shape anisotropy and

  6. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    Energy Technology Data Exchange (ETDEWEB)

    Koohbor, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Soltanian, S., E-mail: s.soltanian@gmail.com [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada); Najafi, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Physics, Hamadan University of Technology, Hamadan (Iran, Islamic Republic of); Servati, P. [Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada)

    2012-01-05

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co₁₋ₓZnₓ (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ∼35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays were investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of NWs can be significantly improved by appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on

  7. High-throughput fabrication of micrometer-sized compound parabolic mirror arrays by using parallel laser direct-write processing

    International Nuclear Information System (INIS)

    Yan, Wensheng; Gu, Min; Cumming, Benjamin P

    2015-01-01

    Micrometer-sized parabolic mirror arrays have significant applications in both light emitting diodes and solar cells. However, low fabrication throughput has been identified as a major obstacle for the mirror arrays towards large-scale applications, due to the serial nature of the conventional method. Here, the mirror arrays are fabricated by using a parallel laser direct-write process, which addresses this barrier. In addition, it is demonstrated that the parallel writing is able to fabricate complex arrays besides simple arrays and thus offers wider applications. Optical measurements show that each single mirror confines the full-width at half-maximum value to as small as 17.8 μm at the height of 150 μm whilst providing a transmittance of up to 68.3% at a wavelength of 633 nm, in good agreement with the calculated values. (paper)

  8. Point DCT VLSI Architecture for Emerging HEVC Standard

    OpenAIRE

    Ahmed, Ashfaq; Shahid, Muhammad Usman; Rehman, Ata ur

    2012-01-01

    This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4 × 4 up to 32 × 32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to gain speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into ...
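    The folding such architectures exploit rests on the even/odd symmetry of the DCT-II matrix: an N-point transform reuses an N/2-point transform for its even outputs and a small matrix for its odd outputs. The sketch below shows that decomposition in floating point for a 32-point block; the orthonormal DCT is used here for clarity, so HEVC's scaled integer approximation of the matrices is not reproduced.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix (rows = frequencies, columns = samples)."""
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N)) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C

def dct_even_odd(x):
    """N-point DCT via the even/odd (partial butterfly) decomposition used by
    folded hardware: even outputs come from an N/2-point DCT of the folded sum,
    odd outputs from a small matrix applied to the folded difference.
    Length must be a power of two."""
    N = len(x)
    if N == 1:
        return x.copy()
    half = N // 2
    s = x[:half] + x[::-1][:half]              # x[n] + x[N-1-n]
    d = x[:half] - x[::-1][:half]              # x[n] - x[N-1-n]
    X = np.empty(N)
    X[0::2] = dct_even_odd(s) / np.sqrt(2.0)   # reuse the N/2-point transform
    # Odd outputs: X[2k+1] = sqrt(2/N) * sum_n d[n] * cos(pi*(2n+1)*(2k+1)/(2N))
    n = np.arange(half)
    k = np.arange(half)
    M = np.cos(np.pi * (2 * n[None, :] + 1) * (2 * k[:, None] + 1) / (2 * N))
    X[1::2] = M @ d * np.sqrt(2.0 / N)
    return X

x = np.random.default_rng(0).standard_normal(32)          # one 32-point block column
print(np.allclose(dct_even_odd(x), dct_matrix(32) @ x))   # True
```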

  9. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  10. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available The Advanced Encryption Standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of the AES algorithm using the key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance compared to the existing VLSI architectures in terms of power, throughput and critical path delay.

  11. Fabricating process of hollow out-of-plane Ni microneedle arrays and properties of the integrated microfluidic device

    Science.gov (United States)

    Zhu, Jun; Cao, Ying; Wang, Hong; Li, Yigui; Chen, Xiang; Chen, Di

    2013-07-01

    Although microfluidic devices that integrate microfluidic chips with hollow out-of-plane microneedle arrays have many advantages in transdermal drug delivery applications, difficulties exist in their fabrication due to the special three-dimensional structures of hollow out-of-plane microneedles. A new, cost-effective process for the fabrication of a hollow out-of-plane Ni microneedle array is presented. The integration of PDMS microchips with the Ni hollow microneedle array and the properties of microfluidic devices are also presented. The integrated microfluidic devices provide a new approach for transdermal drug delivery.

  12. VLSI architecture and design for the Fermat Number Transform implementation

    Energy Technology Data Exchange (ETDEWEB)

    Pajayakrit, A.

    1987-01-01

    A new technique of sectioning a pipelined transformer using the Fermat Number Transform (FNT) is introduced, and a novel VLSI design which overcomes the problems of implementing FNTs for use in fast convolution/correlation is described. The design comprises one complete section of a pipelined transformer and may be programmed to function at any point in a forward or inverse pipeline, so allowing the construction of a pipelined convolver or correlator using identical chips; thus the favorable properties of the transform can be exploited. This overcomes the difficulty of fitting a complete pipeline onto one chip without resorting to the use of several different designs. The implementation of a high-speed convolver/correlator using the VLSI chips has been successfully developed and tested. For impulse response lengths of up to 16 points, sampling rates of 0.5 MHz can be achieved. Finally, the filter speed performance using the FNT chips is compared to other designs and conclusions are drawn on the merits of the FNT for this application. The advantages and limitations of the FNT are also analyzed, with respect to the more conventional FFT, and the results are provided.
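    The arithmetic that such a pipeline implements can be written down compactly. The sketch below is an illustrative software model only (parameters assumed for the example: modulus F4 = 2^16 + 1, transform length N = 32, root 2), computing an exact cyclic convolution through a direct O(N²) Fermat Number Transform rather than the chip's sectioned pipeline.

```python
# Fermat Number Transform (FNT) based cyclic convolution, the kind of
# arithmetic an FNT convolver pipelines.  2 is a 32nd root of unity mod F4
# because 2**16 = -1 (mod F4), so all twiddle factors are powers of two.
F = 2**16 + 1
N = 32
ALPHA = 2

def fnt(x, root):
    """Direct O(N^2) transform: X[k] = sum_n x[n] * root**(n*k) mod F."""
    return [sum(x[n] * pow(root, n * k, F) for n in range(N)) % F for k in range(N)]

def ifnt(X):
    inv_n = pow(N, F - 2, F)            # N^-1 mod F (F is prime)
    inv_root = pow(ALPHA, F - 2, F)     # ALPHA^-1 mod F
    return [(v * inv_n) % F for v in fnt(X, inv_root)]

def cyclic_convolution(a, b):
    A, B = fnt(a, ALPHA), fnt(b, ALPHA)
    return ifnt([(x * y) % F for x, y in zip(A, B)])

# Compare against a direct cyclic convolution; values are kept small enough
# that the true result is below F, so the match is exact (no rounding error).
import random
random.seed(0)
a = [random.randrange(0, 40) for _ in range(N)]
b = [random.randrange(0, 40) for _ in range(N)]
direct = [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]
print(cyclic_convolution(a, b) == direct)     # True
```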

  13. Assimilation of Biophysical Neuronal Dynamics in Neuromorphic VLSI.

    Science.gov (United States)

    Wang, Jun; Breen, Daniel; Akinin, Abraham; Broccard, Frederic; Abarbanel, Henry D I; Cauwenberghs, Gert

    2017-12-01

    Representing the biophysics of neuronal dynamics and behavior offers a principled analysis-by-synthesis approach toward understanding mechanisms of nervous system functions. We report on a set of procedures assimilating and emulating neurobiological data on a neuromorphic very large scale integrated (VLSI) circuit. The analog VLSI chip, NeuroDyn, features 384 digitally programmable parameters specifying 4 generalized Hodgkin-Huxley neurons coupled through 12 conductance-based chemical synapses. The parameters also describe reversal potentials, maximal conductances, and spline-regressed kinetic functions for ion channel gating variables. In one set of experiments, we assimilated membrane potential recorded from one of the neurons on the chip to the model structure upon which NeuroDyn was designed, using the known current input sequence. We arrived at the programmed parameters except for model errors due to analog imperfections in the chip fabrication. In a related set of experiments, we replicated songbird individual neuron dynamics on NeuroDyn by estimating and configuring parameters extracted using data assimilation from intracellular neural recordings. Faithful emulation of detailed biophysical neural dynamics will enable the use of NeuroDyn as a tool to probe electrical and molecular properties of functional neural circuits. Neuroscience applications include studying the relationship between molecular properties of neurons and the emergence of different spike patterns or different brain behaviors. Clinical applications include studying and predicting effects of neuromodulators or neurodegenerative diseases on ion channel kinetics.
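    For readers unfamiliar with the underlying model class, the sketch below integrates the classic Hodgkin-Huxley point neuron (standard squid-axon parameters) with a simple Euler scheme. It is only a software reference for what "Hodgkin-Huxley dynamics" means here; it does not reproduce NeuroDyn's generalized parameterization, the synapse models, or the data-assimilation procedure.

```python
import numpy as np

# Classic Hodgkin-Huxley point neuron, squid-axon parameters.
C = 1.0                                  # membrane capacitance, uF/cm^2
g_na, g_k, g_l = 120.0, 36.0, 0.3        # maximal conductances, mS/cm^2
e_na, e_k, e_l = 50.0, -77.0, -54.4      # reversal potentials, mV

def rates(v):
    """Voltage-dependent opening/closing rates of the m, h, n gating variables."""
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Euler integration; returns the number of spikes (upward crossings of 0 mV)."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / C
        if v > 0.0 and not above:
            spikes += 1
        above = v > 0.0
    return spikes

print(simulate(i_ext=10.0), "spikes in 50 ms")   # tonic firing for this drive
```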

  14. Development of Radhard VLSI electronics for SSC calorimeters

    International Nuclear Information System (INIS)

    Dawson, J.W.; Nodulman, L.J.

    1989-01-01

    A new program of development of integrated electronics for liquid argon calorimeters in the SSC detector environment is being started at Argonne National Laboratory. Scientists from Brookhaven National Laboratory and Vanderbilt University, together with an industrial participant, are expected to collaborate in this work. Interaction rates, segmentation, and the radiation environment dictate that front-end electronics of SSC calorimeters must be implemented in the form of highly integrated, radhard, analog, low noise, VLSI custom monolithic devices. Important considerations are power dissipation, choice of functions integrated on the front-end chips, and cabling requirements. An extensive level of expertise in radhard electronics exists within the industrial community, and a primary objective of this work is to bring that expertise to bear on the problems of SSC detector design. Radiation hardness measurements and requirements as well as calorimeter design will be primarily the responsibility of Argonne scientists and our Brookhaven and Vanderbilt colleagues. Radhard VLSI design and fabrication will be primarily the industrial participant's responsibility. The rapid-cycling synchrotron at Argonne will be used for radiation damage studies involving response to neutrons and charged particles, while damage from gammas will be investigated at Brookhaven. 10 refs., 6 figs., 2 tabs

  15. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    Science.gov (United States)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for the endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients, and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and human vision system limitations, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 end point in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.
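    One representative stage of such a pipeline, gamma correction, is typically mapped onto an FPGA as a precomputed lookup table so the per-pixel cost is a single memory read. The sketch below illustrates that idea in software; the bit depths, gamma value and frame size are illustrative assumptions, not the parameters of the camera described above.

```python
import numpy as np

def build_gamma_lut(in_bits=10, out_bits=8, gamma=2.2):
    """Precomputed gamma lookup table, the way the operation is commonly mapped
    to FPGA block RAM: one table read per pixel, no arithmetic in the pixel
    path.  Bit depths and gamma value here are illustrative."""
    in_max, out_max = (1 << in_bits) - 1, (1 << out_bits) - 1
    x = np.arange(in_max + 1) / in_max
    return np.round(out_max * np.power(x, 1.0 / gamma)).astype(np.uint16)

def apply_gamma(frame, lut):
    # Equivalent to the hardware table lookup, applied to a whole frame at once.
    return lut[frame]

lut = build_gamma_lut()
frame = np.random.default_rng(0).integers(0, 1024, size=(480, 640), dtype=np.uint16)
out = apply_gamma(frame, lut)
print(out.dtype, out.min(), out.max())
```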

  16. A pipelined architecture for real time correction of non-uniformity in infrared focal plane arrays imaging system using multiprocessors

    Science.gov (United States)

    Zou, Liang; Fu, Zhuang; Zhao, YanZheng; Yang, JunYan

    2010-07-01

    This paper proposes a pipelined circuit architecture, implemented in an FPGA (a very large scale integrated circuit, VLSI), which efficiently handles the real-time non-uniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPA). Dual Nios II soft-core processors and a DSP with a 64+ core together constitute this imaging system. Each processor undertakes its own task, coordinating its work with the others'. The system on programmable chip (SOPC) in the FPGA works steadily at a global clock frequency of 96 MHz. Adequate timing margin allows the FPGA to perform the NUC image pre-processing algorithm with ease, which offers a favorable guarantee for the post-processing work in the DSP. In addition, this paper presents a hardware (HW) and software (SW) co-design in the FPGA. This architecture thus yields a multiprocessor image processing system and a smart solution satisfying the performance requirements of the system.
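    The abstract does not spell out which NUC algorithm is pipelined, so the sketch below shows the most common baseline, two-point calibration: per-pixel gain and offset are derived from two uniform reference frames, and the correction itself is a single multiply-add per pixel, exactly the kind of operation that pipelines well. The synthetic focal plane and all parameters are assumptions for the example.

```python
import numpy as np

def two_point_nuc(cold, hot, t_cold=0.0, t_hot=1.0):
    """Per-pixel gain/offset from two uniform reference frames (a common
    baseline NUC; the paper's algorithm may differ).  Returns (gain, offset)
    such that corrected = gain * raw + offset maps every pixel onto the
    same linear response."""
    gain = (t_hot - t_cold) / (hot - cold)
    offset = t_cold - gain * cold
    return gain, offset

def correct(raw, gain, offset):
    return gain * raw + offset     # the per-pixel multiply-add a pipeline implements

# Synthetic 256x320 focal plane with fixed-pattern gain/offset non-uniformity
rng = np.random.default_rng(0)
true_gain = 1.0 + 0.1 * rng.standard_normal((256, 320))
true_offset = 50.0 + 5.0 * rng.standard_normal((256, 320))
scene = lambda flux: true_gain * flux + true_offset
g, o = two_point_nuc(scene(100.0), scene(200.0), t_cold=100.0, t_hot=200.0)
print(np.allclose(correct(scene(150.0), g, o), 150.0))    # True
```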

  17. Compressive sensing-based electrostatic sensor array signal processing and exhausted abnormal debris detecting

    Science.gov (United States)

    Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin

    2018-05-01

    When faults happen at gas path components of gas turbines, some sparsely distributed and charged debris will be generated and released into the exhaust gas. This debris is called abnormal debris. Electrostatic sensors can detect the debris online and further indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a piece of larger debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of the fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, measuring debris charge accurately using the electrostatic detecting method is still a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensors' circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described as a sparse vector. It is further reconstructed by constraining its l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.
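    The l1-constrained recovery step can be illustrated generically. The sketch below uses iterative soft thresholding (ISTA), one standard way to solve such problems, on a random matrix standing in for the discretized HSESCA measurement model; the sensor count, grid size and sparsity level are invented for the example, and the paper may use a different solver.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=1000):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding,
    one standard way to impose the l1 constraint of compressive sensing."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L        # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

# Stand-in for the discretized measurement model: 30 synthetic sensor readings,
# 200 candidate debris positions, 3 charged particles (all numbers invented).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 200)) / np.sqrt(30)
x_true = np.zeros(200)
x_true[[20, 77, 150]] = [1.0, -0.6, 0.8]
y = A @ x_true
x_hat = ista(A, y)
print(np.sort(np.argsort(np.abs(x_hat))[-3:]))   # largest entries: [ 20  77 150]
```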

  18. Remote online process measurements by a fiber optic diode array spectrometer

    International Nuclear Information System (INIS)

    Van Hare, D.R.; Prather, W.S.; O'Rourke, P.E.

    1986-01-01

    The development of remote online monitors for radioactive process streams is an active research area at the Savannah River Laboratory (SRL). A remote offline spectrophotometric measurement system has been developed and used at the Savannah River Plant (SRP) for the past year to determine the plutonium concentration of process solution samples. The system consists of a commercial diode array spectrophotometer modified with fiber optic cables that allow the instrument to be located remotely from the measurement cell. Recently, a fiber optic multiplexer has been developed for this instrument, which allows online monitoring of five locations sequentially. The multiplexer uses a motorized micrometer to drive one of five sets of optical fibers into the optical path of the instrument. A sixth optical fiber is used as an external reference and eliminates the need to flush out process lines to re-reference the spectrophotometer. The fiber optic multiplexer has been installed in a process prototype facility to monitor uranium loading and breakthrough of ion exchange columns. The design of the fiber optic multiplexer is discussed and data from the prototype facility are presented to demonstrate the capabilities of the measurement system

  19. VLSI implementation of an AMDF pitch detector

    OpenAIRE

    Smith, Tony; Gittel, Falko; Schwarzbacher, Andreas; Hilt, E.; Timoney, Joseph

    2003-01-01

    Pitch detectors are used in a variety of speech processing applications such as speech recognition systems, where the pitch of the speaker is used as one parameter for identification purposes. Furthermore, pitch detectors are also used with adaptive filters to achieve high quality adaptive noise cancellation of speech signals. In voice conversion systems, pitch detection is an essential step since the pitch of the modified signal is altered to model the target voice. This paper describes a ...
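    The AMDF itself can be stated in a few lines: for each candidate lag, average the magnitude of the difference between the frame and its shifted copy, and take the lag of the deepest valley as the pitch period. The sketch below illustrates this; the frame length, sampling rate and search range are illustrative choices, not those of the VLSI implementation.

```python
import numpy as np

def amdf_pitch(frame, fs, f_min=80.0, f_max=400.0):
    """Average Magnitude Difference Function pitch estimate: for each candidate
    lag tau, average |x[n] - x[n+tau]|; the deepest valley marks the pitch
    period.  The search range limits are illustrative."""
    tau_min = int(fs / f_max)
    tau_max = int(fs / f_min)
    amdf = np.array([np.mean(np.abs(frame[:-tau] - frame[tau:]))
                     for tau in range(tau_min, tau_max + 1)])
    best_tau = tau_min + int(np.argmin(amdf))
    return fs / best_tau

# 40 ms frame of a synthetic 120 Hz voiced sound (fundamental + harmonic + noise)
fs = 8000
t = np.arange(int(0.040 * fs)) / fs
frame = (np.sin(2 * np.pi * 120 * t) + 0.4 * np.sin(2 * np.pi * 240 * t)
         + 0.05 * np.random.default_rng(0).standard_normal(t.size))
print(round(amdf_pitch(frame, fs), 1))     # close to 120.0 Hz
```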

  20. VLSI for High-Speed Digital Signal Processing

    Science.gov (United States)

    1994-09-30

    particular, the design, layout and fabrication of integrated circuits. The primary project for this grant has been the design and implementation of a... targeted at 33.36 dB, and the FRSBC algorithm, targeted at 0.5 bits/pixel, respectively. [Residue of a table listing PSNR (dB) and rate (bpp) for the FDSBC and FRSBC filters.] The filter... to mean square error... as shown in Fig. 6, is used, yielding a total of 16 subbands. The rates, in bits per pixel (bpp), and the peak signal

  1. Continuous catchment-scale monitoring of geomorphic processes with a 2-D seismological array

    Science.gov (United States)

    Burtin, A.; Hovius, N.; Milodowski, D.; Chen, Y.-G.; Wu, Y.-M.; Lin, C.-W.; Chen, H.

    2012-04-01

    The monitoring of geomorphic processes during extreme climatic events is of primary interest for estimating their impact on landscape dynamics. However, available techniques for surveying surface activity do not provide a relevant time and/or space resolution. Furthermore, these methods hardly investigate the dynamics of the events since their detection is made a posteriori. To increase our knowledge of landscape evolution and the influence of extreme climatic events on catchment dynamics, we need to develop new tools and procedures. Many past works have shown that seismic signals are relevant for detecting and locating surface processes (landslides, debris flows). During the 2010 typhoon season, we deployed a network of 12 seismometers dedicated to monitoring the surface processes of the Chenyoulan catchment in Taiwan. We test the ability of a two-dimensional array with small inter-station distances (~11 km) to map the geomorphic activity continuously and at catchment scale. The spectral analysis of continuous records shows high-frequency (>1 Hz) seismic energy that is coherent with the occurrence of hillslope and river processes. Using a basic detection algorithm and a location approach based on the analysis of seismic amplitudes, we manage to locate the catchment activity. We mainly observe short-duration events (>300 occurrences) associated with debris falls and bank collapses during daily convective storms, where 69% of occurrences are coherent with the time distribution of precipitation. We also identify a couple of debris flows during a large tropical storm. In contrast, the FORMOSAT imagery does not detect any activity, which somehow reflects the lack of extreme climatic conditions during the experiment. However, high resolution pictures confirm the existence of links between most geomorphic events and existing structures (landslide scars, gullies...). We thus conclude that the activity is dominated by reactivation processes. It

  2. X-ray imager using solution processed organic transistor arrays and bulk heterojunction photodiodes on thin, flexible plastic substrate

    NARCIS (Netherlands)

    Gelinck, G.H.; Kumar, A.; Moet, D.; Steen, J.L. van der; Shafique, U.; Malinowski, P.E.; Myny, K.; Rand, B.P.; Simon, M.; Rütten, W.; Douglas, A.; Jorritsma, J.; Heremans, P.L.; Andriessen, H.A.J.M.

    2013-01-01

    We describe the fabrication and characterization of a high-quality, large-area active-matrix X-ray/photodetector array using organic photodiodes and organic transistors. All layers, with the exception of the electrodes, are solution processed. Because it is processed on a very thin plastic substrate

  3. Application of evolutionary algorithms for multi-objective optimization in VLSI and embedded systems

    CERN Document Server

    2015-01-01

    This book describes how evolutionary algorithms (EA), including genetic algorithms (GA) and particle swarm optimization (PSO) can be utilized for solving multi-objective optimization problems in the area of embedded and VLSI system design. Many complex engineering optimization problems can be modelled as multi-objective formulations. This book provides an introduction to multi-objective optimization using meta-heuristic algorithms, GA and PSO, and how they can be applied to problems like hardware/software partitioning in embedded systems, circuit partitioning in VLSI, design of operational amplifiers in analog VLSI, design space exploration in high-level synthesis, delay fault testing in VLSI testing, and scheduling in heterogeneous distributed systems. It is shown how, in each case, the various aspects of the EA, namely its representation, and operators like crossover, mutation, etc. can be separately formulated to solve these problems. This book is intended for design engineers and researchers in the field ...

  4. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph model for the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks" that are presented in great detail, including the source code of several of the techniques p...

  5. Cas4-Dependent Prespacer Processing Ensures High-Fidelity Programming of CRISPR Arrays.

    Science.gov (United States)

    Lee, Hayun; Zhou, Yi; Taylor, David W; Sashital, Dipali G

    2018-04-05

    CRISPR-Cas immune systems integrate short segments of foreign DNA as spacers into the host CRISPR locus to provide molecular memory of infection. Cas4 proteins are widespread in CRISPR-Cas systems and are thought to participate in spacer acquisition, although their exact function remains unknown. Here we show that Bacillus halodurans type I-C Cas4 is required for efficient prespacer processing prior to Cas1-Cas2-mediated integration. Cas4 interacts tightly with the Cas1 integrase, forming a heterohexameric complex containing two Cas1 dimers and two Cas4 subunits. In the presence of Cas1 and Cas2, Cas4 processes double-stranded substrates with long 3' overhangs through site-specific endonucleolytic cleavage. Cas4 recognizes PAM sequences within the prespacer and prevents integration of unprocessed prespacers, ensuring that only functional spacers will be integrated into the CRISPR array. Our results reveal the critical role of Cas4 in maintaining fidelity during CRISPR adaptation, providing a structural and mechanistic model for prespacer processing and integration. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    Science.gov (United States)

    2007-03-31

    Final report, 03/01/04 - 02/28/07: Neuromorphic VLSI-based Bat Echolocation for Micro-aerial Vehicle... uncovered interesting new issues in our choice for representing the intensity of signals. We have just finished testing the first chip version of an echo... timing-based algorithm ('openspace') for sonar-guided navigation amidst multiple obstacles. Subject terms: neuromorphic VLSI, bat echolocation

  7. VLSI Architectures for the Multiplication of Integers Modulo a Fermat Number

    Science.gov (United States)

    Chang, J. J.; Truong, T. K.; Reed, I. S.; Hsu, I. S.

    1984-01-01

    Multiplication is central in the implementation of Fermat number transforms and other residue number algorithms. There is a need for a good multiplication algorithm that can be realized easily on a very large scale integration (VLSI) chip. The Leibowitz multiplier is modified to realize multiplication in the ring of integers modulo a Fermat number. This new algorithm requires only a sequence of cyclic shifts and additions. The designs developed for this new multiplier are regular, simple, expandable, and, therefore, suitable for VLSI implementation.
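    The reason shifts and additions suffice is the identity 2^b = -1 (mod F) for F = 2^b + 1: any intermediate value can be folded into its low and high b-bit halves with a subtraction. The sketch below demonstrates that arithmetic identity and a bit-serial style multiply built on it; it is illustrative only and does not reproduce the Leibowitz multiplier or the diminished-one encoding used in hardware.

```python
# Multiplication modulo a Fermat number using only shifts, masks and
# additions/subtractions.  Illustrative modulus: F4 = 2**16 + 1.
B = 16
F = (1 << B) + 1

def reduce_mod_fermat(x):
    """Reduce a non-negative integer modulo F = 2**B + 1 without division:
    since 2**B = -1 (mod F), the high half of x folds into the low half
    with a sign change."""
    while x >= F:
        x = (x & ((1 << B) - 1)) - (x >> B)
    while x < 0:
        x += F
    return x

def shift_add_multiply(a, b):
    """Multiply a*b mod F by scanning the bits of b: each step is a doubling
    (a shift, reduced by the folding rule) and an optional addition."""
    result = 0
    a = reduce_mod_fermat(a)
    while b:
        if b & 1:
            result = reduce_mod_fermat(result + a)
        a = reduce_mod_fermat(a << 1)      # doubling via shift-and-fold
        b >>= 1
    return result

import random
random.seed(0)
ok = all(shift_add_multiply(a, b) == (a * b) % F
         for a, b in [(random.randrange(F), random.randrange(F)) for _ in range(1000)])
print(ok)   # True
```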

  8. Magnetic alloy nanowire arrays with different lengths: Insights into the crossover angle of magnetization reversal process

    Energy Technology Data Exchange (ETDEWEB)

    Samanifar, S.; Alikhani, M. [Department of Physics, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Almasi Kashi, M., E-mail: almac@kashanu.ac.ir [Department of Physics, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Institute of Nanoscience and Nanotechnology, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Ramazani, A. [Department of Physics, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Institute of Nanoscience and Nanotechnology, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Montazer, A.H. [Institute of Nanoscience and Nanotechnology, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of)

    2017-05-15

    Nanoscale magnetic alloy wires are being actively investigated, providing fundamental insights into tuning properties in magnetic data storage and processing technologies. However, previous studies give trivial information about the crossover angle of magnetization reversal process in alloy nanowires (NWs). Here, magnetic alloy NW arrays with different compositions, composed of Fe, Co and Ni have been electrochemically deposited into hard-anodic aluminum oxide templates with a pore diameter of approximately 150 nm. Under optimized conditions of alumina barrier layer and deposition bath concentrations, the resulting alloy NWs with aspect ratio and saturation magnetization (M{sub s}) up to 550 and 1900 emu cm{sup −3}, respectively, are systematically investigated in terms of composition, crystalline structure and magnetic properties. Using angular dependence of coercivity extracted from hysteresis loops, the reversal processes are evaluated, indicating non-monotonic behavior. The crossover angle (θ{sub c}) is found to depend on NW length and M{sub s}. At a constant M{sub s}, increasing NW length decreases θ{sub c}, thereby decreasing the involvement of vortex mode during the magnetization reversal process. On the other hand, decreasing M{sub s} decreases θ{sub c} in large aspect ratio (>300) alloy NWs. Phenomenologically, it is newly found that increasing Ni content in the composition decreases θ{sub c}. The angular first-order reversal curve (AFORC) measurements including the irreversibility of magnetization are also investigated to gain a more detailed insight into θ{sub c}. - Highlights: • Magnetic alloy NWs with aspect ratios up to 550 were fabricated into hard-AAO templates. • Morphology, composition, crystal structure and magnetic properties were investigated. • Angular dependence of coercivity was used to describe the magnetization reversal process. • The crossover angle of magnetization reversal was found to depend on NW length and M{sub s}.

  9. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    Science.gov (United States)

    Chuang, Chen

    Dilute nitride GaNP is a promising material for opto-electronic applications due to its band gap tunability. The efficiency of a GaNxP1-x/GaNyP1-y core-shell nanowire solar cell (NWSC) is expected to reach as high as 44% with 1% N and 9% N in the core and shell, respectively. By developing such high-efficiency NWSCs on silicon substrates, the cost of solar photovoltaics can be reduced to 61$/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. A suitable NWSC structure and fabrication process therefore need to be developed to realize this promising device. This thesis is devoted to the development of the fabrication process of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells and is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of gallium nitride phosphide nanowire solar cells. The design of the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer and metal contacts is discussed, and the devices are fabricated. The properties of these NWSCs are also characterized to point out directions for the future development of gallium nitride phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective area growth of nanowires to improve the core-shell NWSC structure. The fabrication process of the nano-hole templates and the results are presented. To obtain consistent nano-hole template features, the Taguchi method is used to optimize the template fabrication process.

  10. Global floor planning approach for VLSI design

    International Nuclear Information System (INIS)

    LaPotin, D.P.

    1986-01-01

    Within a hierarchical design environment, initial decisions regarding the partitioning and choice of module attributes greatly impact the quality of the resulting IC in terms of area and electrical performance. This dissertation presents a global floor-planning approach which allows designers to quickly explore layout issues during the initial stages of the IC design process. In contrast to previous efforts, which address the floor-planning problem from a strict module placement point of view, this approach considers floor-planning from an area planning point of view. The approach is based upon a combined min-cut and slicing paradigm, which ensures routability. To provide flexibility, modules may be specified as having a number of possible dimensions and orientations, and I/O pads as well as layout constraints are considered. A slicing-tree representation is employed, upon which a sequence of traversal operations are applied in order to obtain an area efficient layout. An in-place partitioning technique, which provides an improvement over previous min-cut and slicing-based efforts, is discussed. Global routing and module I/O pin assignment are provided for floor-plan evaluation purposes. A computer program, called Mason, has been developed which efficiently implements the approach and provides an interactive environment for designers to perform floor-planning. Performance of this program is illustrated via several industrial examples
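
    A minimal sketch of the slicing-tree bookkeeping such an area planner relies on, assuming each module offers several candidate (width, height) footprints and each internal node is a horizontal or vertical cut; the pruning heuristic and the toy modules are illustrative, not Mason's actual algorithm.

```python
from itertools import product

# A leaf is a list of candidate (width, height) options for one module;
# an internal node is ('H' or 'V', left_subtree, right_subtree).
def shapes(node, keep=8):
    """Candidate (width, height) bounding boxes of a slicing subtree."""
    if isinstance(node, list):                      # leaf: module footprints
        return node
    cut, left, right = node
    options = set()
    for (wl, hl), (wr, hr) in product(shapes(left, keep), shapes(right, keep)):
        if cut == 'H':                              # children stacked vertically
            options.add((max(wl, wr), hl + hr))
        else:                                       # 'V': children side by side
            options.add((wl + wr, max(hl, hr)))
    return sorted(options, key=lambda wh: wh[0] * wh[1])[:keep]   # crude pruning

# toy floor plan: modules A and B side by side, C stacked beneath them
A, B, C = [(2, 4), (4, 2)], [(3, 3)], [(6, 2), (2, 6)]
w, h = shapes(('H', ('V', A, B), C))[0]
print(f"best bounding box: {w} x {h}, area {w * h}")
```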

  11. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    Science.gov (United States)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  12. Low-Noise CMOS Circuits for On-Chip Signal Processing in Focal-Plane Arrays

    Science.gov (United States)

    Pain, Bedabrata

    The performance of focal-plane arrays can be significantly enhanced through the use of on-chip signal processing. Novel, in-pixel, on-focal-plane, analog signal-processing circuits for high-performance imaging are presented in this thesis. The presence of a high background-radiation is a major impediment for infrared focal-plane array design. An in-pixel, background-suppression scheme, using dynamic analog current memory circuit, is described. The scheme also suppresses spatial noise that results from response non-uniformities of photo-detectors, leading to background limited infrared detector readout performance. Two new, low-power, compact, current memory circuits, optimized for operation at ultra-low current levels required in infrared-detection, are presented. The first one is a self-cascading current memory that increases the output impedance, and the second one is a novel, switch feed-through reducing current memory, implemented using error-current feedback. This circuit can operate with a residual absolute -error of less than 0.1%. The storage-time of the memory is long enough to also find applications in neural network circuits. In addition, a voltage-mode, accurate, low-offset, low-power, high-uniformity, random-access sample-and-hold cell, implemented using a CCD with feedback, is also presented for use in background-suppression and neural network applications. A new, low noise, ultra-low level signal readout technique, implemented by individually counting photo-electrons within the detection pixel, is presented. The output of each unit-cell is a digital word corresponding to the intensity of the photon flux, and the readout is noise free. This technique requires the use of unit-cell amplifiers that feature ultra-high-gain, low-power, self-biasing capability and noise in sub-electron levels. Both single-input and differential-input implementations of such amplifiers are investigated. A noise analysis technique is presented for analyzing sampled

  13. VLSI and system architecture-the new development of system 5G

    Energy Technology Data Exchange (ETDEWEB)

    Sakamura, K.; Sekino, A.; Kodaka, T.; Uehara, T.; Aiso, H.

    1982-01-01

    A research and development proposal is presented for VLSI CAD systems and for a hardware environment called system 5G on which the VLSI CAD systems run. The proposed CAD systems use a hierarchically organized design language to enable design of anything from basic VLSI architectures to VLSI mask patterns in a uniform manner. The CAD systems will eventually become intelligent CAD systems that acquire design knowledge and perform automatic design of VLSI chips when the characteristic requirements of a VLSI chip are given. System 5G will consist of superinference machines and the 5G communication network. The superinference machine will be built on a functionally distributed architecture connecting inference machines and relational database machines via a high-speed local network. The transfer rate of the local network will be 100 Mbps at the first stage of the project and will be improved to 1 Gbps. Remote access to the superinference machine will be possible through the 5G communication network. Access to system 5G will use the 5G network architecture protocol. Users will access system 5G using standardized 5G personal computers and 5G personal logic programming stations: very highly intelligent terminals providing an instruction set that supports predicate logic as well as input/output facilities for audio and graphical information.

  14. PERFORMANCE OF LEAKAGE POWER MINIMIZATION TECHNIQUE FOR CMOS VLSI TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    T. Tharaneeswaran

    2012-06-01

    Full Text Available Leakage power in CMOS VLSI technology is a great concern. To reduce leakage power in CMOS circuits, a Leakage Power Minimization Technique (LPMT) is implemented in this paper. Leakage currents are monitored and compared, and the comparator kicks the charge pump to supply the body voltage (Vbody). Simulations of these circuits are done using TSMC 0.35 µm technology at various operating temperatures. A current-steering Digital-to-Analog Converter (CSDAC) is used as the test core to validate the idea. The test core (an 8-bit CSDAC) had a power consumption of 347.63 mW, while the LPMT circuit alone consumes 6.3405 mW. The technique reduces the leakage power of the 8-bit CSDAC by 5.51 mW and increases the reliability of the test core. Mentor Graphics ELDO and EZ-wave are used for simulations.

  15. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
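
    A hedged software analogy of the pre-trigger/post-trigger archiving idea: the record describes a VLSI state machine with fuzzy-logic change detection, so a plain frame-difference test and a ring buffer stand in for it here, and all parameter values are illustrative.

```python
from collections import deque
import numpy as np

def capture_event(frames, pre=30, post=30, threshold=25.0):
    """Archive only the frames surrounding a detected video event.

    `frames` is an iterable of 2-D numpy arrays; the mean absolute
    difference between consecutive frames is a crude stand-in for the
    chip's fuzzy-logic change detector."""
    ring = deque(maxlen=pre)                 # pre-trigger history
    archive, remaining, prev = [], None, None
    for frame in frames:
        if remaining is not None:            # post-trigger phase
            archive.append(frame)
            remaining -= 1
            if remaining == 0:
                break
        else:
            if prev is not None and np.mean(np.abs(frame.astype(float) - prev)) > threshold:
                archive = list(ring) + [frame]   # flush pre-trigger buffer
                remaining = post
            else:
                ring.append(frame)
            prev = frame.astype(float)
    return archive
```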

  16. Carbon nanotube based VLSI interconnects analysis and design

    CERN Document Server

    Kaushik, Brajesh Kumar

    2015-01-01

    The brief primarily focuses on the performance analysis of CNT based interconnects in current research scenario. Different CNT structures are modeled on the basis of transmission line theory. Performance comparison for different CNT structures illustrates that CNTs are more promising than Cu or other materials used in global VLSI interconnects. The brief is organized into five chapters which mainly discuss: (1) an overview of current research scenario and basics of interconnects; (2) unique crystal structures and the basics of physical properties of CNTs, and the production, purification and applications of CNTs; (3) a brief technical review, the geometry and equivalent RLC parameters for different single and bundled CNT structures; (4) a comparative analysis of crosstalk and delay for different single and bundled CNT structures; and (5) various unique mixed CNT bundle structures and their equivalent electrical models.

  17. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    NARCIS (Netherlands)

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic

  18. A novel, substrate independent three-step process for the growth of uniform ZnO nanorod arrays

    International Nuclear Information System (INIS)

    Byrne, D.; McGlynn, E.; Henry, M.O.; Kumar, K.; Hughes, G.

    2010-01-01

    We report a three-step deposition process for uniform arrays of ZnO nanorods, involving chemical bath deposition of aligned seed layers followed by nanorod nucleation sites and subsequent vapour phase transport growth of nanorods. This combines chemical bath deposition techniques, which enable substrate independent seeding and nucleation site generation with vapour phase transport growth of high crystalline and optical quality ZnO nanorod arrays. Our data indicate that the three-step process produces uniform nanorod arrays with narrow and rather monodisperse rod diameters (∼ 70 nm) across substrates of centimetre dimensions. X-ray photoelectron spectroscopy, scanning electron microscopy and X-ray diffraction were used to study the growth mechanism and characterise the nanostructures.

  19. A neuromorphic VLSI device for implementing 2-D selective attention systems.

    Science.gov (United States)

    Indiveri, G

    2001-01-01

    Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding limited processing capacity systems with sensory inputs. It is found in many biological systems and can be a useful engineering tool for developing artificial systems that need to process in real-time sensory data. In this paper we present a neuromorphic hardware model of a selective attention mechanism implemented on a very large scale integration (VLSI) chip, using analog circuits. The chip makes use of a spike-based representation for receiving input signals, transmitting output signals and for shifting the selection of the attended input stimulus over time. It can be interfaced to neuromorphic sensors and actuators, for implementing multichip selective attention systems. We describe the characteristics of the circuits used in the architecture and present experimental data measured from the system.

  20. Proceedings of the Adaptive Sensor Array Processing Workshop (12th) Held in Lexington, MA on 16-18 March 2004 (CD-ROM)

    National Research Council Canada - National Science Library

    James, F

    2004-01-01

    ...: The twelfth annual workshop on Adaptive Sensor Array Processing presented a diverse agenda featuring new work on adaptive methods for communications, radar and sonar, algorithmic challenges posed...

  1. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
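
    A minimal sketch of the work-farm pattern described above, using ordinary Python multiprocessing rather than the MPPA's structural object model; the worker computation and job format are placeholders.

```python
import multiprocessing as mp

def worker(job):
    """Stand-in for a per-processor object, e.g. compressing one video block."""
    block_id, data = job
    return block_id, sum(data)               # placeholder computation

def work_farm(jobs, n_workers=4):
    """One input stream fanned out to identical workers, one merged
    output stream (order preserved by imap)."""
    with mp.Pool(n_workers) as pool:
        yield from pool.imap(worker, jobs)

if __name__ == "__main__":
    jobs = ((i, list(range(i, i + 8))) for i in range(16))
    for block_id, result in work_farm(jobs):
        print(block_id, result)
```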

  2. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    Energy Technology Data Exchange (ETDEWEB)

    Kvaerna, T.; Gibbons. S.J.; Ringdal, F; Harris, D.B.

    2007-01-30

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst reviewed contexts: • one or more fixed frequency bands should be chosen for each phase tested for. • the frequency band providing the most stable parameter estimates varies from site to site and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site. • slowness corrections applied depend strongly on the frequency band chosen. • the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain. For this reason, the automatic template location estimates provided here are frequently far better than those obtained by

  3. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    International Nuclear Information System (INIS)

    Kvaerna, T.; Gibbons. S.J.; Ringdal, F; Harris, D.B.

    2007-01-01

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst reviewed contexts: (1) one or more fixed frequency bands should be chosen for each phase tested for; (2) the frequency band providing the most stable parameter estimates varies from site to site and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site; (3) slowness corrections applied depend strongly on the frequency band chosen; (4) the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain. For this reason, the automatic template location estimates provided here are frequently far better than those obtained by

  4. Process-morphology scaling relations quantify self-organization in capillary densified nanofiber arrays.

    Science.gov (United States)

    Kaiser, Ashley L; Stein, Itai Y; Cui, Kehang; Wardle, Brian L

    2018-02-07

    Capillary-mediated densification is an inexpensive and versatile approach to tune the application-specific properties and packing morphology of bulk nanofiber (NF) arrays, such as aligned carbon nanotubes. While NF length governs elasto-capillary self-assembly, the geometry of cellular patterns formed by capillary densified NFs cannot be precisely predicted by existing theories. This originates from the recently quantified orders of magnitude lower than expected NF array effective axial elastic modulus (E), and here we show via parametric experimentation and modeling that E determines the width, area, and wall thickness of the resulting cellular pattern. Both experiments and models show that further tuning of the cellular pattern is possible by altering the NF-substrate adhesion strength, which could enable the broad use of this facile approach to predictably pattern NF arrays for high value applications.

  5. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    OpenAIRE

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic illumination pattern, i.e., the light pattern of a single LED, and the setting of LED illumination levels, respectively. Thereafter, we propose to use the relative mean squared error as the cost function ...
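
    A hedged numerical sketch of the setup the abstract outlines: superpose the basic illumination pattern of each LED in a regular ceiling grid on a target plane and score uniformity with a relative mean-squared-error cost. The Lambertian-order pattern, room dimensions, and grid resolution are assumptions, not values from the paper.

```python
import numpy as np

def illumination(dimming, led_xy, grid, height=2.5, m=1.0):
    """Total illuminance on floor points from a regular LED array; each LED
    follows a Lambertian-like basic pattern of order m (an assumption)."""
    E = np.zeros(len(grid))
    for d, (lx, ly) in zip(dimming, led_xy):
        r2 = (grid[:, 0] - lx) ** 2 + (grid[:, 1] - ly) ** 2 + height ** 2
        cos_t = height / np.sqrt(r2)
        E += d * cos_t ** m * height / r2       # single-LED basic pattern
    return E

def relative_mse(E, target):
    return np.mean((E - target) ** 2) / target ** 2

# 4 x 4 LED ceiling array over a 4 m x 4 m room, coarse floor grid
led_xy = [(x, y) for x in np.linspace(0.5, 3.5, 4) for y in np.linspace(0.5, 3.5, 4)]
grid = np.array([(x, y) for x in np.linspace(0, 4, 21) for y in np.linspace(0, 4, 21)])
E = illumination(np.ones(len(led_xy)), led_xy, grid)
print("relative MSE vs. mean level:", relative_mse(E, E.mean()))
```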

  6. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    Science.gov (United States)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple-motion-sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45 times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the conventional best result, it still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.

  7. Emergent auditory feature tuning in a real-time neuromorphic VLSI system

    Directory of Open Access Journals (Sweden)

    Sadique Sheik

    2012-02-01

    Full Text Available Many sounds of ecological importance, such as communication calls, are characterised by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamocortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectrotemporal patterns. The availability of hardware on which the model can be implemented makes this a significant step towards the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  8. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.

    Science.gov (United States)

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta

    2012-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.
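
    A minimal sketch of a pair-based STDP rule of the kind such hardware implements; the time constants, amplitudes, and weight bounds are illustrative, not the chip's parameters.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: dt = t_post - t_pre in ms.
    Pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    return float(np.where(dt > 0,
                          a_plus * np.exp(-dt / tau_plus),
                          -a_minus * np.exp(dt / tau_minus)))

def update_weight(w, pre_times, post_times, w_min=0.0, w_max=1.0):
    """Apply the rule to every pre/post spike pair of one synapse."""
    for t_pre in pre_times:
        for t_post in post_times:
            w += stdp_dw(t_post - t_pre)
    return float(np.clip(w, w_min, w_max))

print(update_weight(0.5, pre_times=[10.0, 30.0], post_times=[12.0, 28.0]))
```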

  9. Processing And Display Of Medical Three Dimensional Arrays Of Numerical Data Using Octree Encoding

    Science.gov (United States)

    Amans, Jean-Louis; Darier, Pierre

    1986-05-01

    Imaging modalities such as X-ray computerized tomography (CT), nuclear medicine and nuclear magnetic resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging.

  10. Mathematical analysis of the real time array PCR (RTA PCR) process

    NARCIS (Netherlands)

    Dijksman, Johan Frederik; Pierik, A.

    2012-01-01

    Real time array PCR (RTA PCR) is a recently developed biochemical technique that measures amplification curves (like with quantitative real time Polymerase Chain Reaction (qRT PCR)) of a multitude of different templates in a sample. It combines two different methods in order to profit from the

  11. An analog VLSI chip emulating polarization vision of Octopus retina.

    Science.gov (United States)

    Momeni, Massoud; Titus, Albert H

    2006-01-01

    Biological systems provide a wealth of information which form the basis for human-made artificial systems. In this work, the visual system of Octopus is investigated and its polarization sensitivity mimicked. While in actual Octopus retina, polarization vision is mainly based on the orthogonal arrangement of its photoreceptors, our implementation uses a birefringent micropolarizer made of YVO4 and mounted on a CMOS chip with neuromorphic circuitry to process linearly polarized light. Arranged in an 8 x 5 array with two photodiodes per pixel, each consuming typically 10 microW, this circuitry mimics both the functionality of individual Octopus retina cells by computing the state of polarization and the interconnection of these cells through a bias-controllable resistive network.

  12. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-10

    ... circuit boards. A computational electromagnetics software package, FEKO [24], is used to model the antenna arrays, and the RMIM [12] is used to ...

  13. An area-efficient topology for VLSI implementation of Viterbi decoders and other shuffle-exchange type structures

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1991-01-01

    A topology for single-chip implementation of computing structures based on shuffle-exchange (SE)-type interconnection networks is presented. The topology is suited for structures with a small number of processing elements (i.e. 32-128) whose area cannot be neglected compared to the area required ... The topology has been used in a VLSI implementation of the add-compare-select (ACS) module of a fully parallel K=7, R=1/2 Viterbi decoder. Both the floor-planning issues and some of the important algorithm and circuit-level aspects of this design are discussed. The chip has been designed and fabricated in a 2 ... The interconnection network occupies 32% of the area ...

  14. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three array cameras of silicon photodiodes (AXUV type from IRD), which are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative tomographic reconstruction algorithm. Currently, these signals are acquired by a standard PXI system at approximately 50 kS/s with 12 bits of resolution and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a standard PXI system. A new architecture model has been developed that makes it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real time, the data from all acquired signals among the processing cards and the PXI controller. By distributing the data processing among the system controller and two processing cards, the processing can be done in parallel with the acquisition. Hence, this system configuration would be able to measure even in long-pulse devices.

  15. Process Development And Simulation For Cold Fabrication Of Doubly Curved Metal Plate By Using Line Array Roll Set

    International Nuclear Information System (INIS)

    Shim, D. S.; Jung, C. G.; Seong, D. Y.; Yang, D. Y.; Han, J. M.; Han, M. S.

    2007-01-01

    For effective manufacturing of a doubly curved sheet metal, a novel sheet metal forming process is proposed. The suggested process uses a Line Array Roll Set (LARS) composed of a pair of upper and lower roll assemblies in a symmetric manner. The process offers flexibility as compared with the conventional manufacturing processes, because it does not require any complex-shaped die and loss of material by blank-holding is minimized. LARS allows flexibility of the incremental forming process and adopts the principle of bending deformation, resulting in a slight deformation in thickness. Rolls composed of line array roll sets are divided into a driving roll row and two idle roll rows. The arrayed rolls in the central lines of the upper and lower roll assemblies are motor-driven so that they deform and transfer the sheet metal using friction between the rolls and the sheet metal. The remaining rolls are idle rolls, generating bending deformation with driving rolls. Furthermore, all the rolls are movable in any direction so that they are adaptable to any size or shape of the desired three-dimensional configuration. In the process, the sheet is deformed incrementally as deformation proceeds simultaneously in rolling and transverse directions step by step. Consequently, it can be applied to the fabrication of doubly curved ship hull plates by undergoing several passes. In this work, FEM simulations are carried out for verification of the proposed incremental forming system using the chosen design parameters. Based on the results of the simulation, the relationship between the roll set configuration and the curvature of a sheet metal is determined. The process information such as the forming loads and torques acting on every roll is analyzed as important data for the design and development of the manufacturing system

  16. Synthesis of on-chip control circuits for mVLSI biochips

    DEFF Research Database (Denmark)

    Potluri, Seetal; Schneider, Alexander Rüdiger; Hørslev-Petersen, Martin

    2017-01-01

    ... them to laboratory environments. To address this issue, researchers have proposed methods to reduce the number of off-chip pressure sources through integration of on-chip pneumatic control logic circuits fabricated using three-layer monolithic membrane valve technology. Traditionally, mVLSI biochip ... -chip control circuit design and (iii) the integration of on-chip control in the placement and routing design tasks. In this paper we present a design methodology for logic synthesis and physical synthesis of mVLSI biochips that use on-chip control. We show how the proposed methodology can be successfully applied to generate biochip layouts with integrated on-chip pneumatic control.

  17. The GLUEchip: A custom VLSI chip for detectors readout and associative memories circuits

    International Nuclear Information System (INIS)

    Amendolia, S.R.; Galeotti, S.; Morsani, F.; Passuello, D.; Ristori, L.; Turini, N.

    1993-01-01

    An associative memory full-custom VLSI chip for pattern recognition, the AMchip, has been designed and tested in past years; it contains 128 patterns of 60 bits each. To expand the pattern capacity of an associative memory bank, the custom VLSI GLUEchip has been developed. The GLUEchip allows the interconnection of up to 16 AMchips or up to 16 GLUEchips: the resulting tree-like structure works like a single AMchip with a pipelined output structure and a pattern capacity increased by a factor of 16 for each GLUEchip used.

  18. Digital VLSI design with Verilog a textbook from Silicon Valley Technical Institute

    CERN Document Server

    Williams, John

    2008-01-01

    This unique textbook is structured as a step-by-step course of study along the lines of a VLSI IC design project. In a nominal schedule of 12 weeks, two days and about 10 hours per week, the entire verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer - deserializer, including synthesizable PLLs. Digital VLSI Design With Verilog is all an engineer needs for in-depth understanding of the verilog language: Syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided on the

  19. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    International Nuclear Information System (INIS)

    Due-Hansen, J; Poppe, E; Summanwar, A; Jensen, G U; Breivik, L; Wang, D T; Schjølberg-Henriksen, K; Midtbø, K

    2012-01-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing that resulted in a film with sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms has been developed. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key processes that enable the fabrication of CMUT arrays suitable for applications in for instance intravascular cardiology and gastrointestinal imaging. (paper)

  20. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    International Nuclear Information System (INIS)

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-01-01

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage of the cathode-anode gap and the load current for calculating the electromagnetic energy. Lumped-element circuit model of wire arrays was employed to analyze the electromagnetic energy conversion. Inductance as well as resistance of a wire array during the Z-pinch process was also investigated. Experimental data indicate that the electromagnetic energy is mainly converted to magnetic energy and kinetic energy and ohmic heating energy can be neglected before the final stagnation. The kinetic energy can be responsible for the x-ray radiation before the peak power. After the stagnation, the electromagnetic energy coupled by the load continues increasing and the resistance of the load achieves its maximum of 0.6–1.0 Ω in about 10–20 ns
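
    A hedged sketch of the energy bookkeeping these diagnostics support: coupled electromagnetic energy as the time integral of V·I and magnetic energy as ½L(t)I², with the kinetic-plus-ohmic share taken as the remainder. The waveforms and the linear inductance model are placeholders, and the single-line energy split is a simplification of the paper's lumped-element circuit analysis.

```python
import numpy as np

def energy_balance(t, v_gap, i_load, L_t):
    """Split the electromagnetic energy delivered to a wire-array load.
    t: time [s], v_gap: gap voltage [V], i_load: current [A],
    L_t: load inductance from a lumped-element model [H]."""
    p = v_gap * i_load
    E_em = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))   # trapezoidal integral of V*I
    E_mag = 0.5 * L_t[-1] * i_load[-1] ** 2              # final magnetic energy
    E_other = E_em - E_mag                               # kinetic + ohmic, by difference
    return E_em, E_mag, E_other

# toy 100 ns, 1.5 MA quarter-sine current with a linearly rising inductance
t = np.linspace(0.0, 100e-9, 1001)
i = 1.5e6 * np.sin(np.pi * t / 200e-9)
L = 10e-9 + 5e-9 * t / 100e-9
v = np.gradient(L * i, t)                                # purely inductive toy voltage
print(energy_balance(t, v, i, L))
```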

  1. Optimal control of stretching process of flexible solar arrays on spacecraft based on a hybrid optimization strategy

    Directory of Open Access Journals (Sweden)

    Qijia Yao

    2017-07-01

    Full Text Available The optimal control of multibody spacecraft during the stretching process of solar arrays is investigated, and a hybrid optimization strategy based on Gauss pseudospectral method (GPM and direct shooting method (DSM is presented. First, the elastic deformation of flexible solar arrays was described approximately by the assumed mode method, and a dynamic model was established by the second Lagrangian equation. Then, the nonholonomic motion planning problem is transformed into a nonlinear programming problem by using GPM. By giving fewer LG points, initial values of the state variables and control variables were obtained. A serial optimization framework was adopted to obtain the approximate optimal solution from a feasible solution. Finally, the control variables were discretized at LG points, and the precise optimal control inputs were obtained by DSM. The optimal trajectory of the system can be obtained through numerical integration. Through numerical simulation, the stretching process of solar arrays is stable with no detours, and the control inputs match the various constraints of actual conditions. The results indicate that the method is effective with good robustness. Keywords: Motion planning, Multibody spacecraft, Optimal control, Gauss pseudospectral method, Direct shooting method

  2. Fully Integrated Linear Single Photon Avalanche Diode (SPAD) Array with Parallel Readout Circuit in a Standard 180 nm CMOS Process

    Science.gov (United States)

    Isaak, S.; Bull, S.; Pitter, M. C.; Harrison, Ian.

    2011-05-01

    This paper reports on the development of a SPAD device, fabricated in a UMC 0.18 μm CMOS process, and its subsequent use in an actively quenched single-photon-counting imaging system. A low-doped p- guard ring (t-well layer) encircles the active area to prevent premature reverse breakdown. The array is a 16×1 parallel-output SPAD array, which comprises an actively quenched SPAD circuit in each pixel, with the current value set by an external resistor RRef = 300 kΩ. The SPAD I-V response, ID, was found to increase slowly until VBD was reached at an excess bias voltage Ve = 11.03 V, and then to increase rapidly due to avalanche multiplication. Digital circuitry to control the SPAD array and perform the necessary data processing was designed in VHDL and implemented on an FPGA chip. At room temperature, the dark count was found to be approximately 13 kHz for most of the 16 SPAD pixels and the dead time was estimated to be 40 ns.

  3. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
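
    A minimal sketch of a deterministic-ML style DOA criterion of the kind underlying such methods, assuming a uniform linear array and a single source on a grid; an ordinary sample covariance stands in for the averaged time-frequency (STFD) matrix the paper builds, so this is not the t-f ML estimator itself.

```python
import numpy as np

def steering(theta_deg, n, spacing=0.5):
    """Uniform-linear-array steering vector, element spacing in wavelengths."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.radians(theta_deg)))

def ml_cost(R, angles, n):
    """Deterministic-ML style criterion: residual power after projecting the
    covariance R onto each candidate signal subspace."""
    cost = []
    for th in angles:
        a = steering(th, n)[:, None]
        P = a @ np.linalg.pinv(a)                     # projector onto span{a}
        cost.append(np.real(np.trace((np.eye(n) - P) @ R)))
    return np.array(cost)

# simulated single source at 20 degrees, 8-element array, 200 snapshots
rng = np.random.default_rng(0)
n, snaps = 8, 200
a0 = steering(20.0, n)[:, None]
s = (rng.standard_normal((1, snaps)) + 1j * rng.standard_normal((1, snaps))) * np.sqrt(5.0)
x = a0 @ s + (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps))) / np.sqrt(2)
R = x @ x.conj().T / snaps       # replace with an averaged time-frequency (STFD) matrix
angles = np.linspace(-90, 90, 361)
print("estimated DOA:", angles[np.argmin(ml_cost(R, angles, n))])
```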

  4. Dependently typed array programs don’t go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2009-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  5. Dependently typed array programs don't go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2008-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  6. A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce

    Science.gov (United States)

    Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei

    2016-01-01

    Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in their original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
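
    A minimal sketch of the indexing idea, assuming a row-major (time, lat, lon) layout: a small table maps logical (variable, time, spatial tile) coordinates to byte offsets in the native files, so a spatiotemporal query touches only the relevant partitions. Field names, tile sizes, and the file layout are illustrative, not MERRA's actual format.

```python
import numpy as np

def build_index(files, var, n_time, tile, grid_shape, dtype=np.float32):
    """Map (variable, file, time, lat_tile, lon_tile) to the byte offset of the
    tile's first element in a row-major (time, lat, lon) binary file."""
    index, itemsize = {}, np.dtype(dtype).itemsize
    n_lat, n_lon = grid_shape
    for f in files:
        for t in range(n_time):
            for i in range(0, n_lat, tile):
                for j in range(0, n_lon, tile):
                    offset = itemsize * ((t * n_lat + i) * n_lon + j)
                    index[(var, f, t, i // tile, j // tile)] = (offset, (tile, tile))
    return index

def query(index, var, files, t_range, lat_tiles, lon_tiles):
    """Yield only the partitions a MapReduce job needs for this time range and box."""
    for f in files:
        for t in range(*t_range):
            for i in lat_tiles:
                for j in lon_tiles:
                    key = (var, f, t, i, j)
                    if key in index:
                        yield f, index[key]

idx = build_index(["merra_1980.bin"], "T2M", n_time=4, tile=90, grid_shape=(360, 540))
print(list(query(idx, "T2M", ["merra_1980.bin"], (1, 3), lat_tiles=[0, 1], lon_tiles=[2])))
```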

  7. A Novel Self-aligned and Maskless Process for Formation of Highly Uniform Arrays of Nanoholes and Nanopillars

    Directory of Open Access Journals (Sweden)

    Wu Wei

    2008-01-01

    Full Text Available Fabrication of large-area periodic structures with deep sub-wavelength features is required in many applications such as solar cells, photonic crystals, and artificial kidneys. We present a low-cost and high-throughput process for realization of 2D arrays of deep sub-wavelength features using a self-assembled monolayer of hexagonally close-packed (HCP) silica and polystyrene microspheres. This method utilizes the microspheres as super-lenses to fabricate nanohole and pillar arrays over large areas on conventional positive and negative photoresist, and with a high aspect ratio. The period and diameter of the holes and pillars formed with this technique can be controlled precisely and independently. We demonstrate that the method can produce HCP arrays of holes of sub-250 nm size using a conventional photolithography system with a broadband UV source centered at 400 nm. We also present our 3D FDTD modeling, which shows good agreement with the experimental results.

  8. Solution-Processed Wide-Bandgap Organic Semiconductor Nanostructures Arrays for Nonvolatile Organic Field-Effect Transistor Memory.

    Science.gov (United States)

    Li, Wen; Guo, Fengning; Ling, Haifeng; Liu, Hui; Yi, Mingdong; Zhang, Peng; Wang, Wenjun; Xie, Linghai; Huang, Wei

    2018-01-01

    In this paper, the development of an organic field-effect transistor (OFET) memory device based on isolated and ordered nanostructure (NS) arrays of the wide-bandgap (WBG) small-molecule organic semiconductor material [2-(9-(4-(octyloxy)phenyl)-9H-fluoren-2-yl)thiophene]3 (WG3) is reported. The WG3 NSs are prepared by phase separation by spin-coating blend solutions of WG3/trimethylolpropane (TMP), and then introduced as charge storage elements for nonvolatile OFET memory devices. Compared to the OFET memory device with a smooth WG3 film, the device based on WG3 NS arrays exhibits significant improvements in memory performance including a larger memory window (≈45 V), faster switching speed (≈1 s), stable retention capability (>10^4 s), and reliable switching properties. A quantitative study of the WG3 NS morphology reveals that the enhanced memory performance is attributed to the improved charge trapping/charge-exciton annihilation efficiency induced by the increased contact area between the WG3 NSs and the pentacene layer. This versatile solution-processing approach to preparing WG3 NS arrays as charge trapping sites allows for fabrication of high-performance nonvolatile OFET memory devices, which could be applicable to a wide range of WBG organic semiconductor materials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Nanofabrication and characterization of ZnO nanorod arrays and branched microrods by aqueous solution route and rapid thermal processing

    International Nuclear Information System (INIS)

    Lupan, Oleg; Chow, Lee; Chai, Guangyu; Roldan, Beatriz; Naitabdi, Ahmed; Schulte, Alfons; Heinrich, Helge

    2007-01-01

    This paper presents an inexpensive and fast fabrication method for one-dimensional (1D) ZnO nanorod arrays and branched two-dimensional (2D), three-dimensional (3D) - nanoarchitectures. Our synthesis technique includes the use of an aqueous solution route and post-growth rapid thermal annealing. It permits rapid and controlled growth of ZnO nanorod arrays of 1D - rods, 2D - crosses, and 3D - tetrapods without the use of templates or seeds. The obtained ZnO nanorods are uniformly distributed on the surface of Si substrates and individual or branched nano/microrods can be easily transferred to other substrates. Process parameters such as concentration, temperature and time, type of substrate and the reactor design are critical for the formation of nanorod arrays with thin diameter and transferable nanoarchitectures. X-ray diffraction, scanning electron microscopy, X-ray photoelectron spectroscopy, transmission electron microscopy and Micro-Raman spectroscopy have been used to characterize the samples

  10. High-energy heavy ion testing of VLSI devices for single event ...

    Indian Academy of Sciences (India)

    Unknown

    The paper describes the high-energy heavy ion radiation testing of VLSI devices for single event upset (SEU) ... The experimental set-up employed to produce a low flux of heavy ions, viz. silicon ... through which they pass, leaving behind a wake of elec- ... for use in the Bus Management Unit (BMU) and bulk CMOS ... was scheduled.

  11. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam; Ghoneim, Mohamed T.; El Boghdady, Nawal; Halawa, Sarah; Iskander, Sophinese M.; Anis, Mohab H.

    2011-01-01

    ...-designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result ...

  12. An area-efficient path memory structure for VLSI Implementation of high speed Viterbi decoders

    DEFF Research Database (Denmark)

    Paaske, Erik; Pedersen, Steen; Sparsø, Jens

    1991-01-01

    Path storage and selection methods for Viterbi decoders are investigated with special emphasis on VLSI implementations. Two well-known algorithms, the register exchange, algorithm, REA, and the trace back algorithm, TBA, are considered. The REA requires the smallest number of storage elements...

  13. VLSI top-down design based on the separation of hierarchies

    NARCIS (Netherlands)

    Spaanenburg, L.; Broekema, A.; Leenstra, J.; Huys, C.

    1986-01-01

    Despite the presence of structure, interactions between the three views on VLSI design still lead to lengthy iterations. By separating the hierarchies for the respective views, the interactions are reduced. This separated hierarchy allows top-down design with functional abstractions as exemplified

  14. Systolic processing and an implementation for signal and image processing

    Energy Technology Data Exchange (ETDEWEB)

    Kulkarni, A.V.; Yen, D.W.L.

    1982-10-01

    Many signal and image processing applications impose a severe demand on the I/O bandwidth and computation power of general-purpose computers. The systolic concept offers guidelines in building cost-effective systems that balance I/O with computation. The resulting simplicity and regularity of such systems leads to modular designs suitable for VLSI implementation. The authors describe a linear systolic array capable of evaluating a large class of inner-product functions used in signal and image processing. These include matrix multiplications, multidimensional convolutions using fixed or time-varying kernels, as well as various nonlinear functions of vectors. The system organization of a working prototype is also described. 11 references.
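
    The core operation of such a linear systolic array is a pipelined multiply-accumulate that evaluates inner products y[i] = Σ_k w[k]·x[i+k], the building block of 1-D convolution. The Python sketch below is illustrative only (it is not the authors' hardware design); it serially simulates partial sums marching through a chain of cells, one input sample per clock cycle.

```python
# Illustrative sketch (not the authors' hardware): a linear systolic pipeline
# evaluating y[i] = sum_k w[k] * x[i + k], the inner product underlying 1-D
# convolution with a fixed kernel. One multiply-accumulate "cell" per weight.

def systolic_convolution(x, w):
    n_cells = len(w)
    partial = [0.0] * n_cells          # partial sums resident in the cells
    outputs = []
    for t, sample in enumerate(x):     # one input sample per clock cycle
        # Once the pipeline is full, the last cell emits a finished inner product.
        if t >= n_cells - 1:
            outputs.append(partial[-1] + w[-1] * sample)
        # Partial sums shift one cell toward the output while accumulating.
        for k in range(n_cells - 1, 0, -1):
            partial[k] = partial[k - 1] + w[k - 1] * sample
        partial[0] = 0.0
    return outputs

print(systolic_convolution([1, 2, 3, 4, 5], [1, 0, -1]))   # [-2.0, -2.0, -2.0]
```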

  15. A new process for fabricating nanodot arrays on selective regions with diblock copolymer thin film

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dae-Ho [Department of Materials Science and Engineering, Polymer Research Institute, Pohang University of Science and Technology, San 31, Hyoja-Dong, Nam-Gu, Pohang 790-784 (Korea, Republic of)

    2007-09-12

    A procedure for micropatterning a single layer of nanodot arrays in selective regions is demonstrated by using thin films of polystyrene-b-poly(t-butyl acrylate) (PS-b-PtBA) diblock copolymer. The thin-film self-assembled into hexagonally arranged PtBA nanodomains in a PS matrix on a substrate by solvent annealing with 1,4-dioxane. The PtBA nanodomains were converted into poly(acrylic acid) (PAA) having carboxylic-acid-functionalized nanodomains by exposure to hydrochloric acid vapor, or were removed by ultraviolet (UV) irradiation to generate vacant sites without any functional groups due to the elimination of PtBA domains. By sequential treatment with aqueous sodium bicarbonate and aqueous zinc acetate solution, zinc cations were selectively loaded only on the carboxylic-acid-functionalized nanodomains prepared via hydrolysis. Macroscopic patterning through a photomask via UV irradiation, hydrolysis, sequential zinc cation loading and calcination left a nanodot array of zinc oxide on a selectively UV-shaded region.

  16. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Choi, Yong, E-mail: ychoi.image@gmail.com [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Hong, Key Jo [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kang, Jihoon [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Jung, Jin Ho [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Huh, Youn Suk [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Lim, Hyun Keong; Kim, Sang Su [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kim, Byung-Tae [Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Chung, Yonghyun [Department of Radiological Science, Yonsei University College of Health Science, 234 Meaji, Heungup Wonju, Kangwon-Do 220-710 (Korea, Republic of)

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time to digital converters (TDC) have been employed to detect gamma ray signal arrival time, whereas anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. As compared to PMT the Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMT and anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4 × 4 array, pixel size: 3 mm × 3 mm) coupled 1:1 with LYSO scintillators (4 × 4 array, pixel size: 3 mm × 3 mm × 20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce GAPD channel number and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information all together for each GAPD channel. For the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and
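
    As a rough software analogue of the FPGA packaging step described above, the sketch below packs the per-event fields (arrival time, baseline, energy, GAPD channel ID) into a fixed-width record. The field widths and byte layout are illustrative assumptions, not the actual firmware format.

```python
import struct

# Illustrative only: pack one gamma-ray event into a fixed-width record holding
# arrival time, baseline, energy and GAPD channel ID, mimicking the kind of data
# package assembled in the FPGA. Field widths/layout are assumptions.

def pack_event(arrival_time_ns, baseline, energy, channel_id):
    # '<QHHB' = little-endian: 64-bit time, 16-bit baseline, 16-bit energy, 8-bit channel
    return struct.pack('<QHHB', arrival_time_ns, baseline, energy, channel_id)

def unpack_event(record):
    return struct.unpack('<QHHB', record)

rec = pack_event(arrival_time_ns=123456789, baseline=512, energy=3400, channel_id=17)
print(len(rec), unpack_event(rec))   # 13 (123456789, 512, 3400, 17)
```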

  17. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications

    Energy Technology Data Exchange (ETDEWEB)

    Wlodarczyk, Krystian L., E-mail: K.L.Wlodarczyk@hw.ac.uk; Maier, Robert R. J.; Hand, Duncan P. [Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Bryce, Emma; Hutson, David; Kirk, Katherine [School of Engineering and Science, University of the West of Scotland, Paisley PA1 2BE (United Kingdom); Schwartz, Noah; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil [UK Astronomy Technology Centre, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Strachan, Mel [Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); UK Astronomy Technology Centre, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom)

    2014-02-15

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  18. A High Performance Backend for Array-Oriented Programming on Next-Generation Processing Units

    DEFF Research Database (Denmark)

    Lund, Simon Andreas Frimann

    The financial crisis, which started in 2008, spawned the HIPERFIT research center as a preventive measure against future financial crises. The goal of prevention is to be met by improving mathematical models for finance, the verifiable description of them in domain-specific languages...... and the efficient execution of them on high performance systems. This work investigates the requirements for, and the implementation of, a high performance backend supporting these goals. This involves an outline of the hardware available today, in the near future and how to program it for high performance....... The main challenge is to bridge the gaps between performance, productivity and portability. A declarative high-level array-oriented programming model is explored to achieve this goal and a backend implemented to support it. Different strategies to the backend design and application of optimizations...

  19. Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples

    Science.gov (United States)

    Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micron/s) and a speed decrease (MICAST7-from 20 to 10 micron/s). Distribution and morphology of primary dendrites is currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  20. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  1. Driving a car with custom-designed fuzzy inferencing VLSI chips and boards

    Science.gov (United States)

    Pin, Francois G.; Watanabe, Yutaka

    1993-01-01

    Vehicle control in a-priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties can not be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds, and therefore, making control of 'reflex-type' of motions envisionable. The use of these boards and the approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation in a-priori unknown environments are first discussed. Then how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a-priori unknown environments on the basis of sparse and imprecise sensor data is described. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoors and outdoors experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or safety enhancing driver's aid using the new fuzzy inferencing hardware system and some human
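
    The rule evaluation that the fuzzy chips perform in parallel can be imitated, serially and only schematically, in software. In the sketch below the sonar inputs, membership functions and the two steering rules are invented placeholders, not the rule base used on the test-bed vehicle.

```python
# Schematic sketch of fuzzy inference with weighted-average defuzzification.
# The sonar inputs, rules and membership functions are invented placeholders,
# not the rule base implemented on the VLSI fuzzy inferencing boards.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering_command(left_range_m, right_range_m):
    near_left = tri(left_range_m, 0.0, 0.0, 2.0)     # obstacle close on the left
    near_right = tri(right_range_m, 0.0, 0.0, 2.0)   # obstacle close on the right
    # Rule 1: IF near_left THEN steer right (+30 deg)
    # Rule 2: IF near_right THEN steer left (-30 deg)
    strengths = [near_left, near_right]
    outputs = [+30.0, -30.0]
    total = sum(strengths)
    return sum(s * o for s, o in zip(strengths, outputs)) / total if total else 0.0

print(steering_command(0.5, 3.0))   # obstacle on the left -> 30.0 (steer right)
```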

  2. Design Implementation and Testing of a VLSI High Performance ASIC for Extracting the Phase of a Complex Signal

    National Research Council Canada - National Science Library

    Altmeyer, Ronald

    2002-01-01

    This thesis documents the research, circuit design, and simulation testing of a VLSI ASIC which extracts phase angle information from a complex sampled signal using the arctangent relationship φ = tan⁻¹(Q/I) ...
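
    In software the same relationship reduces to a four-quadrant arctangent of the quadrature pair. The snippet below is a floating-point reference computation, not the ASIC's fixed-point datapath (hardware implementations commonly use a CORDIC-style iteration instead).

```python
import math

# Reference computation of the phase of a complex sample from its I/Q pair,
# phi = atan2(Q, I). This is not the ASIC's fixed-point datapath.
def phase(i, q):
    return math.atan2(q, i)   # four-quadrant arctangent, result in (-pi, pi]

print(math.degrees(phase(1.0, 1.0)))    # 45.0
print(math.degrees(phase(-1.0, 1.0)))   # 135.0
```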

  3. Neuromorphic VLSI vision system for real-time texture segregation.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2008-10-01

    The visual system of the brain can perceive an external scene in real-time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina in order to produce Gabor-like receptive fields that are tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real-time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system and the filtered images were combined to segregate the areas of different texture orientation with the aid of the FPGA. The present system is also useful for the investigation of the functions of the higher-order cells that can be obtained by combining the simple and complex cells.
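
    A minimal software counterpart of the orientation-selective filtering stage is sketched below: it builds two orthogonally oriented Gabor kernels and compares their response energies, which is the gist of the texture-segregation step. The kernel parameters are illustrative and do not reproduce the chip's receptive fields; SciPy is assumed to be available for the 2-D convolution.

```python
import numpy as np
from scipy.signal import convolve2d   # assumes SciPy is available

# Minimal sketch of orientation-selective filtering with two orthogonal Gabor
# kernels, loosely mirroring the simple-cell stage; parameters are illustrative.

def gabor_kernel(theta, size=15, sigma=3.0, wavelength=8.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def orientation_energy(image, theta):
    return convolve2d(image, gabor_kernel(theta), mode='same') ** 2

# A texture of vertical stripes responds more strongly to the vertically tuned filter.
img = np.tile(np.sign(np.sin(2 * np.pi * np.arange(64) / 8)), (64, 1))
print(orientation_energy(img, 0.0).mean() > orientation_energy(img, np.pi / 2).mean())  # True
```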

  4. Double-sided anodic titania nanotube arrays: a lopsided growth process.

    Science.gov (United States)

    Sun, Lidong; Zhang, Sam; Sun, Xiao Wei; Wang, Xiaoyan; Cai, Yanli

    2010-12-07

    In the past decade, the pore diameter of anodic titania nanotubes was reported to be influenced by a number of factors in organic electrolyte, for example, applied potential, working distance, water content, and temperature. All these were closely related to potential drop in the organic electrolyte. In this work, the essential role of electric field originating from the potential drop was directly revealed for the first time using a simple two-electrode anodizing method. Anodic titania nanotube arrays were grown simultaneously at both sides of a titanium foil, with tube length being longer at the front side than that at the back side. This lopsided growth was attributed to the higher ionic flux induced by electric field at the front side. Accordingly, the nanotube length was further tailored to be comparable at both sides by modulating the electric field. These results are promising to be used in parallel configuration dye-sensitized solar cells, water splitting, and gas sensors, as a result of high surface area produced by the double-sided architecture.

  5. Single event upset susceptibilities of latchup immune CMOS process programmable gate arrays

    Science.gov (United States)

    Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.; Tsubota, T. K.

    Single event upsets (SEU) and latchup susceptibilities of complementary metal oxide semiconductor programmable gate arrays (CMOS PPGA's) were measured at the Lawrence Berkeley Laboratory 88-in. cyclotron facility with Xe (603 MeV), Cu (290 MeV), and Ar (180 MeV) ion beams. The PPGA devices tested were those which may be used in space. Most of the SEU measurements were taken with a newly constructed tester called the Bus Access Storage and Comparison System (BASACS) operating via a Macintosh II computer. When BASACS finds that an output does not match a prerecorded pattern, the state of all outputs, position in the test cycle, and other necessary information is transmitted and stored in the Macintosh. The upset rate was kept between 1 and 3 per second. After a sufficient number of errors are stored, the test is stopped and the total fluence of particles and total errors are recorded. The device power supply current was closely monitored to check for occurrence of latchup. Results of the tests are presented, indicating that some of the PPGA's are good candidates for selected space applications.

  6. High-resolution focal plane array IR detection modules and digital signal processing technologies at AIM

    Science.gov (United States)

    Cabanski, Wolfgang A.; Breiter, Rainer; Koch, R.; Mauk, Karl-Heinz; Rode, Werner; Ziegler, Johann; Eberhardt, Kurt; Oelmaier, Reinhard; Schneider, Harald; Walther, Martin

    2000-07-01

    Full video format focal plane array (FPA) modules with up to 640 × 512 pixels have been developed for high-resolution imaging applications, either in mercury cadmium telluride (MCT) mid-wave infrared (MWIR) technology or in platinum silicide (PtSi) and quantum well infrared photodetector (QWIP) technology as low-cost alternatives to MCT for high-performance IR imaging in the MWIR or long-wave spectral band (LWIR). For the QWIPs, a new photovoltaic technology was introduced for improved NETD performance and higher dynamic range. MCT units provide fast frame rates of >100 Hz together with state-of-the-art thermal resolution (NETD). Hardware platforms and software for image visualization and nonuniformity correction, including scene-based self-learning algorithms, had to be developed to cope with the high data rates of up to 18 Mpixels/s with 14-bit deep data, allowing nonlinear effects to be taken into account so as to reach the full NETD by accurate reduction of residual fixed-pattern noise. The main features of these modules are summarized together with measured performance data for long-range detection systems with moderately fast to slow F-numbers like F/2.0 - F/3.5. An outlook shows the most recent activities at AIM, heading for multicolor and faster frame rate detector modules based on MCT devices.

  7. Optical technology for microwave applications VI and optoelectronic signal processing for phased-array antennas III; Proceedings of the Meeting, Orlando, FL, Apr. 20-23, 1992

    Science.gov (United States)

    Yao, Shi-Kay; Hendrickson, Brian M.

    The following topics related to optical technology for microwave applications are discussed: advanced acoustooptic devices, signal processing device technologies, optical signal processor technologies, microwave and optomicrowave devices, advanced lasers and sources, wideband electrooptic modulators, and wideband optical communications. The topics considered in the discussion of optoelectronic signal processing for phased-array antennas include devices, signal processing, and antenna systems.

  8. Proceedings of the Adaptive Sensor Array Processing (ASAP) Workshop 12-14 March 1997. Volume 1

    National Research Council Canada - National Science Library

    O'Donovan, G

    1997-01-01

    ... was included in the first and third ASAP workshops, ASAP has traditionally concentrated on radar; core topics include airborne radar testbed systems, space-time adaptive processing, multipath jamming...

  9. Hydrogen Detection With a Gas Sensor ArrayProcessing and Recognition of Dynamic Responses Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Gwiżdż Patryk

    2015-03-01

    An array consisting of four commercial gas sensors with target specifications for hydrocarbons, ammonia, alcohol and explosive gases has been constructed and tested. The sensors in the array operate in the dynamic mode under temperature modulation from 350°C to 500°C. Changes in the sensor operating temperature lead to distinct resistance responses affected by the gas type, its concentration and the humidity level. The measurements are performed at various hydrogen (17-3000 ppm), methane (167-3000 ppm) and propane (167-3000 ppm) concentrations at relative humidity levels of 0-75% RH. The measured dynamic response signals are further processed with the Discrete Fourier Transform. Absolute values of the dc component and the first five harmonics of each sensor are analysed by a feed-forward back-propagation neural network. The ultimate aim of this research is to achieve reliable hydrogen detection despite interference from humidity and residual gases.
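
    A rough software rendering of the feature path described above (DFT of the temperature-modulated response, then the dc component and first five harmonic magnitudes per sensor) is sketched below; the classifier that would consume these features, and the training data, are placeholders and are not reproduced here.

```python
import numpy as np

# Sketch of the described feature path: take the DFT of one modulation period of a
# sensor's resistance response and keep |dc| plus the first five harmonic magnitudes.
def harmonic_features(response):
    spectrum = np.fft.rfft(response)
    return np.abs(spectrum[:6])               # dc + first five harmonics for one sensor

# Feature vector for a 4-sensor array = concatenation of each sensor's 6 magnitudes.
def array_features(responses):                # responses: list of four 1-D arrays
    return np.concatenate([harmonic_features(r) for r in responses])

# A small feed-forward network (e.g. scikit-learn's MLPClassifier) could then be
# trained on these 24 features to separate hydrogen from interfering gases.
fake_response = 1.0 + 0.1 * np.sin(2 * np.pi * np.arange(128) / 128)
print(array_features([fake_response] * 4).shape)   # (24,)
```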

  10. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  11. Physical features of the wire-array Z-pinch plasmas imploding process

    International Nuclear Information System (INIS)

    Gao Chunming; Feng Kaiming

    2001-01-01

    In the course of research on controlled fusion reactors, scientists found that Z-pinch plasmas can produce very strong X-rays compared with other X-ray sources. In studying the imploding process, the snowplow model and the Haines model are introduced and validated. Regarding the emission of X-rays, several ways of discharging X-rays are carefully analyzed and the relevant theories are verified. For the simulations, a one-dimensional model is used in the codes, the matching relationships are calculated, and the imploding process is simulated. Some useful and reasonable results are obtained.
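
    For background, the snowplow picture mentioned above is usually written as a momentum balance in which the imploding sheath sweeps up the mass it overruns while being driven inward by the magnetic pressure of the discharge current. A commonly quoted form, stated here as general background rather than taken from the paper, is:

```latex
% Snowplow momentum balance for a cylindrical current sheath of radius r(t),
% driven by discharge current I(t); m(r) is the line mass swept up from an
% initial uniform density rho_0 inside radius r_0.
\frac{d}{dt}\!\left[m(r)\,\frac{dr}{dt}\right] = -\,\frac{\mu_0\, I(t)^2}{4\pi r},
\qquad
m(r) = \pi \rho_0 \left(r_0^{2} - r^{2}\right)
```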

  12. A Module Experimental Process System Development Unit (MEPSDU). [flat plate solar arrays

    Science.gov (United States)

    1981-01-01

    The development of a cost effective process sequence that has the potential for the production of flat plate photovoltaic modules which meet the price goal in 1986 of 70 cents or less per Watt peak is described. The major accomplishments include (1) an improved AR coating technique; (2) the use of sand blast back clean-up to reduce clean up costs and to allow much of the Al paste to serve as a back conductor; and (3) the development of wave soldering for use with solar cells. Cells were processed to evaluate different process steps, a cell and minimodule test plan was prepared and data were collected for preliminary Samics cost analysis.

  13. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    Science.gov (United States)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs

  14. COSMIC INFRARED BACKGROUND FLUCTUATIONS IN DEEP SPITZER INFRARED ARRAY CAMERA IMAGES: DATA PROCESSING AND ANALYSIS

    International Nuclear Information System (INIS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs inhabited by the populations producing these

  15. Image processing system design for microcantilever-based optical readout infrared arrays

    Science.gov (United States)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size and simple fabrication. In addition, theory shows that the technology offers high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. The paper mainly focuses on an image capturing and processing system for this optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build our image processing core hardware platform on TI's high-performance DSP chip, the TMS320DM642, and then design our image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS image sensor. Finally, we use Intel's network transceiver device, the LXT971A, to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design our video capture driver program based on TI's class-mini driver and the network output program based on the NDK kit for image capturing, processing and transmission. The experiment shows that the system has the advantages of high capturing resolution and fast processing speed. The speed of the network transmission is up to 100 Mbps.

  16. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  17. Numerical microstructural analysis of automotive-grade steels when joined with an array of welding processes

    International Nuclear Information System (INIS)

    Gould, J.E.; Khurana, S.P.; Li, T.

    2004-01-01

    Weld strength, formability, and impact resistance for joints on automotive steels are dependent on the underlying microstructure. A martensitic weld area is often a precursor to reduced mechanical performance. In this paper, efforts are made to predict underlying joint microstructures for a range of processing approaches, steel types, and gauges. This was done first by calculating cooling rates for some typical automotive processes [resistance spot welding (RSW), resistance mash seam welding (RMSEW), laser beam welding (LBW), and gas metal arc welding (GMAW)]. Then, critical cooling rates for martensite formation were calculated for a range of automotive steels using an available thermodynamically based phase transformation model. These were then used to define combinations of process type, steel type, and gauge where welds could be formed avoiding martensite in the weld area microstructure

  18. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  19. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam

    2011-12-01

    Power dissipation poses a great challenge for VLSI designers. With the intense down-scaling of technology, the total power consumption of the chip is made up primarily of leakage power dissipation. This paper proposes combining a custom-designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result of implementing this novel power gating technique, a standby leakage power reduction of 99% and energy savings of 33.3% are achieved. Finally, the possible effects of surge currents and ground bounce noise are studied. These findings allow longer operation times for battery-operated systems characterized by long standby periods. © 2011 IEEE.

  20. Towards an Analogue Neuromorphic VLSI Instrument for the Sensing of Complex Odours

    Science.gov (United States)

    Ab Aziz, Muhammad Fazli; Harun, Fauzan Khairi Che; Covington, James A.; Gardner, Julian W.

    2011-09-01

    Almost all electronic nose instruments reported today employ pattern recognition algorithms written in software and run on digital processors, e.g. microprocessors, microcontrollers or FPGAs. Conversely, in this paper we describe the analogue VLSI implementation of an electronic nose through the design of a neuromorphic olfactory chip. The modelling, design and fabrication of the chip have already been reported. Here a smart interface has been designed and characterised for this neuromorphic chip. Thus we can demonstrate the functionality of the aVLSI neuromorphic chip, producing differing principal-neuron firing patterns in response to real sensor data. Further work is directed towards integrating 9 separate neuromorphic chips to create a large neuronal network to solve more complex olfactory problems.

  1. Flat-plate solar array project process development area process research of non-CZ silicon material

    Science.gov (United States)

    1985-01-01

    Three sets of samples were laser processed and then cell processed. The laser processing was carried out on P-type and N-type web at laser power levels from 0.5 joule/sq cm to 2.5 joule/sq cm. Six different liquid dopants were tested (3 phosphorus dopants, 2 boron dopants, 1 aluminum dopant). The laser processed web strips were fabricated into solar cells immediately after laser processing and after various annealing cycles. Spreading resistance measurements made on a number of these samples indicate that the N(+)P (phosphorus doped) junction is approx. 0.2 micrometers deep and suitable for solar cells. However, the P(+)N (or P(+)P) junction is very shallow (<0.1 micrometers) with a low surface concentration and resulting high resistance. Due to this effect, the fabricated cells are of low efficiency. The maximum efficiency attained was 9.6% on P-type web after a 700 °C anneal. The main reason for the low efficiency was a high series resistance in the cell due to a high-resistance back contact.

  2. Optimizing laser beam profiles using micro-lens arrays for efficient material processing: applications to solar cells

    Science.gov (United States)

    Hauschild, Dirk; Homburg, Oliver; Mitra, Thomas; Ivanenko, Mikhail; Jarczynski, Manfred; Meinschien, Jens; Bayer, Andreas; Lissotschenko, Vitalij

    2009-02-01

    High power laser sources are used in various production tools for microelectronic products and solar cells, including annealing, lithography, edge isolation, dicing and patterning. Besides the right choice of the laser source, suitable high-performance optics for generating the appropriate beam profile and intensity distribution are of high importance for the right processing speed, quality and yield. For industrial applications, an adequate understanding of the physics of the light-matter interaction behind the process is equally important. Simulations of the tool performance carried out in advance can minimize technical and financial risk as well as lead times for prototyping and introduction into series production. LIMO has developed its own software founded on the Maxwell equations, taking into account all important physical aspects of the laser-based process: the light source, the beam shaping optical system and the light-matter interaction. Based on this knowledge, together with a unique free-form micro-lens array production technology and patented micro-optics beam shaping designs, a number of novel solar cell production tool sub-systems have been built. The basic functionalities, design principles and performance results are presented with a special emphasis on resilience, cost reduction and process reliability.

  3. ZnO nanorods arrays with Ag nanoparticles on the (002) plane derived by liquid epitaxy growth and electrodeposition process

    International Nuclear Information System (INIS)

    Yin Xingtian; Que Wenxiu; Shen Fengyu

    2011-01-01

    Well-aligned ZnO nanorod (NR) arrays with Ag nanoparticles (NPs) on the (002) plane are obtained by combining a liquid epitaxy technique with an electrodeposition process. A cyclic voltammetry study is employed to understand the electrochemical behaviors of the electrodeposition system, and a potentiostatic method is employed to deposit silver NPs on the ZnO NRs in an electrolyte with an Ag⁺ concentration of 1 mM. X-ray diffraction analysis is used to study the crystalline properties of the as-prepared samples, and energy-dispersive X-ray analysis is adopted to confirm the composition at the surface of the deposited samples. Results indicate only a small quantity of silver can be deposited on the surface of the samples. The effects of the deposition potential and time on the morphological properties of the resultant Ag NPs/ZnO NRs are investigated in detail. Scanning electron microscopy images and transmission electron microscopy images indicate that Ag NPs deposited on the (002) plane of the ZnO NRs with a large dispersion in diameter can be obtained by a single potentiostatic deposition process, while dense Ag NPs with a much smaller diameter dispersion on the top of the ZnO NRs, most of which are located on the conical tips of the ZnO NRs, can be obtained by a two-step potentiostatic deposition process. The mechanism of this deposition process is also suggested.

  4. An analog VLSI real time optical character recognition system based on a neural architecture

    International Nuclear Information System (INIS)

    Bo, G.; Caviglia, D.; Valle, M.

    1999-01-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system

  5. An analog VLSI real time optical character recognition system based on a neural architecture

    Energy Technology Data Exchange (ETDEWEB)

    Bo, G.; Caviglia, D.; Valle, M. [Genoa Univ. (Italy). Dip. of Biophysical and Electronic Engineering

    1999-03-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  6. First results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Anzivino, G.; Horisberger, R.; Hubbeling, L.; Hyams, B.; Parker, S.; Breakstone, A.; Litke, A.M.; Walker, J.T.; Bingefors, N.

    1986-01-01

    A 256-strip silicon detector with 25 μm strip pitch, connected to two 128-channel NMOS VLSI chips (Microplex), has been tested using straight-through tracks from a ruthenium beta source. The readout channels have a pitch of 47.5 μm. A single multiplexed output provides voltages proportional to the integrated charge from each strip. The most probable signal height from the beta traversals is approximately 14 times the rms noise in any single channel. (orig.)

  7. International Conference on VLSI, Communication, Advanced Devices, Signals & Systems and Networking

    CERN Document Server

    Shirur, Yasha; Prasad, Rekha

    2013-01-01

    This book is a collection of papers presented by renowned researchers, keynote speakers and academicians in the International Conference on VLSI, Communication, Analog Designs, Signals and Systems, and Networking (VCASAN-2013), organized by B.N.M. Institute of Technology, Bangalore, India during July 17-19, 2013. The book provides global trends in cutting-edge technologies in electronics and communication engineering. The content of the book is useful to engineers, researchers and academicians as well as industry professionals.

  8. A VLSI-Based High-Performance Raster Image System.

    Science.gov (United States)

    1986-05-08

    and data in broadcast form to the array of memory chips in the frame buffer, shown in the bottom block. This is simply a physical structure to hold up...Principal Investigator: John Poulton Collaboration on algorithm development: Prof. Jack Goldfeather (Dept. of Mathematics, Carleton College ...1983) Cheng-Hong Hsieh (MS, Computer Science, May, 1985) Jeff P. Hultquist Susan Spach Undergraduate Research Assistant: Sonya Holder (BS, Physics, May

  9. Development of a Process for a High Capacity Arc Heater Production of Silicon for Solar Arrays

    Science.gov (United States)

    Reed, W. H.

    1979-01-01

    A program was established to develop a high temperature silicon production process using existing electric arc heater technology. Silicon tetrachloride and a reductant (sodium) are injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction is expected to occur and proceed essentially to completion, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection were developed. Included in this report are: test system preparation; testing; injection techniques; kinetics; reaction demonstration; conclusions; and the project status.

  10. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    Science.gov (United States)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  11. Low cost solar array project production process and equipment task: A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which will provide a substantially improved cell packing factor. Potential shaded cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSDU was refined and equipment design and specification work was completed. SAMICS cost analysis work accelerated, Format A's were prepared and computer simulations completed. Design work on the automated cell interconnect station was focused on bond technique selection experiments.

  12. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-01

    to develop novel co-prime sampling and array design strategies that achieve high-resolution estimation of spectral power distributions and signal...by the array geometry and the frequency offset. We overcome this limitation by introducing a novel sparsity-based multi-target localization approach...estimation using a sparse uniform linear array with two CW signals of co-prime frequencies,” IEEE International Workshop on Computational Advances

  13. Development of a process for high capacity arc heater production of silicon for solar arrays

    Science.gov (United States)

    Meyer, T. N.

    1980-01-01

    A high temperature silicon production process using existing electric arc heater technology is discussed. Silicon tetrachloride and a reductant, liquid sodium, were injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction occurred, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection of the molten silicon were developed. The desired degree of separation was not achieved. The electrical, control and instrumentation, cooling water, gas, SiCl4, and sodium systems are discussed. The plasma reactor, silicon collection, effluent disposal, the gas burnoff stack, and decontamination and safety are also discussed. Procedure manuals, shakedown testing, data acquisition and analysis, product characterization, disassembly and decontamination, and component evaluation are reviewed.

  14. Site Effect Assessment of Earthquake Ground Motion Based on Advanced Data Processing of Microtremor Array Measurements

    Science.gov (United States)

    Liu, L.; He, K.; Mehl, R.; Wang, W.; Chen, Q.

    2008-12-01

    High-resolution near-surface geologic information is essential for earthquake ground motion prediction. The near-surface geology forms the critical constituent influencing seismic wave propagation, which is known as the local site effect. We have collected microtremor data at over 1000 sites in the Beijing area for extracting the much needed earthquake engineering parameters (primarily sediment thickness, with shear wave velocity profiling at a few important control points) in this heavily populated urban area. Advanced data processing algorithms are employed at various stages in assessing the local site effect on earthquake ground motion. First, we used the empirical mode decomposition (EMD), also known as the Hilbert-Huang transform (HHT), to enhance the microtremor data analysis by excluding local transients and continuous monochromatic industrial noise. With this enhancement we have significantly increased the number of data points useful in delineating sediment thickness in this area. Second, we have used the cross-correlation of microtremor data acquired for pairs of adjacent sites to generate a 'pseudo-reflection' record, which can be treated as the Green's function of the 1D layered earth model at the site. The sediment thickness information obtained this way is also consistent with the results obtained by the horizontal-to-vertical spectral ratio method (HVSR). For most sites in this area, we can achieve 'self-consistent' results among different processing schemes regarding the sediment thickness - the fundamental information to be used in assessing the local site effect. Finally, the pseudo-spectral time domain method was used to simulate the seismic wave propagation caused by a scenario earthquake in this area - the 1679 M8 Sanhe-Pinggu earthquake. The characteristics of the simulated earthquake ground motion show a general correlation with the thickness of the sediments in this area. And more importantly, it is also in agreement
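
    The horizontal-to-vertical spectral ratio (HVSR) step referred to above can be sketched compactly. In the snippet below the windowing and spectral-combination choices are generic assumptions, not the authors' exact processing chain; the frequency of the HVSR peak is commonly taken as the site's fundamental resonance frequency.

```python
import numpy as np

# Generic sketch of an HVSR estimate from one three-component microtremor record
# (north, east, vertical). Windowing/smoothing choices here are assumptions.
def hvsr(north, east, vertical, fs):
    freqs = np.fft.rfftfreq(len(vertical), d=1.0 / fs)
    n = np.abs(np.fft.rfft(north * np.hanning(len(north))))
    e = np.abs(np.fft.rfft(east * np.hanning(len(east))))
    v = np.abs(np.fft.rfft(vertical * np.hanning(len(vertical))))
    h = np.sqrt((n**2 + e**2) / 2.0)           # combined horizontal spectrum
    return freqs, h / np.maximum(v, 1e-12)     # the peak marks the fundamental frequency

fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(0)
f, ratio = hvsr(rng.standard_normal(t.size), rng.standard_normal(t.size),
                rng.standard_normal(t.size), fs)
print(f[np.argmax(ratio[1:]) + 1])             # frequency of the (noise-driven) HVSR peak
```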

  15. Flat-plate solar array project process development area: Process research of non-CZ silicon material

    Science.gov (United States)

    Campbell, R. B.

    1986-01-01

    Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The use of an excimer laser and high temperature short time diffusion experiment were both more successful than the diffusion at standard temperature and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.

  16. A dual-directional light-control film with a high-sag and high-asymmetrical-shape microlens array fabricated by a UV imprinting process

    International Nuclear Information System (INIS)

    Lin, Ta-Wei; Liao, Yunn-Shiuan; Chen, Chi-Feng; Yang, Jauh-Jung

    2008-01-01

    A dual-directional light-control film with a high-sag and highly asymmetric long gapless hexagonal microlens array fabricated by an ultraviolet (UV) imprinting process is presented. Such a lens array is designed by ray-tracing simulation and fabricated by a micro-replication process including gray-scale lithography, an electroplating process and UV curing. The shape of the designed lens array is similar to that of a near half-cylindrical lens array with a periodic ripple. The measurement results of a prototype show that incident light from a collimated LED with an FWHM dispersion angle of 12° is spread differently along the short and long axes. The numerical and experimental results show that the FWHMs of the view angle for angular brightness in the long and short axis directions through the long hexagonal lens are about 34.3° and 18.1°, and 31° and 13°, respectively. Compared with the simulation result, the errors in the long and short axes are about 5% and 16%, respectively. Obviously, the asymmetric gapless microlens array can realize the aim of controlled asymmetric angular brightness. Such a light-control film can be used as a power-saving screen compared with a conventional diffusing film for rear projection display applications.

  17. Process Research On Polycrystalline Silicon Material (PROPSM). [flat plate solar array project

    Science.gov (United States)

    Culik, J. S.

    1983-01-01

    The performance-limiting mechanisms in large-grain (greater than 1 to 2 mm in diameter) polycrystalline silicon solar cells were investigated by fabricating a matrix of 4 sq cm solar cells of various thickness from 10 cm x 10 cm polycrystalline silicon wafers of several bulk resistivities. Analysis of the illuminated I-V characteristics of these cells suggests that bulk recombination is the dominant factor limiting the short-circuit current. The average open-circuit voltage of the polycrystalline solar cells is 30 to 70 mV lower than that of co-processed single-crystal cells; the fill-factor is comparable. Both open-circuit voltage and fill-factor of the polycrystalline cells have substantial scatter that is not related to either thickness or resistivity. This implies that these characteristics are sensitive to an additional mechanism that is probably spatial in nature. A damage-gettering heat-treatment improved the minority-carrier diffusion length in low lifetime polycrystalline silicon, however, extended high temperature heat-treatment degraded the lifetime.

  18. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    Science.gov (United States)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
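
    In typical iris-recognition pipelines, the iris-matching kernel that gets parallelized is a masked fractional Hamming distance between binary iris codes. The sketch below shows that comparison in software as an assumption about the kernel, not a transcription of the paper's code; its bitwise, data-parallel structure is what makes it map well onto both SIMD cores and FPGA logic.

```python
import numpy as np

# Assumed form of the iris-matching kernel: fractional Hamming distance between
# two binary iris codes, counting only bits valid in both occlusion masks. This is
# the standard comparison in iris recognition, not the paper's actual code.
def fractional_hamming(code_a, code_b, mask_a, mask_b):
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / max(valid.sum(), 1)

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 2048, dtype=np.uint8)
b = a.copy()
b[:100] ^= 1                                    # flip 100 bits to simulate a near-match
m = np.ones(2048, dtype=np.uint8)
print(fractional_hamming(a, b, m, m))           # ~0.049, well below a commonly cited ~0.32 threshold
```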

  19. Demonstration of array eddy current technology for real-time monitoring of laser powder bed fusion additive manufacturing process

    Science.gov (United States)

    Todorov, Evgueni; Boulware, Paul; Gaah, Kingsley

    2018-03-01

    Nondestructive evaluation (NDE) at various fabrication stages is required to assure quality of feedstock and solid builds. Industry efforts are shifting towards solutions that can provide real-time monitoring of additive manufacturing (AM) fabrication process layer-by-layer while the component is being built to reduce or eliminate dependence on post-process inspection. Array eddy current (AEC), electromagnetic NDE technique was developed and implemented to directly scan the component without physical contact with the powder and fused layer surfaces at elevated temperatures inside a LPBF chamber. The technique can detect discontinuities, surface irregularities, and undesirable metallurgical phase transformations in magnetic and nonmagnetic conductive materials used for laser fusion. The AEC hardware and software were integrated with the L-PBF test bed. Two layer-by-layer tests of Inconel 625 coupons with AM built discontinuities and lack of fusion were conducted inside the L-PBF chamber. The AEC technology demonstrated excellent sensitivity to seeded, natural surface, and near-surface-embedded discontinuities, while also detecting surface topography. The data was acquired and imaged in a layer-by-layer sequence demonstrating the real-time monitoring capabilities of this new technology.

  20. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    Science.gov (United States)

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs, by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, watershed transform, and object classification. The positioning of microelectrodes is obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative aspects. The proposed framework also aims to standardize the image processing and to compute quantitatively useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
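
    Since the abstract names the specific operations (thresholding plus watershed for the fluorescence channel, circular Hough transform for electrode localization), here is a brief sketch of both stages using scikit-image. It is not the authors' pipeline; radii, distances, and peak counts are illustrative assumptions.

        # Sketch of the two stages described above with scikit-image: threshold +
        # watershed segmentation of the fluorescence channel, and a circular Hough
        # transform to locate microelectrodes in the transmitted-light channel.
        # Radii, distances, and peak counts are illustrative, not the paper's values.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import canny, peak_local_max
        from skimage.filters import threshold_otsu
        from skimage.segmentation import watershed
        from skimage.transform import hough_circle, hough_circle_peaks


        def segment_neurons(fluorescence):
            """Binary threshold followed by marker-based watershed splitting."""
            mask = fluorescence > threshold_otsu(fluorescence)
            distance = ndi.distance_transform_edt(mask)
            peaks = peak_local_max(distance, labels=mask, min_distance=10)
            markers = np.zeros(mask.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-distance, markers, mask=mask)   # labeled neuron somas


        def locate_electrodes(transmitted, radii=np.arange(8, 15)):
            """Circular Hough transform on an edge map of the transmitted-light image."""
            edges = canny(transmitted, sigma=2.0)
            accumulator = hough_circle(edges, radii)
            _, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=60)
            return np.column_stack([cy, cx, r])               # (row, col, radius) per electrode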

  1. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    Science.gov (United States)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

    This report presents results of a DARPA/MTO Composite CAD Project aimed to develop a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for the design of biomicrofluidic devices and integrated systems. The developed tool would provide high-fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system-level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  2. Data processing

    International Nuclear Information System (INIS)

    Cousot, P.

    1988-01-01

    The 1988 progress report of the Data Processing laboratory (Polytechnic School, France) is presented. The laboratory research fields are: semantics, testing and semantic analysis of codes, formal calculus, software applications, algorithms, neural networks and VLSI (Very Large Scale Integration). The investigations concerning polynomial rings are performed by means of the standard basis approach. Among the research topics are Pascal codes, parallel processing, the combinatorial, statistical and asymptotic properties of fundamental data processing tools, signal processing and pattern recognition. The published papers, the congress communications and the theses are also included [fr

  3. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    Full Text Available The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit, and A-, B-, Γ-, and LLR output calculation modules. Measurements of complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures and a proposed parameter set lead to defining equations and a comparison between those architectures.
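
    For readers unfamiliar with the ACSO building block named above, the sketch below shows, in Python, the max-star (Jacobian logarithm) operation and one forward (A) recursion step of a log-MAP decoder, which is what an add-compare-select-offset unit computes per state; the trellis layout and metrics are illustrative assumptions, not the paper's architecture.

        # Sketch of the add-compare-select-offset (ACSO) operation behind a log-MAP
        # A/B recursion: max*(a, b) = max(a, b) + log(1 + exp(-|a - b|)), where the
        # log term is the "offset". The 4-state trellis and metrics are illustrative.
        import math


        def max_star(a, b):
            """Jacobian logarithm used by an ACSO unit (compare-select plus offset)."""
            return max(a, b) + math.log1p(math.exp(-abs(a - b)))


        def forward_step(alpha_prev, gamma, predecessors):
            """One A-recursion step: an ACSO over the two predecessors of each state.

            alpha_prev   : path metrics at trellis step k-1
            gamma[s][i]  : branch metric from the i-th predecessor of state s
            predecessors : predecessors[s] = (p0, p1), the two states feeding s
            """
            return [
                max_star(alpha_prev[p0] + gamma[s][0], alpha_prev[p1] + gamma[s][1])
                for s, (p0, p1) in enumerate(predecessors)
            ]


        # Toy 4-state example with assumed metrics.
        alpha = [0.0, -1.2, -0.7, -2.0]
        gamma = [[0.3, -0.1], [-0.4, 0.2], [0.1, 0.0], [-0.2, 0.5]]
        preds = [(0, 2), (0, 2), (1, 3), (1, 3)]
        print(forward_step(alpha, gamma, preds))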

  4. Initial beam test results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Adolphsen, C.; Litke, A.; Schwarz, A.

    1986-01-01

    Silicon detectors with 256 strips, having a pitch of 25 μm, and connected to two 128 channel NMOS VLSI chips each (Microplex), have been tested in relativistic charged particle beams at CERN and at the Stanford Linear Accelerator Center. The readout chips have an input channel pitch of 47.5 μm and a single multiplexed output which provides voltages proportional to the integrated charge from each strip. The most probable signal height from minimum ionizing tracks was 15 times the rms noise in any single channel. Two-track traversals with a separation of 100 μm were cleanly resolved

  5. VLSI implementation of flexible architecture for decision tree classification in data mining

    Science.gov (United States)

    Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak

    2017-07-01

    Data mining algorithms have become vital to researchers in science, engineering, medicine, business, search and security domains. In recent years, there has been a tremendous rise in the size of the data being collected and analyzed. Classification is the main difficulty faced in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for decision tree classification in data mining using the C4.5 algorithm.
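
    As a reminder of the split criterion that C4.5 (named above) uses, the sketch below computes the gain ratio of a candidate attribute; the toy dataset and attribute names are illustrative assumptions, and the hardware architecture itself is not modeled here.

        # Sketch of the C4.5 split criterion: information gain normalized by split
        # information (the gain ratio). Dataset and attribute names are illustrative.
        import math
        from collections import Counter


        def entropy(labels):
            n = len(labels)
            return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())


        def gain_ratio(rows, labels, attr):
            """Gain ratio of splitting (rows, labels) on a categorical attribute."""
            n = len(rows)
            groups = {}
            for row, y in zip(rows, labels):
                groups.setdefault(row[attr], []).append(y)
            remainder = sum(len(g) / n * entropy(g) for g in groups.values())
            gain = entropy(labels) - remainder
            split_info = -sum(len(g) / n * math.log2(len(g) / n) for g in groups.values())
            return gain / split_info if split_info > 0 else 0.0


        rows = [{"outlook": "sunny"}, {"outlook": "rain"},
                {"outlook": "sunny"}, {"outlook": "overcast"}]
        labels = ["no", "yes", "no", "yes"]
        print(gain_ratio(rows, labels, "outlook"))   # higher means a better split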

  6. Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1985-01-01

    The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
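
    To make the arithmetic concrete, here is a small Python sketch of add and multiply over the ring of integers modulo a Fermat number, the building block described above; the choice of F_4 = 65537 and the single radix-2 butterfly shown are illustrative, not the paper's transform parameters.

        # Sketch of add/multiply over the ring of integers modulo a Fermat number
        # F_t = 2**(2**t) + 1, the arithmetic building block described above. The
        # choice F_4 = 65537 and the single butterfly shown are illustrative only.
        def fermat(t):
            return (1 << (1 << t)) + 1


        def add_mod(a, b, t):
            return (a + b) % fermat(t)


        def mul_mod(a, b, t):
            return (a * b) % fermat(t)


        # Modulo F_4 = 65537, the element 2 has multiplicative order 32, so the
        # twiddle factors of a 32-point number-theoretic transform are powers of 2,
        # i.e. bit shifts (rotations) in hardware rather than true multiplications.
        t = 4
        F = fermat(t)
        assert pow(2, 32, F) == 1

        # One radix-2 butterfly with twiddle factor 2**k (a shift in hardware):
        x0, x1, k = 3, 5, 7
        tw = mul_mod(x1, 1 << k, t)
        print(add_mod(x0, tw, t), add_mod(x0, -tw, t))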

  7. Evolution of the MOS transistor - From conception to VLSI

    International Nuclear Information System (INIS)

    Sah, C.T.

    1988-01-01

    Historical developments of the metal-oxide-semiconductor field-effect-transistor (MOSFET) during the last sixty years are reviewed, from the 1928 patent disclosures of the field-effect conductivity modulation concept and the semiconductor triode structures proposed by Lilienfeld to the 1947 Shockley-originated efforts which led to the laboratory demonstration of the modern silicon MOSFET thirty years later in 1960. A survey is then made of the milestones of the past thirty years leading to the latest submicron silicon logic CMOS (Complementary MOS) and BICMOS (Bipolar-Junction-Transistor CMOS combined) arrays and the three-dimensional and ferroelectric extensions of Dennard's one-transistor dynamic random access memory (DRAM) cell. The status of the submicron lithographic technologies (deep ultra-violet light, X-ray, electron-beam) is summarized. Future trends of memory cell density and logic gate speed are projected. Comparisons of the switching speed of the silicon MOSFET with that of silicon bipolar and GaAs field-effect transistors are reviewed. The use of high-temperature superconducting wires and GaAs-on-Si monolithic semiconductor optical clocks to break the interconnect-wiring delay barrier is discussed. Further needs in basic research and mathematical modeling are indicated for the failure mechanisms in submicron silicon transistors at high electric fields (hot-electron effects) and in interconnection conductors at high current densities and low as well as high electric fields (electromigration)

  8. Radar techniques using array antennas

    CERN Document Server

    Wirth, Wulf-Dieter

    2013-01-01

    Radar Techniques Using Array Antennas is a thorough introduction to the possibilities of radar technology based on electronically steerable and active array antennas. Topics covered include array signal processing, array calibration, adaptive digital beamforming, adaptive monopulse, superresolution, pulse compression, sequential detection, target detection with long pulse series, space-time adaptive processing (STAP), moving target detection using synthetic aperture radar (SAR), target imaging, energy management and system parameter relations. The discussed methods are confirmed by simulation studies.

  9. FFT Based VLSI Digital One Bit Electronic Warfare Receiver

    National Research Council Canada - National Science Library

    Chien-In, Henry

    1998-01-01

    ... (1 GHz) digital receiver designed for electronic warfare applications. The receiver can process two simultaneous signals and has the potential for fabrication on a single multi-chip module (MCM...

  10. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Full Text Available Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in the use of computational resources on the selected FPGA development platform. This is because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory-efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform is presented in this paper. This is accomplished by proposing a new memory-efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory-efficient architecture, along with proper input/output interfaces, is implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering-based motion detection architecture. The new memory-efficient system robustly and automatically detects motion in real-world scenarios (both for static backgrounds and pseudo-stationary backgrounds) in real time for standard PAL (720 × 576) color video.
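
    The paper's memory-efficient clustering scheme is not spelled out in the abstract, so as generic background only, here is a running-average background-subtraction sketch of the kind such architectures refine; the threshold, learning rate, and frame size are illustrative assumptions.

        # Generic background only: a running-average background-subtraction motion
        # detector, not the paper's clustering-based scheme. Threshold, learning
        # rate, and the PAL-sized frame are illustrative assumptions.
        import numpy as np


        def detect_motion(frame, background, alpha=0.05, threshold=25):
            """Return (binary motion mask, updated background) for one grayscale frame."""
            diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
            mask = diff > threshold
            background = ((1 - alpha) * background + alpha * frame).astype(np.uint8)
            return mask, background


        rng = np.random.default_rng(1)
        background = rng.integers(0, 255, (576, 720), dtype=np.uint8)   # PAL-sized frame
        frame = background.copy()
        frame[100:150, 200:260] = 255                                   # a bright moving object
        mask, background = detect_motion(frame, background)
        print(mask.sum(), "changed pixels")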

  11. A multichip aVLSI system emulating orientation selectivity of primary visual cortical cells.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2005-07-01

    In this paper, we designed and fabricated a multichip neuromorphic analog very large scale integrated (aVLSI) system, which emulates the orientation selective response of the simple cell in the primary visual cortex. The system consists of a silicon retina and an orientation chip. An image, which is filtered by a concentric center-surround (CS) antagonistic receptive field of the silicon retina, is transferred to the orientation chip. The image transfer from the silicon retina to the orientation chip is carried out with analog signals. The orientation chip selectively aggregates multiple pixels of the silicon retina, mimicking the feedforward model proposed by Hubel and Wiesel. The chip provides the orientation-selective (OS) outputs which are tuned to 0 degrees, 60 degrees, and 120 degrees. The feed-forward aggregation reduces the fixed pattern noise that is due to the mismatch of the transistors in the orientation chip. The spatial properties of the orientation selective response were examined in terms of the adjustable parameters of the chip, i.e., the number of aggregated pixels and size of the receptive field of the silicon retina. The multichip aVLSI architecture used in the present study can be applied to implement higher order cells such as the complex cell of the primary visual cortex.

  12. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    International Nuclear Information System (INIS)

    Jian Haifang; Shi Yin

    2009-01-01

    The K-best detector is considered as a promising technique in the MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can approximately reduce fundamental operations by 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, lower the demand on the hardware resource significantly. Simulation results prove that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.

  13. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    Energy Technology Data Exchange (ETDEWEB)

    Jian Haifang; Shi Yin, E-mail: jhf@semi.ac.c [Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China)

    2009-07-15

    The K-best detector is considered as a promising technique in the MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can approximately reduce fundamental operations by 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, lower the demand on the hardware resource significantly. Simulation results prove that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.
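
    For context, the sketch below shows the conventional breadth-first K-best tree search that both records above take as the baseline; the paper's partial-expansion and distributed-sorting refinements are not reproduced. The real-valued 2x2 system model and 4-point constellation are illustrative assumptions.

        # Sketch of a conventional breadth-first K-best search (the baseline the
        # proposed partial-expansion architecture improves on). The real-valued
        # 2x2 model and the 4-point constellation are illustrative assumptions.
        import numpy as np


        def k_best_detect(y, H, constellation, K=4):
            """Return the symbol vector with the smallest partial Euclidean distance."""
            n = H.shape[1]
            Q, R = np.linalg.qr(H)
            z = Q.T @ y
            survivors = [((), 0.0)]                    # (partial symbol tuple, metric)
            for level in range(n - 1, -1, -1):         # detect the last layer first
                candidates = []
                for symbols, metric in survivors:
                    for s in constellation:
                        full = (s,) + symbols
                        interference = sum(R[level, level + 1 + i] * full[1 + i]
                                           for i in range(len(symbols)))
                        branch = (z[level] - R[level, level] * s - interference) ** 2
                        candidates.append((full, metric + branch))
                survivors = sorted(candidates, key=lambda c: c[1])[:K]   # keep the K best
            return np.array(survivors[0][0])


        H = np.array([[1.0, 0.4], [0.3, 0.9]])
        y = H @ np.array([1.0, -3.0])                  # noiseless received vector
        print(k_best_detect(y, H, constellation=[-3, -1, 1, 3]))   # -> [ 1. -3.]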

  14. A multi coding technique to reduce transition activity in VLSI circuits

    International Nuclear Information System (INIS)

    Vithyalakshmi, N.; Rajaram, M.

    2014-01-01

    Advances in VLSI technology have enabled the implementation of complex digital circuits in a single chip, reducing system size and power consumption. In deep submicron low-power CMOS VLSI design, the main cause of energy dissipation is the charging and discharging of internal node capacitances due to transition activity. Transition activity is one of the major factors affecting dynamic power dissipation. This paper analyzes power reduction at the algorithm and logic circuit levels. At the algorithm level, the key to reducing power dissipation is minimizing transition activity, which is achieved by introducing a data coding technique. A novel multi-coding technique is introduced that reduces transition activity on the bus lines by up to 52.3%, which in turn reduces the dynamic power dissipation. In addition, 1-bit full adders are introduced in the Hamming distance estimator block, which reduces the device count. The coding method is implemented using Verilog HDL. The overall performance is analyzed using ModelSim and Xilinx tools. In total, a 38.2% power saving is achieved compared to other existing methods. (semiconductor technology)
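
    The abstract does not detail the proposed multi-coding scheme, so the sketch below shows the classic Hamming-distance-driven bus-invert code that such bus-coding techniques build on; the bus width and data pattern are illustrative assumptions.

        # Classic bus-invert coding (background, not the paper's multi-coding
        # scheme): if the Hamming distance between the previous and next bus words
        # exceeds half the bus width, transmit the complement plus an "invert" line,
        # capping the number of bit transitions per transfer. Width is illustrative.
        def hamming(a, b, width):
            return bin((a ^ b) & ((1 << width) - 1)).count("1")


        def bus_invert_encode(words, width=8):
            """Yield (value_on_bus, invert_flag) pairs that limit bit transitions."""
            prev = 0
            for w in words:
                if hamming(prev, w, width) > width // 2:
                    w = ~w & ((1 << width) - 1)        # send the complement instead
                    flag = 1
                else:
                    flag = 0
                yield w, flag
                prev = w


        data = [0x00, 0xFF, 0xFE, 0x01]
        print(list(bus_invert_encode(data)))
        # 0xFF after 0x00 would cost 8 transitions; it is sent inverted (0x00, flag=1).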

  15. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  16. Supercomputers and parallel computation. Based on the proceedings of a workshop on progress in the use of vector and array processors organised by the Institute of Mathematics and its Applications and held in Bristol, 2-3 September 1982

    International Nuclear Information System (INIS)

    Paddon, D.J.

    1984-01-01

    This book is based on the proceedings of a conference on parallel computing held in 1982. There are 18 papers which cover the following topics: VLSI parallel architectures, the theory of parallel computing and vector and array processor computing. One paper on 'Tough Problems in Reactor Design' is indexed separately. All the contributions are on research done in the United Kingdom. Although much of the experience in array processor computing is associated with the ICL distributed array processor (DAP) and this is reflected in the contributions, the research relating to the ICL DAP is relevant to all types of array processors. (UK)

  17. Handbook of VLSI microlithography principles, technology and applications

    CERN Document Server

    Glendinning, William B

    1991-01-01

    This handbook gives readers a close look at the entire technology of printing very high resolution and high density integrated circuit (IC) patterns into thin resist process transfer coatings-- including optical lithography, electron beam, ion beam, and x-ray lithography. The book's main theme is the special printing process needed to achieve volume high density IC chip production, especially in the Dynamic Random Access Memory (DRAM) industry. The book leads off with a comparison of various lithography methods, covering the three major patterning parameters of line/space, resolution, line e

  18. Applying the sequential neural-network approximation and orthogonal array algorithm to optimize the axial-flow cooling system for rapid thermal processes

    International Nuclear Information System (INIS)

    Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin

    2009-01-01

    The sequential neural-network approximation and orthogonal array (SNAOA) were used to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in silicon wafer was always below one in this study. An orthogonal array was first conducted to obtain the initial solution set. The initial solution set was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain to obtain the optimal parameter setting. The size of the training sample was greatly reduced due to the use of the orthogonal array. In addition, a restart strategy was also incorporated into the SNAOA so that the searching process may have a better opportunity to reach a near global optimum. In this work, we considered three different cooling control schemes during the rapid thermal process: (1) downward axial gas flow cooling scheme; (2) upward axial gas flow cooling scheme; (3) dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, the other control factors such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach

  19. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    Science.gov (United States)

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology for achieving compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. The results are also directly extensible to multiple application domains using linear subspace methods.
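
    As a reference point for the adaptive linear combiners mentioned above, the following floating-point LMS sketch shows the on-chip learning rule in its ideal (mismatch-free) form; the data, step size, and dimensions are illustrative assumptions, and the paper's mismatch terms are only hinted at in a comment.

        # Ideal (mismatch-free) LMS adaptation of a linear combiner y = w . x, the
        # learning rule behind the analog adaptive combiners discussed above.
        # Step size, dimensions, and data are illustrative assumptions.
        import numpy as np


        def lms_train(X, d, mu=0.01, epochs=20):
            """Least-mean-squares training of weights w so that w @ x tracks d."""
            w = np.zeros(X.shape[1])
            for _ in range(epochs):
                for x, target in zip(X, d):
                    y = w @ x            # with mismatch this would be w @ (G * x) for gains G
                    e = target - y
                    w += mu * e * x      # LMS weight update
            return w


        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))
        true_w = np.array([0.5, -1.0, 2.0, 0.3])
        print(np.round(lms_train(X, X @ true_w), 2))   # converges towards true_w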

  20. VLSI Design of a Variable-Length FFT/IFFT Processor for OFDM-Based Communication Systems

    Directory of Open Access Journals (Sweden)

    Jen-Chih Kuo

    2003-12-01

    Full Text Available The technique of orthogonal frequency division multiplexing (OFDM) is famous for its robustness against frequency-selective fading channels. This technique has been widely used in many wired and wireless communication systems. In general, the fast Fourier transform (FFT) and inverse FFT (IFFT) operations are used as the modulation/demodulation kernel in OFDM systems, and the sizes of the FFT/IFFT operations vary in different OFDM applications. In this paper, we design and implement a variable-length prototype FFT/IFFT processor to cover different specifications of OFDM applications. The cached-memory FFT architecture is our suggested VLSI system architecture for the prototype FFT/IFFT processor, chosen for its low power consumption. We also implement the twiddle-factor butterfly processing element (PE) based on the coordinate rotation digital computer (CORDIC) algorithm, which avoids the use of a conventional multiplication-and-accumulation unit and instead evaluates the trigonometric functions using only add-and-shift operations. Finally, we implement the variable-length prototype FFT/IFFT processor with TSMC 0.35 μm 1P4M CMOS technology. The simulation results show that the chip can perform 64- to 2048-point FFT/IFFT operations at up to 80 MHz operating frequency, which meets the speed requirement of most OFDM standards such as WLAN, ADSL, VDSL (256∼2K), DAB, and 2K-mode DVB.
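
    Since the abstract highlights the CORDIC-based twiddle-factor butterfly, here is a short Python sketch of CORDIC in rotation mode, which evaluates a rotation with only shift-and-add style iterations plus one constant gain correction; the iteration count is an illustrative choice, and the floating-point model stands in for the fixed-point hardware.

        # Sketch of CORDIC rotation mode: rotate (x, y) by an angle using iterations
        # that a fixed-point datapath realizes with shifts and adds, then apply the
        # constant gain correction. The iteration count is an illustrative choice.
        import math


        def cordic_rotate(x, y, angle, iterations=16):
            """Rotate the vector (x, y) by `angle` radians (|angle| below ~1.74 rad)."""
            atan_table = [math.atan(2.0 ** -i) for i in range(iterations)]
            gain = 1.0
            for a in atan_table:
                gain *= math.cos(a)                  # aggregate scale factor 1/K
            z = angle
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i   # shifts + adds in hardware
                z -= d * atan_table[i]
            return x * gain, y * gain


        print(cordic_rotate(1.0, 0.0, math.pi / 3))   # ~(0.5, 0.866) = (cos 60°, sin 60°)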

  1. Optimization of processing parameters on the controlled growth of c-axis oriented ZnO nanorod arrays

    Energy Technology Data Exchange (ETDEWEB)

    Malek, M. F., E-mail: mfmalek07@gmail.com; Rusop, M., E-mail: rusop@salam.uitm.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); NANO-SciTech Centre (NST), Institute of Science (IOS), Universiti Teknologi MARA - UiTM, 40450 Shah Alam, Selangor (Malaysia); Mamat, M. H., E-mail: hafiz-030@yahoo.com [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); Musa, M. Z., E-mail: musa948@gmail.com [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM) Pulau Pinang, Jalan Permatang Pauh, 13500 Permatang Pauh, Pulau Pinang (Malaysia); Saurdi, I., E-mail: saurdy788@gmail.com; Ishak, A., E-mail: ishak@sarawak.uitm.edu.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM) Sarawak, Kampus Kota Samarahan, Jalan Meranek, 94300 Kota Samarahan, Sarawak (Malaysia); Alrokayan, Salman A. H., E-mail: dr.salman@alrokayan.com; Khan, Haseeb A., E-mail: khan-haseeb@yahoo.com [Chair of Targeting and Treatment of Cancer Using Nanoparticles, Deanship of Scientific Research, King Saud University (KSU), Riyadh 11451 (Saudi Arabia)

    2016-07-06

    Optimization of the growth time parameter was conducted to synthesize high-quality c-axis ZnO nanorod arrays. The effects of the parameter on the crystal growth and properties were systematically investigated. Our studies confirmed that the growth time influences the properties of the ZnO nanorods, with the crystallite size of the structures increasing at higher deposition times. Field emission scanning electron microscope analysis confirmed the morphology of the ZnO nanorods. The ZnO nanostructures prepared under the optimized growth conditions showed an intense XRD peak, revealing highly c-axis-oriented ZnO nanorod arrays and demonstrating the formation of a defect-free structure.

  2. Biologically Inspired Stochastic Optimization Technique (PSO) for DOA and Amplitude Estimation of Antenna Arrays Signal Processing in RADAR Communication System

    Directory of Open Access Journals (Sweden)

    Khurram Hammed

    2016-01-01

    Full Text Available This paper presents a stochastic global optimization technique known as Particle Swarm Optimization (PSO) for the joint estimation of amplitude and direction of arrival of targets in a RADAR communication system. The proposed scheme is an excellent optimization methodology and a promising approach for solving DOA problems in communication systems. Moreover, PSO is quite suitable for real-time scenarios and easy to implement in hardware. In this study, a uniform linear array is used and the targets are assumed to be in the far field of the array. The fitness function is formulated from the mean square error and requires a single snapshot to obtain the best possible solution. To check the accuracy of the algorithm, all of the results are taken by varying the number of antenna elements and targets. Finally, these results are compared with existing heuristic techniques to show the accuracy of PSO.
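
    The following is a compact numpy sketch of the approach described above: a single noiseless snapshot from a uniform linear array and a PSO loop minimizing a mean-square-error fitness over (DOA, amplitude) for one source. The element count, swarm parameters, and single-source restriction are illustrative assumptions.

        # PSO sketch for single-snapshot DOA + amplitude estimation on a uniform
        # linear array, following the abstract's mean-square-error fitness. Element
        # count, swarm parameters, and the single-source case are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        N_ELEMENTS, SPACING = 8, 0.5                        # spacing in wavelengths
        TRUE_DOA, TRUE_AMP = np.deg2rad(20.0), 1.5


        def steering(theta):
            n = np.arange(N_ELEMENTS)
            return np.exp(2j * np.pi * SPACING * n * np.sin(theta))


        snapshot = TRUE_AMP * steering(TRUE_DOA)            # one noiseless snapshot


        def fitness(p):                                     # p = [doa_rad, amplitude]
            return np.mean(np.abs(snapshot - p[1] * steering(p[0])) ** 2)


        P, ITERS, W, C1, C2 = 30, 100, 0.7, 1.5, 1.5
        pos = np.column_stack([rng.uniform(-np.pi / 2, np.pi / 2, P), rng.uniform(0, 3, P)])
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(ITERS):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
            pos += vel
            pos[:, 0] = np.clip(pos[:, 0], -np.pi / 2, np.pi / 2)
            pos[:, 1] = np.clip(pos[:, 1], 0.0, 3.0)
            f = np.array([fitness(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[np.argmin(pbest_f)].copy()
        print(np.rad2deg(gbest[0]), gbest[1])               # ~20 degrees, ~1.5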

  3. Imaging RF Phased Array Receivers using Optically-Coherent Up-conversion for High Beam-Bandwidth Processing

    Science.gov (United States)

    2017-03-01

    It does so by using an optical lens to perform an inverse spatial Fourier Transform on the up-converted RF signals, thereby rendering a real-time... simultaneous beams or other engineered beam patterns. There are two general approaches to array-based beam forming: digital and analog. In digital beam...of significantly limiting the number of beams that can be formed simultaneously and narrowing the operational bandwidth. An alternate approach that

  4. Array capabilities and future arrays

    International Nuclear Information System (INIS)

    Radford, D.

    1993-01-01

    Early results from the new third-generation instruments GAMMASPHERE and EUROGAM are confirming the expectation that such arrays will have a revolutionary effect on the field of high-spin nuclear structure. When completed, GAMMASPHERE will have a resolving power an order of magnitude greater than that of the best second-generation arrays. When combined with other instruments such as particle-detector arrays and fragment mass analysers, the capabilities of the arrays for the study of more exotic nuclei will be further enhanced. In order to better understand the limitations of these instruments, and to design improved future detector systems, it is important to have an intelligible and reliable calculation of the relative resolving power of different instrument designs. The derivation of such a figure of merit will be briefly presented, and the relative sensitivities of arrays currently proposed or under construction given. The design of TRIGAM, a new third-generation array proposed for Chalk River, will also be discussed. It is instructive to consider how far arrays of Compton-suppressed Ge detectors could be taken. For example, it will be shown that an idealised 'perfect' third-generation array of 1000 detectors has a sensitivity an order of magnitude higher again than that of GAMMASPHERE. Less conventional options for new arrays will also be explored

  5. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    Directory of Open Access Journals (Sweden)

    Peter Kampmann

    2014-04-01

    Full Text Available With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach.

  6. Development of a multitechnology FPGA: a reconfigurable architecture for photonic information processing

    Science.gov (United States)

    Mal, Prosenjit; Toshniwal, Kavita; Hawk, Chris; Bhadri, Prashant R.; Beyette, Fred R., Jr.

    2004-06-01

    Over the years, Field Programmable Gate Arrays (FPGAs) have made a profound impact on the electronics industry with rapidly improving semiconductor-manufacturing technology ranging from sub-micron to deep sub-micron processes and equally innovative CAD tools. Though the FPGA has revolutionized programmable/reconfigurable digital logic technology, one limitation of current FPGAs is that the user is limited to strictly electronic designs. Thus, they are not suitable for applications that are not purely electronic, such as optical communications, photonic information processing systems and other multi-technology applications (e.g. analog devices, MEMS devices and microwave components). Over recent years, the growing trend has been towards the incorporation of non-traditional device technologies into traditional CMOS VLSI systems. The integration of these technologies requires a new kind of FPGA that can merge conventional FPGA technology with photonic and other multi-technology devices. The proposed new class of field-programmable device will extend the flexibility, rapid prototyping and reusability benefits associated with conventional electronics into the photonic and multi-technology domain and give rise to the development of a wider class of programmable and embedded integrated systems. This new technology will create a tremendous opportunity for applying conventional programmable/reconfigurable hardware concepts in other disciplines like photonic information processing. To substantiate this novel architectural concept, we have fabricated proof-of-concept CMOS VLSI Multi-technology FPGA (MT-FPGA) chips that include both digital field-programmable logic blocks and threshold-programmable photoreceivers which are suitable for sensing optical signals. Results from these chips strongly support the feasibility of this new optoelectronic device concept.

  7. A programmable systolic array correlator as a trigger processor for electron pairs in rich (ring image Cherenkov) counters

    Science.gov (United States)

    Männer, R.

    1989-12-01

    This paper describes a systolic array processor for a ring image Cherenkov counter which is capable of identifying pairs of electron circles with a known radius and a certain minimum distance within 15 μs. The processor is a very flexible and fast device. It consists of 128 x 128 processing elements (PEs), where one PE is assigned to each pixel of the image. All PEs run synchronously at 40 MHz. The identification of electron circles is done by correlating the detector image with the proper circle circumference. Circle centers are found by peak detection in the correlation result. A second correlation with a circle disc allows circles of closed electron pairs to be rejected. The trigger decision is generated if a pseudo adder detects at least two remaining circles. The device is controlled by a freely programmable sequencer. A VLSI chip containing 8 x 8 PEs is being developed using a VENUS design system and will be produced in 2μ CMOS technology.

  8. A programmable systolic array correlator as a trigger processor for electron pairs in RICH (ring image Cherenkov) counters

    International Nuclear Information System (INIS)

    Maenner, R.

    1989-01-01

    This paper describes a systolic array processor for a ring image Cherenkov counter which is capable of identifying pairs of electron circles with a known radius and a certain minimum distance within 15 μs. The processor is a very flexible and fast device. It consists of 128x128 processing elements (PEs), where one PE is assigned to each pixel of the image. All PEs run synchronously at 40 MHz. The identification of electron circles is done by correlating the detector image with the proper circle circumference. Circle centers are found by peak detection in the correlation result. A second correlation with a circle disc allows circles of closed electron pairs to be rejected. The trigger decision is generated if a pseudo adder detects at least two remaining circles. The device is controlled by a freely programmable sequencer. A VLSI chip containing 8x8 PEs is being developed using a VENUS design system and will be produced in 2μ CMOS technology. (orig.)
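
    In software terms, the detection idea shared by the two records above (correlate the hit pattern with a circle circumference of the known radius, then find peaks) can be sketched as follows; this is not the systolic-array implementation, and the image size, radius, and hit threshold are illustrative assumptions.

        # Software sketch of circle finding by correlation with a ring kernel plus
        # peak detection; not the systolic-array implementation. Image size, radius,
        # and the minimum-hit threshold are illustrative assumptions.
        import numpy as np
        from scipy.ndimage import maximum_filter
        from scipy.signal import fftconvolve


        def ring_kernel(radius, thickness=1.0):
            size = 2 * int(radius) + 3
            yy, xx = np.mgrid[:size, :size] - size // 2
            return (np.abs(np.hypot(xx, yy) - radius) <= thickness).astype(float)


        def find_ring_centers(image, radius, min_hits=12):
            corr = fftconvolve(image, ring_kernel(radius), mode="same")
            peaks = (corr == maximum_filter(corr, size=9)) & (corr >= min_hits)
            return np.argwhere(peaks)                      # candidate (row, col) centers


        # Two synthetic rings of radius 10 on a 128 x 128 pad image.
        img = np.zeros((128, 128))
        theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
        for cy, cx in [(40, 40), (90, 75)]:
            img[(cy + 10 * np.sin(theta)).astype(int),
                (cx + 10 * np.cos(theta)).astype(int)] = 1
        print(find_ring_centers(img, radius=10))           # clusters near (40, 40) and (90, 75)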

  9. SNP Arrays

    Directory of Open Access Journals (Sweden)

    Jari Louhelainen

    2016-10-01

    Full Text Available The papers published in this Special Issue “SNP Arrays” (Single Nucleotide Polymorphism Arrays) focus on several perspectives associated with arrays of this type. The range of papers varies from a case report to reviews, thereby targeting wider audiences working in this field. The research focus of SNP arrays is often human cancers, but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable, and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying the SNP arrays slightly differently. Finally, two other papers evaluate the use of SNP arrays in the context of the genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also give guidelines for future applications of SNP arrays.

  10. Operation of a Fast-RICH Prototype with VLSI readout electronics

    Energy Technology Data Exchange (ETDEWEB)

    Guyonnet, J.L. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Arnold, R. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Jobez, J.P. (Coll. de France, 75 - Paris (France)); Seguinot, J. (Coll. de France, 75 - Paris (France)); Ypsilantis, T. (Coll. de France, 75 - Paris (France)); Chesi, E. (CERN / ECP Div., Geneve (Switzerland)); Racz, A. (CERN / ECP Div., Geneve (Switzerland)); Egger, J. (Paul Scherrer Inst., Villigen (Switzerland)); Gabathuler, K. (Paul Scherrer Inst., Villigen (Switzerland)); Joram, C. (Karlsruhe Univ. (Germany)); Adachi, I. (KEK, Tsukuba (Japan)); Enomoto, R. (KEK, Tsukuba (Japan)); Sumiyoshi, T. (KEK, Tsukuba (Japan))

    1994-04-01

    We discuss the first test results, obtained with cosmic rays, of a full-scale Fast-RICH Prototype with proximity-focused 10 mm thick LiF (CaF₂) solid radiators, TEA as photosensor in CH₄, and readout of 12 × 10³ cathode pads (5.334 × 6.604 mm²) using dedicated VLSI electronics we have developed. The number of detected photoelectrons is 7.7 (6.9) for the CaF₂ (LiF) radiator, very near to the expected values 6.4 (7.5) from Monte Carlo simulations. The single-photon Cherenkov angle resolution σ_θ

  11. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-01-01

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193
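
    For reference, the nonlinear energy operator named above is simple enough to state in a few lines: psi[n] = x[n]^2 - x[n-1]*x[n+1], thresholded at a multiple of its mean. The threshold factor and the synthetic trace below are illustrative assumptions, not the paper's settings.

        # Sketch of NEO spike detection: psi[n] = x[n]**2 - x[n-1]*x[n+1], thresholded
        # at a multiple of its mean value. The factor k and the toy trace are
        # illustrative assumptions, not the paper's settings.
        import numpy as np


        def neo_detect(x, k=8.0):
            """Return sample indices where the NEO output exceeds k times its mean."""
            psi = np.zeros_like(x)
            psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
            return np.flatnonzero(psi > k * psi.mean())


        rng = np.random.default_rng(0)
        trace = rng.normal(0.0, 1.0, 5000)
        trace[1200:1205] += np.array([4.0, 9.0, 12.0, 9.0, 4.0])   # an embedded spike
        print(neo_detect(trace))   # mostly indices around the embedded spike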

  12. Analog VLSI Models of Range-Tuned Neurons in the Bat Echolocation System

    Directory of Open Access Journals (Sweden)

    Horiuchi Timothy

    2003-01-01

    Full Text Available Bat echolocation is a fascinating topic of research for both neuroscientists and engineers, due to the complex and extremely time-constrained nature of the problem and its potential for application to engineered systems. In the bat's brainstem and midbrain exist neural circuits that are sensitive to the specific difference in time between the outgoing sonar vocalization and the returning echo. While some of the details of the neural mechanisms are known to be species-specific, a basic model of reafference-triggered, postinhibitory rebound timing is reasonably well supported by available data. We have designed low-power, analog VLSI circuits to mimic this mechanism and have demonstrated range-dependent outputs for use in a real-time sonar system. These circuits are being used to implement range-dependent vocalization amplitude, vocalization rate, and closest target isolation.

  13. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm.

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-08-13

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.

  14. Real time track finding in a drift chamber with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-01-01

    In a test setup, a hardware neural network determined track parameters of charged particles traversing a drift chamber. Voltages proportional to the drift times in 6 cells of the 3-layer chamber were inputs to the Intel ETANN neural network chip which had been trained to give the slope and intercept of tracks. We compare network track parameters to those obtained from off-line track fits. To our knowledge this is the first on-line application of a VLSI neural network to a high energy physics detector. This test explored the potential of the chip and the practical problems of using it in a real world setting. We compare the chip performance to a neural network simulation on a conventional computer. We discuss possible applications of the chip in high energy physics detector triggers. (orig.)

  15. Ant System-Corner Insertion Sequence: An Efficient VLSI Hard Module Placer

    Directory of Open Access Journals (Sweden)

    HOO, C.-S.

    2013-02-01

    Full Text Available Placement is important in VLSI physical design as it determines the time-to-market and the chip's reliability. In this paper, a new floorplan representation coupled with Ant System, namely Corner Insertion Sequence (CIS), is proposed. Though CIS's search complexity is smaller than that of the state-of-the-art representation Corner Sequence (CS), CIS adopts a preset boundary on the placement and hence has a search bound similar to that of CS. This enables previously unutilized corner edges to become viable. Also, the redundancy of the CS representation is eliminated in CIS, leading to CIS's lower search complexity. Experimental results on Microelectronics Center of North Carolina (MCNC) hard block benchmark circuits show that the proposed algorithm performs comparably in terms of area yet is at least two times faster than CS.

  16. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    Science.gov (United States)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.

  17. electrode array

    African Journals Online (AJOL)

    PROF EKWUEME

    A geoelectric investigation employing vertical electrical soundings (VES) using the Ajayi - Makinde Two-Electrode array and the ... arrangements used in electrical D.C. resistivity survey. These include ..... Refraction Tomography to Study the.

  18. Low-cost Solar Array Project. Feasibility of the Silane Process for Producing Semiconductor-grade Silicon

    Science.gov (United States)

    1979-01-01

    The feasibility of Union Carbide's silane process for commercial application was established. An integrated process design for an experimental process system development unit and a commercial facility were developed. The corresponding commercial plant economic performance was then estimated.

  19. VLSI Research

    Science.gov (United States)

    1984-04-01

    [Abstract not available: the record text consists only of extraction residue from a RISC instruction-set encoding table (sign-extended immediate fields, the ldhi destination register, and opcode mnemonics such as calli, sll, getpsw, sra, getlpc, srl, putpsw, ldhi, and, or, ldxw, stxw, xor).]

  20. VLSI Implementation of a Fixed-Complexity Soft-Output MIMO Detector for High-Speed Wireless

    Directory of Open Access Journals (Sweden)

    Di Wu

    2010-01-01

    Full Text Available This paper presents a low-complexity MIMO symbol detector with close-to-maximum a posteriori performance for the emerging multi-antenna enhanced high-speed wireless communications. The VLSI implementation is based on a novel MIMO detection algorithm called Modified Fixed-Complexity Soft-Output (MFCSO) detection, which achieves a good trade-off between performance and implementation cost compared to the referenced prior art. By including a microcode-controlled channel preprocessing unit and a pipelined detection unit, it is flexible enough to cover several different standards and transmission schemes. The flexibility allows adaptive detection to minimize power consumption without degradation in throughput. The VLSI implementation of the detector is presented to show that real-time MIMO symbol detection of the 20 MHz bandwidth 3GPP LTE and 10 MHz WiMAX downlink physical channels is achievable at reasonable silicon cost.

  1. A new VLSI complex integer multiplier which uses a quadratic-polynomial residue system with Fermat numbers

    Science.gov (United States)

    Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.

    1987-01-01

    A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.
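
    The two-multiplication property comes from the quadratic residue mapping: for a Fermat prime F there is a j with j*j = -1 (mod F), so a complex integer a+bi maps to the pair (a+jb, a-jb) mod F and a complex product becomes two independent modular multiplications. The sketch below illustrates this with the small Fermat prime F_3 = 257; it is an illustration of the underlying number theory, not the chip's datapath.

        # Quadratic residue number system (QRNS) illustration of the two-multiply
        # complex product, using the small Fermat prime F_3 = 257. Not the chip's
        # datapath; just the number theory behind the QFNS multiplier.
        F = 257                    # Fermat prime F_3 = 2**(2**3) + 1
        J = 16                     # 16 * 16 = 256 = -1 (mod 257), so J plays the role of i
        assert (J * J) % F == F - 1

        INV2 = pow(2, F - 2, F)    # modular inverses via Fermat's little theorem
        INVJ = pow(J, F - 2, F)


        def to_qrns(a, b):
            return ((a + J * b) % F, (a - J * b) % F)


        def from_qrns(z, z_star):
            a = (z + z_star) * INV2 % F
            b = (z - z_star) * INV2 * INVJ % F
            return a, b


        def complex_mult(x, y):
            """(a+bi)(c+di) mod F using only two integer multiplications."""
            z1, z2 = to_qrns(*x)
            w1, w2 = to_qrns(*y)
            return from_qrns(z1 * w1 % F, z2 * w2 % F)   # component-wise product


        print(complex_mult((3, 4), (5, 6)))   # (3+4i)(5+6i) = -9+38i -> (248, 38) mod 257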

  2. Process development for automated solar cell and module production. Task 4. Automated array assembly. Quarterly report No. 1

    Energy Technology Data Exchange (ETDEWEB)

    Hagerty, J. J.

    1980-10-15

    Work has been divided into five phases. The first phase is to modify existing hardware and controlling computer software to: (1) improve cell-to-cell placement accuracy, (2) improve the solder joint while reducing the amount of solder and flux smear on the cell's surface, and (3) reduce the system cycle time to 10 seconds. The second phase involves expanding the existing system's capabilities to be able to reject broken cells and make post-solder electrical tests. Phase 3 involves developing new hardware to allow for the automated encapsulation of solar modules. This involves three discrete pieces of hardware: (1) a vacuum platen end effector for the robot which allows it to pick up the 1' x 4' array of 35 inter-connected cells. With this, it can also pick up the cover glass and completed module, (2) a lamination preparation station which cuts the various encapsulation components from roll storage and positions them for encapsulation, and (3) an automated encapsulation chamber which interfaces with the above two and applies the heat and vacuum to cure the encapsulants. Phase 4 involves the final assembly of the encapsulated array into a framed, edge-sealed module completed for installation. For this we are using MBA's Glass Reinforced Concrete (GRC) in panels such as those developed by MBA for JPL under contract No. 955281. The GRC panel plays the multiple role of edge frame, substrate and mounting structure. An automated method of applying the edge seal will also be developed. The final phase (5) is the fabrication of six 1' x 4' electrically active solar modules using the above developed equipment. Progress is reported. (WHK)

  3. Determination of Rayleigh wave ellipticity using single-station and array-based processing of ambient seismic noise

    Science.gov (United States)

    Workman, Eli Joseph

    We present a single-station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal-to-vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 sec the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 sec, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (~2%) and significantly higher (>20%), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e., Love waves, body waves, and tilt noise) in the single-station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single-station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves.
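
    A minimal numpy sketch of the single-station FDPA measurement described above: build a 3-by-3 spectral covariance matrix for one frequency bin from the vertical and two horizontal channels, take its SVD, and read an H/V ratio off the dominant singular vector when the vertical-to-horizontal phase lag is near 90°. The sub-window length, the synthetic retrograde test signal, and the acceptance rule are illustrative assumptions, not the paper's exact processing.

        # FDPA-style sketch: 3x3 spectral covariance -> SVD -> H/V from the dominant
        # singular vector, accepted when the Z/H phase lag is near 90 degrees.
        # Windowing and the synthetic test signal are illustrative assumptions.
        import numpy as np


        def hv_from_window(z, n, e, fs, freq, nseg=256):
            """Estimate a Rayleigh-wave H/V ratio at `freq` from one 3-component window."""
            segments = len(z) // nseg
            bin_idx = int(round(freq * nseg / fs))
            cov = np.zeros((3, 3), dtype=complex)
            for k in range(segments):                      # average over sub-windows
                sl = slice(k * nseg, (k + 1) * nseg)
                spec = np.fft.rfft(np.vstack([z[sl], n[sl], e[sl]]), axis=1)[:, bin_idx]
                cov += np.outer(spec, spec.conj())
            u, s, _ = np.linalg.svd(cov / segments)
            v = u[:, 0]                                    # dominant polarization vector
            h = v[1] if abs(v[1]) >= abs(v[2]) else v[2]   # major horizontal component
            hv_ratio = np.hypot(abs(v[1]), abs(v[2])) / abs(v[0])
            phase_deg = np.degrees(np.angle(v[0] * np.conj(h)))
            return hv_ratio, phase_deg                     # accept when phase is near +/-90


        fs, f0 = 100.0, 5.0
        t = np.arange(0, 30.0, 1.0 / fs)
        z_tr = np.cos(2 * np.pi * f0 * t)                  # retrograde Rayleigh-like motion
        n_tr = 0.7 * np.sin(2 * np.pi * f0 * t)            # 90-degree lag, true H/V = 0.7
        e_tr = 0.01 * np.random.default_rng(0).normal(size=t.size)
        print(hv_from_window(z_tr, n_tr, e_tr, fs, f0))    # ~(0.7, phase near +/-90)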

  4. VLSI System Implementation of 200 MHz, 8-bit, 90nm CMOS Arithmetic and Logic Unit (ALU) Processor Controller

    Directory of Open Access Journals (Sweden)

    Fazal NOORBASHA

    2012-08-01

    Full Text Available The present study covers the Very Large Scale Integration (VLSI) system implementation of a 200 MHz, 8-bit, 90 nm Complementary Metal Oxide Semiconductor (CMOS) Arithmetic and Logic Unit (ALU) processor controller with a logic-gate design style and 0.12 µm six-metal 90 nm CMOS fabrication technology. The system blocks and the behaviour are defined and the logical design is implemented at gate level in the design phase. Then, the logic circuits are simulated and the subunits are converted into a 90 nm CMOS layout. Finally, in order to construct the VLSI system, these units are placed in the floor plan and simulated with analog and digital, logic- and switch-level simulators. The results of the simulations indicate that the VLSI system can control different instructions, which can be divided into subgroups: transfer instructions, arithmetic and logic instructions, rotate and shift instructions, branch instructions, input/output instructions, and control instructions. The data bus of the system is 16 bits wide. It runs at 200 MHz, and the supply voltage is 1.2 V. In this paper, the parametric analysis of the system, the design steps and the obtained results are explained.

  5. Wideband aperture array using RF channelizers and massively parallel digital 2D IIR filterbank

    Science.gov (United States)

    Sengupta, Arindam; Madanayake, Arjuna; Gómez-García, Roberto; Engeberg, Erik D.

    2014-05-01

    Wideband receive-mode beamforming applications in wireless location, electronically-scanned antennas for radar, RF sensing, microwave imaging and wireless communications require digital aperture arrays that offer a relatively constant far-field beam over several octaves of bandwidth. Several beamforming schemes, including the well-known true time-delay and phased-array beamformers, have been realized using either finite impulse response (FIR) or fast Fourier transform (FFT) digital filter-sum based techniques. These beamforming algorithms offer the desired selectivity at the cost of high computational complexity and frequency-dependent far-field array patterns. A novel approach to receiver beamforming is the use of massively parallel 2-D infinite impulse response (IIR) fan filterbanks for the synthesis of relatively frequency-independent RF beams at an order of magnitude lower multiplier complexity compared to FFT or FIR filter based conventional algorithms. The 2-D IIR filterbanks demand fast digital processing that can support several octaves of RF bandwidth, and fast analog-to-digital converters (ADCs) for RF-to-bits type direct conversion of wideband antenna element signals. Fast digital implementation platforms that can realize the high-precision recursive filter structures necessary for real-time beamforming, at RF radio bandwidths, are also desired. We propose a novel technique that combines a passive RF channelizer, multichannel ADC technology, and single-phase massively parallel 2-D IIR digital fan filterbanks, realized at low complexity using FPGA and/or ASIC technology. There exists native support for a larger bandwidth than the maximum clock frequency of the digital implementation technology. We also strive to achieve More-than-Moore throughput by processing a wideband RF signal having content with N-fold (B = N Fclk/2) bandwidth compared to the maximum clock frequency Fclk Hz of the digital VLSI platform under consideration. Such increase in bandwidth is

  6. Fabrication of micro-dot arrays and micro-walls of acrylic acid/melamine resin on aluminum by AFM probe processing and electrophoretic coating

    International Nuclear Information System (INIS)

    Kurokawa, S.; Kikuchi, T.; Sakairi, M.; Takahashi, H.

    2008-01-01

    Micro-dot arrays and micro-walls of acrylic acid/melamine resin were fabricated on aluminum by anodizing, atomic force microscope (AFM) probe processing, and electrophoretic deposition. Barrier type anodic oxide films of 15 nm thickness were formed on aluminum and then the specimen was scratched with an AFM probe in a solution containing acrylic acid/melamine resin nano-particles to remove the anodic oxide film locally. After scratching, the specimen was anodically polarized to deposit acrylic acid/melamine resin electrophoretically at the film-removed area. The resin deposited on the specimen was finally cured by heating. It was found that scratching with the AFM probe on open circuit leads to the contamination of the probe with resin, due to positive shifts in the potential during scratching. Scratching of the specimen under potentiostatic conditions at -1.0 V, however, resulted in successful resin deposition at the film-removed area without probe contamination. The rate of resin deposition increased as the specimen potential becomes more positive during electrophoretic deposition. Arrays of resin dots with a few to several tens μm diameter and 100-1000 nm height, and resin walls with 100-1000 nm height and 1 μm width were obtained on specimens by successive anodizing, probe processing, and electrophoretic deposition

  7. Fabrication of micro-dot arrays and micro-walls of acrylic acid/melamine resin on aluminum by AFM probe processing and electrophoretic coating

    Energy Technology Data Exchange (ETDEWEB)

    Kurokawa, S.; Kikuchi, T.; Sakairi, M. [Graduate School of Engineering, Hokkaido University, N-13, W-8, Kita-Ku, Sapporo 060-8628 (Japan); Takahashi, H. [Graduate School of Engineering, Hokkaido University, N-13, W-8, Kita-Ku, Sapporo 060-8628 (Japan)], E-mail: takahasi@elechem1-mc.eng.hokudai.ac.jp

    2008-11-30

    Micro-dot arrays and micro-walls of acrylic acid/melamine resin were fabricated on aluminum by anodizing, atomic force microscope (AFM) probe processing, and electrophoretic deposition. Barrier type anodic oxide films of 15 nm thickness were formed on aluminum and then the specimen was scratched with an AFM probe in a solution containing acrylic acid/melamine resin nano-particles to remove the anodic oxide film locally. After scratching, the specimen was anodically polarized to deposit acrylic acid/melamine resin electrophoretically at the film-removed area. The resin deposited on the specimen was finally cured by heating. It was found that scratching with the AFM probe on open circuit leads to the contamination of the probe with resin, due to positive shifts in the potential during scratching. Scratching of the specimen under potentiostatic conditions at -1.0 V, however, resulted in successful resin deposition at the film-removed area without probe contamination. The rate of resin deposition increased as the specimen potential becomes more positive during electrophoretic deposition. Arrays of resin dots with a few to several tens μm diameter and 100-1000 nm height, and resin walls with 100-1000 nm height and 1 μm width were obtained on specimens by successive anodizing, probe processing, and electrophoretic deposition.

  8. Filter arrays

    Science.gov (United States)

    Page, Ralph H.; Doty, Patrick F.

    2017-08-01

    The various technologies presented herein relate to a tiled filter array that can be used in connection with performance of spatial sampling of optical signals. The filter array comprises filter tiles, wherein a first plurality of filter tiles are formed from a first material, the first material being configured such that only photons having wavelengths in a first wavelength band pass therethrough. A second plurality of filter tiles is formed from a second material, the second material being configured such that only photons having wavelengths in a second wavelength band pass therethrough. The first plurality of filter tiles and the second plurality of filter tiles can be interspersed to form the filter array comprising an alternating arrangement of first filter tiles and second filter tiles.
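
    The patent abstract above describes interspersing two sets of band-pass tiles into one array. A minimal Python sketch of one possible interspersion is given below; a simple checkerboard is assumed purely for illustration and is not necessarily the layout used in the patent.

      import numpy as np

      def checkerboard_filter_array(rows, cols):
          """Return a rows x cols array of tile labels, alternating between
          material 1 (first wavelength band) and material 2 (second band).
          A checkerboard is only one possible interspersion pattern."""
          r, c = np.indices((rows, cols))
          return np.where((r + c) % 2 == 0, 1, 2)

      if __name__ == "__main__":
          print(checkerboard_filter_array(4, 6))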

  9. Functional response of osteoblasts in functionally gradient titanium alloy mesh arrays processed by 3D additive manufacturing.

    Science.gov (United States)

    Nune, K C; Kumar, A; Misra, R D K; Li, S J; Hao, Y L; Yang, R

    2017-02-01

    We elucidate here the osteoblast functions and cellular activity in the 3D printed interconnected porous architecture of functionally gradient Ti-6Al-4V alloy mesh structures in terms of cell proliferation and growth, distribution of cell nuclei, synthesis of proteins (actin, vinculin, and fibronectin), and calcium deposition. Cell culture studies with pre-osteoblasts indicated that the interconnected porous architecture of functionally gradient mesh arrays was conducive to osteoblast functions. However, there were statistically significant differences in the cellular response depending on the pore size in the functionally gradient structure. The interconnected porous architecture contributed to the distribution of cells from the large pore size (G1) to the small pore size (G3), with consequent synthesis of extracellular matrix and calcium precipitation. The gradient mesh structure significantly impacted cell adhesion and influenced the proliferation stage, such that there was a high density of cells on the struts of the gradient mesh structure. Actin and vinculin showed a significant difference in normalized expression level of protein per cell, which was absent in the case of fibronectin. Osteoblasts present on mesh struts formed a confluent sheet, bridging the pores through numerous cytoplasmic extensions. The gradient mesh structure fabricated by electron beam melting was explored to obtain fundamental insights on cellular activity with respect to osteoblast functions. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Supercritical CO2 extraction of candlenut oil: process optimization using Taguchi orthogonal array and physicochemical properties of the oil.

    Science.gov (United States)

    Subroto, Erna; Widjojokusumo, Edward; Veriansyah, Bambang; Tjandrawinata, Raymond R

    2017-04-01

    A series of experiments was conducted to determine optimum conditions for supercritical carbon dioxide extraction of candlenut oil. A Taguchi experimental design with an L9 orthogonal array (four factors at three levels) was employed to evaluate the effects of pressure of 25-35 MPa, temperature of 40-60 °C, CO2 flow rate of 10-20 g/min and particle size of 0.3-0.8 mm on oil solubility. The results showed that increases in particle size, pressure and temperature improved the oil solubility. Supercritical carbon dioxide extraction at the optimized parameters resulted in an oil extraction yield of 61.4% at a solubility of 9.6 g oil/kg CO2. The candlenut oil obtained from supercritical carbon dioxide extraction has better quality than oil extracted by Soxhlet extraction using n-hexane. The oil contains a high proportion of unsaturated fatty acids (linoleic acid and linolenic acid), which have many beneficial effects on human health.
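
    For reference, the standard Taguchi L9(3^4) orthogonal array used in such a design assigns four factors at three levels to nine runs. The Python sketch below maps the factor ranges quoted in the abstract onto level indices; the mid-level values (30 MPa, 50 °C, 15 g/min, 0.55 mm) are interpolated assumptions, since the abstract gives only the ranges.

      # Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels (0, 1, 2).
      L9 = [
          (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
          (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
          (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
      ]

      # Factor levels; the middle value of each range is an assumption.
      levels = {
          "pressure_MPa":   (25, 30, 35),
          "temperature_C":  (40, 50, 60),
          "co2_flow_g_min": (10, 15, 20),
          "particle_mm":    (0.3, 0.55, 0.8),
      }

      factors = list(levels)
      for run, row in enumerate(L9, start=1):
          settings = {f: levels[f][lvl] for f, lvl in zip(factors, row)}
          print(f"run {run}: {settings}")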

  11. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    Science.gov (United States)

    Goldman, H.; Wolf, M.

    1979-01-01

    Analyses of slicing processes and junction formation processes are presented. A simple method for evaluating the relative economic merits of competing process options with respect to the cost of energy produced by the system is described. An energy consumption analysis was developed and applied to determine the energy consumption in the solar module fabrication process sequence, from the mining of the SiO2 to shipping. The analysis shows that current technology practice involves inordinate energy use in the purification step and large wastage of the invested energy through losses, particularly poor conversion in slicing, as well as inadequate yields throughout. The cell process energy expenditures already show a downward trend based on increased throughput rates. The large improvement, however, depends on the introduction of a more efficient purification process and of acceptable ribbon growing techniques.

  12. Programmable cellular arrays. Faults testing and correcting in cellular arrays

    International Nuclear Information System (INIS)

    Cercel, L.

    1978-03-01

    A review of recent research on programmable cellular arrays in computing and digital information processing systems is presented. It covers both combinational and sequential arrays, with fully arbitrary behaviour or providing better implementations of specialized blocks such as arithmetic units, counters, comparators, control systems, memory blocks, etc. The paper also presents applications of cellular arrays in microprogramming, in the implementation of a specialized computer for matrix operations, and in the modeling of universal computing systems. The last section deals with problems of fault testing and correction in cellular arrays. (author)

  13. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    Science.gov (United States)

    Wolf, M.

    1981-01-01

    The effect of solar cell metallization pattern design on solar cell performance and the costs and performance effects of different metallization processes are discussed. Definitive design rules for the front metallization pattern for large area solar cells are presented. Chemical and physical deposition processes for metallization are described and compared. An economic evaluation of the 6 principal metallization options is presented. Instructions for preparing Format A cost data for solar cell manufacturing processes from UPPC forms for input into the SAMIC computer program are presented.

  14. Low Cost Solar Array Project. Feasibility of the silane process for producing semiconductor-grade silicon. Final report, October 1975-March 1979

    Energy Technology Data Exchange (ETDEWEB)

    1979-06-01

    The commercial production of low-cost semiconductor-grade silicon is an essential requirement of the JPL/DOE (Department of Energy) Low-Cost Solar Array (LSA) Project. A 1000-metric-ton-per-year commercial facility using the Union Carbide Silane Process will produce molten silicon for an estimated price of $7.56/kg (1975 dollars, private financing), meeting the DOE goal of less than $10/kg. Conclusions and technology status are reported for both contract phases, which had the following objectives: (1) establish the feasibility of Union Carbide's Silane Process for commercial application, and (2) develop an integrated process design for an Experimental Process System Development Unit (EPSDU) and a commercial facility, and estimate the corresponding commercial plant economic performance. To assemble the facility design, the following work was performed: (a) collection of Union Carbide's applicable background technology; (b) design, assembly, and operation of a small integrated silane-producing Process Development Unit (PDU); (c) analysis, testing, and comparison of two high-temperature methods for converting pure silane to silicon metal; and (d) determination of chemical reaction equilibria and kinetics, and vapor-liquid equilibria for chlorosilanes.

  15. Analysis and evaluation of process and equipment in tasks 2 and 4 of the Low Cost Solar Array project

    Science.gov (United States)

    Goldman, H.; Wolf, M.

    1978-01-01

    Several experimental and projected Czochralski crystal growing process methods were studied and compared to available operations and cost-data of recent production Cz-pulling, in order to elucidate the role of the dominant cost contributing factors. From this analysis, it becomes apparent that substantial cost reductions can be realized from technical advancements which fall into four categories: an increase in furnace productivity; the reduction of crucible cost through use of the crucible for the equivalent of multiple state-of-the-art crystals; the combined effect of several smaller technical improvements; and a carry over effect of the expected availability of semiconductor grade polysilicon at greatly reduced prices. A format for techno-economic analysis of solar cell production processes was developed, called the University of Pennsylvania Process Characterization (UPPC) format. The accumulated Cz process data are presented.

  16. Process research of non-cz silicon material. Low cost solar array project, cell and module formation research area

    Science.gov (United States)

    1982-01-01

    Liquid diffusion masks and liquid applied dopants to replace the CVD Silox masking and gaseous diffusion operations specified for forming junctions in the Westinghouse baseline process sequence for producing solar cells from dendritic web silicon were investigated.

  17. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Directory of Open Access Journals (Sweden)

    Ying-Lun Chen

    2015-08-01

    Full Text Available A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on the nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.
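
    The architecture above couples NEO-based spike detection with GHA-based feature extraction. A compact NumPy sketch of both operators is given below as a software reference only; it is not the shared-core hardware described in the record, and the threshold, learning rate, and dimensions are arbitrary.

      import numpy as np

      def neo(x):
          """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
          psi = np.zeros_like(x)
          psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
          return psi

      def gha_update(W, spike, lr=0.01):
          """One generalized Hebbian algorithm (Sanger's rule) step:
          dW = lr * (y x^T - tril(y y^T) W), with y = W x."""
          y = W @ spike
          return W + lr * (np.outer(y, spike) - np.tril(np.outer(y, y)) @ W)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          x = rng.standard_normal(1000)            # stand-in for a filtered channel
          detections = np.where(neo(x) > 5.0)[0]   # arbitrary detection threshold
          W = rng.standard_normal((2, 32)) * 0.1   # 2 components, 32-sample spikes
          for _ in range(100):
              W = gha_update(W, rng.standard_normal(32))
          print(len(detections), W.shape)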

  18. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  19. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
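
    The feature extraction described in this record splits each detected spike at its peak and uses the area of each portion as a feature. A minimal NumPy sketch of that idea follows as a software reference; it does not reproduce the segment-merging hardware datapath of the paper, and the toy waveform is invented.

      import numpy as np

      def peak_area_features(spike):
          """Split a spike waveform at its peak sample and return the area
          (sum of absolute amplitudes) of the two portions as a 2-D feature."""
          spike = np.asarray(spike, dtype=float)
          k = int(np.argmax(np.abs(spike)))        # peak location
          left = np.sum(np.abs(spike[:k + 1]))     # area up to and including the peak
          right = np.sum(np.abs(spike[k + 1:]))    # area after the peak
          return np.array([left, right])

      if __name__ == "__main__":
          t = np.arange(32)
          spike = np.exp(-((t - 10.0) ** 2) / 8.0)  # toy spike peaking at sample 10
          print(peak_area_features(spike))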

  20. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Science.gov (United States)

    McEwan, Alistair; van Schaik, André

    2003-12-01

    The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  1. Robust working memory in an asynchronously spiking neural network realized in neuromorphic VLSI

    Directory of Open Access Journals (Sweden)

    Massimiliano eGiulioni

    2012-02-01

    Full Text Available We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of ‘high’ and ‘low’-firing activity. Depending on the overall excitability, transitions to the ‘high’ state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the ‘high’ state retains a working memory of a stimulus until well after its release. In the latter case, ‘high’ states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated corrupted ‘high’ states comprising neurons of both excitatory populations. Within a basin of attraction, the network dynamics corrects such states and re-establishes the prototypical ‘high’ state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  2. Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI.

    Science.gov (United States)

    Giulioni, Massimiliano; Camilleri, Patrick; Mattia, Maurizio; Dante, Vittorio; Braun, Jochen; Del Giudice, Paolo

    2011-01-01

    We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of "high" and "low"-firing activity. Depending on the overall excitability, transitions to the "high" state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the "high" state retains a "working memory" of a stimulus until well after its release. In the latter case, "high" states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated "corrupted" "high" states comprising neurons of both excitatory populations. Within a "basin of attraction," the network dynamics "corrects" such states and re-establishes the prototypical "high" state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.
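
    The attractor behaviour described above (a self-excited population that stays "high" after a transient stimulus) can be caricatured with a one-dimensional firing-rate model. The Python sketch below is such a caricature with arbitrary parameters; it is not the spiking LIF network implemented on the chip.

      import numpy as np

      def sigmoid(x, theta=4.0, beta=0.7):
          return 1.0 / (1.0 + np.exp(-(x - theta) / beta))

      def simulate(w=8.0, tau=0.02, dt=1e-3, t_end=2.0, stim=(0.5, 0.7, 3.0)):
          """Euler-integrate dr/dt = (-r + f(w*r + I)) / tau.
          A transient input pulse (stim = onset, offset, amplitude) switches the
          population from the low to the high attractor, which then persists."""
          n = int(t_end / dt)
          r = 0.0
          trace = np.empty(n)
          for i in range(n):
              t = i * dt
              I = stim[2] if stim[0] <= t < stim[1] else 0.0
              r += dt * (-r + sigmoid(w * r + I)) / tau
              trace[i] = r
          return trace

      if __name__ == "__main__":
          trace = simulate()
          print(f"rate before stimulus: {trace[400]:.3f}")        # near the low state
          print(f"rate after stimulus removed: {trace[-1]:.3f}")  # stays near the high state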

  3. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI.

    Science.gov (United States)

    Yu, T; Sejnowski, T J; Cauwenberghs, G

    2011-10-01

    We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.

  4. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Directory of Open Access Journals (Sweden)

    Alistair McEwan

    2003-06-01

    Full Text Available The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  5. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    Science.gov (United States)

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3mm x 3mm microchip consumes 1.29 mW power making it promising for applications including neuromorphic modeling and neural prostheses.

  6. Implementation of neuromorphic systems: from discrete components to analog VLSI chips (testing and communication issues).

    Science.gov (United States)

    Dante, V; Del Giudice, P; Mattia, M

    2001-01-01

    We review a series of implementations of electronic devices aiming at imitating to some extent structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a special purpose, asynchronous bus for communication among neuromorphic chips; a short and preliminary account of an application-oriented device, taking advantage of the above communication infrastructure.

  7. CASTOR a VLSI CMOS mixed analog-digital circuit for low noise multichannel counting applications

    International Nuclear Information System (INIS)

    Comes, G.; Loddo, F.; Hu, Y.; Kaplon, J.; Ly, F.; Turchetta, R.; Bonvicini, V.; Vacchi, A.

    1996-01-01

    In this paper we present the design and first experimental results of a VLSI mixed analog-digital 1.2 µm CMOS circuit (CASTOR) for multichannel radiation detector applications demanding low noise amplification and counting of radiation pulses. This circuit is meant to be connected to pixel-like detectors. Imaging can be obtained by counting the number of hits in each pixel during a user-controlled exposure time. Each channel of the circuit features an analog and a digital part. In the former, a charge preamplifier is followed by a CR-RC shaper with an output buffer and a threshold discriminator. In the digital part, a 16-bit counter is present together with some control logic. The readout of the counters is done serially on a common tri-state output. Daisy-chaining is possible. A 4-channel prototype has been built. This prototype has been optimised for use in the digital radiography Syrmep experiment at the Elettra synchrotron machine in Trieste (Italy); its main design parameters are: shaping time of about 850 ns, gain of 190 mV/fC and ENC (e⁻ rms) = 60 + 17·C (pF). The counting rate per channel, limited by the analog part, can be as high as about 200 kHz. Characterisation of the circuit and first tests with silicon microstrip detectors are presented. They show the circuit works according to design specification and can be used for imaging applications. (orig.)

  8. A novel VLSI processor for high-rate, high resolution spectroscopy

    CERN Document Server

    Pullia, Antonio; Gatti, E; Longoni, A; Buttler, W

    2000-01-01

    A novel time-variant VLSI shaper amplifier, suitable for multi-anode Silicon Drift Detectors or other multi-element solid-state X-ray detection systems, is proposed. The new read-out scheme has been conceived for demanding applications with synchrotron light sources, such as X-ray holography or EXAFS, where both high count-rates and high-energy resolutions are required. The circuit is of the linear time-variant class, accepts randomly distributed events and features: a finite-width (1-10 mu s) quasi-optimal weight function, an ultra-low-level energy discrimination (approx 150 eV), and a full compatibility for monolithic integration in CMOS technology. Its impulse response has a staircase-like shape, but the weight function (which is in general different from the impulse response in time-variant systems) is quasi trapezoidal. The operation principles of the new scheme as well as the first experimental results obtained with a prototype of the circuit are presented and discussed in the work.

  9. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Full Text Available Data compression plays a vital role in multimedia devices to present information in a succinct form. Initially, the DCT structure was used for image compression, as it has lower complexity and is area efficient. Similarly, the 2D DCT also provides reasonable data compression, but from an implementation standpoint it calls for more multipliers and adders, thus leading to larger area and higher power consumption. Taking account of all this, this paper deals with a VLSI architecture for image compression using a ROM-free DA based DCT (Discrete Cosine Transform) structure. This technique provides high throughput and is most suitable for real-time implementation. In order to achieve this, the image matrix is subdivided into odd and even terms and the multiplication functions are replaced by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-wise image quality, which determines new trade-off levels as compared to previous techniques. Overall, the proposed architecture yields reduced memory, low power consumption and high throughput. MATLAB is used for reading the input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II to synthesize and obtain details about power and area.

  10. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of a code-based cryptography (cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An error control algorithm is employed for both the generation of round keys and the diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. The probability of retrieving the original key from any other round key is reduced by diffusing nonlinear selective bit inversions on the round keys. Randomized selective bit inversions are done on equal lengths of key bits by a Round Constant Feedback Shift Register within the error correction limits of the chosen code. The complexity of retrieving the original key from any other round key is increased by optimal hardware usage. The proposed design is simulated and synthesized using VHDL coding for a Spartan3E FPGA and the results are shown. A comparative analysis is made between 128-bit Advanced Encryption Standard round keys and the proposed round keys to show the security strength of the proposed algorithm. This paper concludes that the chip-based multi-layer key distribution of the proposed algorithm is an enhanced solution to the existing threats on cryptography algorithms.

  11. A Single Chip VLSI Implementation of a QPSK/SQPSK Demodulator for a VSAT Receiver Station

    Science.gov (United States)

    Kwatra, S. C.; King, Brent

    1995-01-01

    This thesis presents a VLSI implementation of a QPSK/SQPSK demodulator. It is designed to be employed in a VSAT earth station that utilizes the FDMA/TDM link. A single chip architecture is used to enable this chip to be easily employed in the VSAT system. This demodulator contains lowpass filters, integrate and dump units, unique word detectors, a timing recovery unit, a phase recovery unit and a down conversion unit. The design stages start with a functional representation of the system by using the C programming language. Then it progresses into a register based representation using the VHDL language. The layout components are designed based on these VHDL models and simulated. Component generators are developed for the adder, multiplier, read-only memory and serial access memory in order to shorten the design time. These sub-components are then block routed to form the main components of the system. The main components are block routed to form the final demodulator.

  12. Digital VLSI design with Verilog a textbook from Silicon Valley Polytechnic Institute

    CERN Document Server

    Williams, John Michael

    2014-01-01

    This book is structured as a step-by-step course of study along the lines of a VLSI integrated circuit design project.  The entire Verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer-deserializer, including synthesizable PLLs.  The author includes everything an engineer needs for in-depth understanding of the Verilog language:  Syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided in the downloadable files that accompany the book.  For readers with access to appropriate electronic design tools, all solutions can be developed, simulated, and synthesized as described in the book.   A partial list of design topics includes design partitioning, hierarchy decomposition, safe coding styles, back annotation, wrapper modules, concurrency, race conditions, assertion-based verification, clock synchronization, and design for test.   A concluding presentation of special topics inclu...

  13. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In the optimization of VLSI physical design, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find optimal solutions for physical design components such as partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits over these components of physical design using a hierarchical approach based on evolutionary algorithms. The goals of minimizing the delay in partitioning, the silicon area in floorplanning, the layout area in placement, and the wirelength in routing also influence other criteria such as power, clock, speed, cost, and so forth. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycles, is applied in each of these phases to minimize area and interconnect length. This approach combines a genetic algorithm with simulated annealing in a hierarchical design to attain the objective. The hybrid approach can quickly produce optimal solutions for the popular benchmarks.
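
    As a toy illustration of the hybrid (memetic) scheme this abstract describes, a genetic algorithm with a simulated-annealing-style local search inside each generation, the Python sketch below minimizes total wirelength for a tiny linear placement instance. The problem size, cost model, and parameters are invented for the example and are unrelated to the cited benchmarks.

      import math
      import random

      # Toy placement: assign 8 modules to 8 slots on a line; cost is total
      # wirelength over a few two-pin nets. All numbers are illustrative.
      NETS = [(0, 3), (1, 2), (2, 5), (4, 7), (5, 6), (0, 7)]

      def wirelength(perm):
          pos = {m: s for s, m in enumerate(perm)}
          return sum(abs(pos[a] - pos[b]) for a, b in NETS)

      def crossover(p1, p2):
          """Order crossover for permutations."""
          a, b = sorted(random.sample(range(len(p1)), 2))
          child = [None] * len(p1)
          child[a:b] = p1[a:b]
          rest = [m for m in p2 if m not in child]
          for i in range(len(child)):
              if child[i] is None:
                  child[i] = rest.pop(0)
          return child

      def local_search(perm, temp=2.0, steps=30):
          """Simulated-annealing-style pairwise swaps (the 'memetic' step)."""
          best = list(perm)
          for _ in range(steps):
              i, j = random.sample(range(len(best)), 2)
              cand = list(best)
              cand[i], cand[j] = cand[j], cand[i]
              delta = wirelength(cand) - wirelength(best)
              if delta < 0 or random.random() < math.exp(-delta / temp):
                  best = cand
              temp *= 0.95
          return best

      def hybrid_ga(pop_size=20, generations=40):
          pop = [random.sample(range(8), 8) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=wirelength)
              parents = pop[: pop_size // 2]
              children = [local_search(crossover(random.choice(parents),
                                                 random.choice(parents)))
                          for _ in range(pop_size - len(parents))]
              pop = parents + children
          return min(pop, key=wirelength)

      if __name__ == "__main__":
          random.seed(1)
          best = hybrid_ga()
          print(best, wirelength(best))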

  14. Design of 10Gbps optical encoder/decoder structure for FE-OCDMA system using SOA and opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Hwang, Seow; Alameh, Kamal

    2008-01-21

    In this paper we propose and experimentally demonstrate a reconfigurable 10Gbps frequency-encoded (1D) encoder/decoder structure for optical code division multiple access (OCDMA). The encoder is constructed using a single semiconductor optical amplifier (SOA) and 1D reflective Opto-VLSI processor. The SOA generates broadband amplified spontaneous emission that is dynamically sliced using digital phase holograms loaded onto the Opto-VLSI processor to generate 1D codewords. The selected wavelengths are injected back into the same SOA for amplifications. The decoder is constructed using single Opto-VLSI processor only. The encoded signal can successfully be retrieved at the decoder side only when the digital phase holograms of the encoder and the decoder are matched. The system performance is measured in terms of the auto-correlation and cross-correlation functions as well as the eye diagram.

  15. Tomographic array

    International Nuclear Information System (INIS)

    1976-01-01

    The configuration of a tomographic array in which the object can rotate about its axis is described. The X-ray detector is a cylindrical screen perpendicular to the axis of rotation. The X-ray source has a line-shaped focus coinciding with the axis of rotation. The beam is fan-shaped with one side of this fan lying along the axis of rotation. The detector screen is placed inside an X-ray image multiplier tube

  16. Tomographic array

    International Nuclear Information System (INIS)

    1976-01-01

    A tomographic array with the following characteristics is described. An X-ray screen serving as detector is placed before a photomultiplier tube which itself is placed in front of a television camera connected to a set of image processors. The detector is concave towards the source and is replacable. Different images of the object are obtained simultaneously. Optical fibers and lenses are used for transmission within the system

  17. Low cost solar array project cell and module formation research area: Process research of non-CZ silicon material

    Science.gov (United States)

    1981-01-01

    Liquid diffusion masks and liquid applied dopants to replace the CVD Silox masking and gaseous diffusion operations specified for forming junctions in the Westinghouse baseline process sequence for producing solar cells from dendritic web silicon were investigated. The baseline diffusion masking and drive processes were compared with those involving direct liquid applications to the dendritic web silicon strips. Attempts were made to control the number of variables by subjecting dendritic web strips cut from a single web crystal to both types of operations. Data generated reinforced earlier conclusions that efficiency levels at least as high as those achieved with the baseline back junction formation process can be achieved using liquid diffusion masks and liquid dopants. The deliveries of dendritic web sheet material and solar cells specified by the current contract were made as scheduled.

  18. Effect of thermal implying during ageing process of nanorods growth on the properties of zinc oxide nanorod arrays

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, A. S., E-mail: kyrin-samaxi@yahoo.com; Mamat, M. H., E-mail: mhmamat@salam.uitm.edu.my; Rusop, M., E-mail: rusop@salam.uitm.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); NANO-SciTech Centre (NST), Institute of Science (IOS), Universiti Teknologi MARA - UiTM, 40450 Shah Alam, Selangor (Malaysia); Malek, M. F., E-mail: firz-solarzelle@yahoo.com; Abdullah, M. A. R., E-mail: ameerridhwan89@gmail.com; Sin, M. D., E-mail: diyana0366@johor.uitm.edu.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia)

    2016-07-06

    Undoped and Sn-doped zinc oxide (ZnO) nanostructures have been fabricated using a simple sol-gel immersion method at a growth temperature of 95°C. Heat from a hot plate stirrer was supplied to the solution during the ageing process of nanorod growth. The results showed a significant decrease in the quality of the layer produced after the immersion process, where the conductivity and porosity of the samples were reduced significantly due to the applied heat. The structural properties of the samples have been characterized using field emission scanning electron microscopy (FESEM), and the electrical properties have been characterized using current-voltage (I-V) measurements.

  19. Development of a Post-Processing Algorithm for Accurate Human Skull Profile Extraction via Ultrasonic Phased Arrays

    Science.gov (United States)

    Al-Ansary, Mariam Luay Y.

    Ultrasound imaging has been favored by clinicians for its safety, affordability, accessibility, and speed compared to other imaging modalities. However, the trade-offs to these benefits are a relatively lower image quality and interpretability, which can be addressed by, for example, post-processing methods. One particularly difficult imaging case is associated with the presence of a barrier, such as a human skull, with significantly different acoustical properties than the brain tissue as the target medium. Some methods were proposed in the literature to account for this structure if the skull's geometry is known. Measuring the skull's geometry is therefore an important task that requires attention. In this work, a new edge detection method for accurate human skull profile extraction via post-processing of ultrasonic A-Scans is introduced. This method, referred to as the Selective Echo Extraction algorithm, SEE, processes each A-Scan separately and determines the outermost and innermost boundaries of the skull by means of adaptive filtering. The method can also be used to determine the average attenuation coefficient of the skull. When applied to simulated B-Mode images of the skull profile, promising results were obtained. The profiles obtained from the proposed process in simulations were found to be within 0.15λ ± 0.11λ, or 0.09 ± 0.07 mm, from the actual profiles. Experiments were also performed to test SEE on skull-mimicking phantoms with major acoustical properties similar to those of the actual human skull. With experimental data, the profiles obtained with the proposed process were within 0.32λ ± 0.25λ, or 0.19 ± 0.15 mm, from the actual profile.

  20. Low cost solar array project. Cell and module formation research area. Process research of non-CZ silicon material

    Science.gov (United States)

    1983-01-01

    Liquid diffusion masks and liquid dopants to replace the more expensive CVD SiO2 mask and gaseous diffusion processes were investigated. Silicon pellets were prepared in the silicon shot tower, and solar cells were fabricated using web growth in which the pellets were used as a replenishment material. Verification runs were made using the boron dopant and liquid diffusion mask materials. The average efficiency of cells produced in these runs was 13%. The relationship of sheet resistivity, temperature, gas flows, and gas composition for the diffusion of the P-8 liquid phosphorus solution was investigated. Solar cells processed from web grown from Si shot material were evaluated, and the results qualified the use of the material produced in the shot tower for web furnace feed stock.

  1. Silicon materials task of the Low Cost Solar Array Project: Effect of impurities and processing on silicon solar cells

    Science.gov (United States)

    Hopkins, R. H.; Davis, J. R.; Rohatgi, A.; Hanes, M. H.; Rai-Choudhury, P.; Mollenkopf, H. C.

    1982-01-01

    The effects of impurities and processing on the characteristics of silicon and terrestrial silicon solar cells were defined in order to develop cost-benefit relationships for the use of cheaper, less pure solar grades of silicon. The concentrations of commonly encountered impurities that can be tolerated in typical p- or n-base solar cells were established; then a preliminary analytical model was developed from which the cell performance could be projected depending on the kinds and amounts of contaminants in the silicon base material. The impurity data base was expanded to include construction materials, and the impurity performance model was refined to account for additional effects such as base resistivity, grain boundary interactions, thermal processing, synergic behavior, and nonuniform impurity distributions. A preliminary assessment of the long term (aging) behavior of impurities was also undertaken.

  2. Physico-topological methods of increasing stability of the VLSI circuit components to irradiation. Fiziko-topologhicheskie sposoby uluchsheniya radiatsionnoj stojkosti komponentov BIS

    Energy Technology Data Exchange (ETDEWEB)

    Pereshenkov, V S [MIFI, Moscow, (Russian Federation); Shishianu, F S; Rusanovskij, V I [S. Lazo KPI, Chisinau, (Moldova, Republic of)

    1992-01-01

    The paper presents the method used and the experimental results obtained for an 8-bit microprocessor irradiated with γ-rays and neutrons. The correlation between the electrical and technological parameters and the irradiation parameters is revealed. The influence of leakage current between devices incorporated in VLSI circuits was studied. The results obtained make it possible to determine the technological parameters necessary for designing circuits able to work at predetermined doses. The substrate doping concentration necessary for isolation, which eliminates the leakage current between devices and prevents VLSI circuit breakdown, was determined. (Author).

  3. Chemical analysis of raw and processed Fructus arctii by high-performance liquid chromatography/diode array detection-electrospray ionization-mass spectrometry

    Science.gov (United States)

    Qin, Kunming; Liu, Qidi; Cai, Hao; Cao, Gang; Lu, Tulin; Shen, Baojia; Shu, Yachun; Cai, Baochang

    2014-01-01

    Background: In traditional Chinese medicine (TCM), raw and processed herbs are used to treat different diseases. Fructus Arctii, the dried fruits of Arctium lappa L. (Compositae), is widely used in TCM. Stir-frying is the most common processing method, which might modify the chemical composition of Fructus Arctii. Materials and Methods: To test this hypothesis, we focused on the analysis and identification of the main chemical constituents in raw and processed Fructus Arctii (PFA) by high-performance liquid chromatography/diode array detection-electrospray ionization-mass spectrometry. Results: The results indicated that there was less arctiin in stir-fried materials than in raw materials. However, there were higher levels of arctigenin in stir-fried materials than in raw materials. Conclusion: We suggest that arctiin is reduced significantly following the thermal conversion of arctiin to arctigenin. In conclusion, this finding may shed some light on understanding the differences in the therapeutic values of raw versus PFA in TCM. PMID:25422559

  4. Modular and efficient ozone systems based on massively parallel chemical processing in microchannel plasma arrays: performance and commercialization

    Science.gov (United States)

    Kim, M.-H.; Cho, J. H.; Park, S.-J.; Eden, J. G.

    2017-08-01

    Plasmachemical systems based on the production of a specific molecule (O3) in literally thousands of microchannel plasmas simultaneously have been demonstrated, developed and engineered over the past seven years, and commercialized. At the heart of this new plasma technology is the plasma chip, a flat aluminum strip fabricated by photolithographic and wet chemical processes and comprising 24-48 channels, micromachined into nanoporous aluminum oxide, with embedded electrodes. By integrating 4-6 chips into a module, the mass output of an ozone microplasma system is scaled linearly with the number of modules operating in parallel. A 115 g/hr (2.7 kg/day) ozone system, for example, is realized by the combined output of 18 modules comprising 72 chips and 1,800 microchannels. The implications of this plasma processing architecture for scaling ozone production capability, and reducing capital and service costs when introducing redundancy into the system, are profound. In contrast to conventional ozone generator technology, microplasma systems operate reliably (albeit with reduced output) in ambient air and humidity levels up to 90%, a characteristic attributable to the water adsorption/desorption properties and electrical breakdown strength of nanoporous alumina. Extensive testing has documented chip and system lifetimes (MTBF) beyond 5,000 hours, and efficiencies >130 g/kWh when oxygen is the feedstock gas. Furthermore, the weight and volume of microplasma systems are a factor of 3-10 lower than those for conventional ozone systems of comparable output. Massively-parallel plasmachemical processing offers functionality, performance, and commercial value beyond that afforded by conventional technology, and is currently in operation in more than 30 countries worldwide.

  5. Diode temperature sensor array for measuring and controlling micro scale surface temperature

    International Nuclear Information System (INIS)

    Han, Il Young; Kim, Sung Jin

    2004-01-01

    The need for micro-scale thermal detection techniques is increasing in biology and the chemical industry, for example in thermal fingerprinting, micro PCR (Polymerase Chain Reaction), TAS, and so on. To satisfy these needs, we developed a DTSA (Diode Temperature Sensor Array) for detecting and controlling the temperature on a small surface. The DTSA is fabricated using VLSI techniques. It consists of a 32 × 32 array of diodes (1,024 diodes) for temperature detection and 8 heaters for temperature control on an 8 mm surface area. The working principle of temperature detection is that the forward voltage drop across a silicon diode is approximately proportional to the inverse of the absolute temperature of the diode. Eight heaters (1K) made of poly-silicon are added onto a silicon wafer and controlled individually to maintain a uniform temperature distribution across the DTSA. Flip-chip packaging is used for easy connection of the DTSA. The circuitry for scanning and controlling the DTSA has also been developed.
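
    The detection principle quoted above relies on the temperature dependence of a silicon diode's forward voltage. The minimal two-point calibration sketch below illustrates the conversion; the forward-voltage values and the roughly -2 mV/°C sensitivity are typical textbook numbers for silicon diodes, not measurements from the DTSA chip.

      def make_diode_thermometer(v_ref, t_ref_c, sensitivity_v_per_c=-2.0e-3):
          """Return a function converting a measured forward voltage (V) to
          temperature (deg C), assuming a locally linear V_F(T) characteristic.
          v_ref is the forward voltage measured at the known temperature t_ref_c."""
          def to_temperature(v_forward):
              return t_ref_c + (v_forward - v_ref) / sensitivity_v_per_c
          return to_temperature

      if __name__ == "__main__":
          # Calibrated at 25 deg C with V_F = 0.60 V (illustrative values).
          temp_of = make_diode_thermometer(v_ref=0.60, t_ref_c=25.0)
          print(temp_of(0.58))   # ~35 deg C: lower V_F means higher temperature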

  6. Digital VLSI systems design a design manual for implementation of projects on FPGAs and ASICs using Verilog

    CERN Document Server

    Ramachandran, S

    2007-01-01

    Digital VLSI Systems Design is written for an advanced level course using Verilog and is meant for undergraduates, graduates and research scholars of Electrical, Electronics, Embedded Systems, Computer Engineering and interdisciplinary departments such as Bio Medical, Mechanical, Information Technology, Physics, etc. It serves as a reference design manual for practicing engineers and researchers as well. Diligent freelance readers and consultants may also start using this book with ease. The book presents new material and theory as well as synthesis of recent work with complete Project Designs

  7. State-of-the-art assessment of testing and testability of custom LSI/VLSI circuits. Volume 8: Fault simulation

    Science.gov (United States)

    Breuer, M. A.; Carlan, A. J.

    1982-10-01

    Fault simulation is widely used by industry in such applications as scoring the fault coverage of test sequences and constructing fault dictionaries. For use in testing VLSI circuits, a simulator is evaluated by its accuracy, i.e., its modelling capability. To be accurate, simulators must employ multi-valued logic in order to represent unknown signal values, impedance states, and signal transitions; model circuit delays such as transport, rise/fall, and inertial delays; and cover the fault modes of interest. Of the three basic fault simulators now in use (parallel, deductive and concurrent), concurrent fault simulation appears most promising.

  8. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    Science.gov (United States)

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.
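
    The fixed-point design choice described above trades word length against processing error. The small NumPy experiment below is in that spirit: it quantizes the FFT input to a given word length and measures the resulting error against a double-precision reference. It is a generic illustration of input quantization only, not the authors' error propagation model, and the word lengths and data are arbitrary.

      import numpy as np

      def quantize(x, bits):
          """Round a signal in [-1, 1) to a signed fixed-point grid of 'bits' bits."""
          scale = 2 ** (bits - 1)
          return np.clip(np.round(x * scale), -scale, scale - 1) / scale

      def fft_quantization_snr(bits, n=16384, seed=0):
          rng = np.random.default_rng(seed)
          x = 0.5 * rng.standard_normal(n) + 0.5j * rng.standard_normal(n)
          x /= np.max(np.abs(x))                       # keep within [-1, 1)
          ref = np.fft.fft(x)
          xq = quantize(x.real, bits) + 1j * quantize(x.imag, bits)
          err = np.fft.fft(xq) - ref
          return 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2))

      if __name__ == "__main__":
          for bits in (8, 12, 16):
              print(f"{bits:2d}-bit input quantization: SNR = {fft_quantization_snr(bits):.1f} dB")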

  9. Proteolytic processing of the cilium adhesin MHJ_0194 (P123J) in Mycoplasma hyopneumoniae generates a functionally diverse array of cleavage fragments that bind multiple host molecules.

    Science.gov (United States)

    Raymond, Benjamin B A; Jenkins, Cheryl; Seymour, Lisa M; Tacchi, Jessica L; Widjaja, Michael; Jarocki, Veronica M; Deutscher, Ania T; Turnbull, Lynne; Whitchurch, Cynthia B; Padula, Matthew P; Djordjevic, Steven P

    2015-03-01

    Mycoplasma hyopneumoniae, the aetiological agent of porcine enzootic pneumonia, regulates the presentation of proteins on its cell surface via endoproteolysis, including those of the cilial adhesin P123 (MHJ_0194). These proteolytic cleavage events create functional adhesins that bind to proteoglycans and glycoproteins on the surface of ciliated and non-ciliated epithelial cells and to the circulatory host molecule plasminogen. Two dominant cleavage events of the P123 preprotein have been previously characterized; however, immunoblotting studies suggest that more complex processing events occur. These extensive processing events are characterized here. The functional significance of the P97 cleavage fragments is also poorly understood. Affinity chromatography using heparin, fibronectin and plasminogen as bait and peptide arrays were used to expand our knowledge of the adhesive capabilities of P123 cleavage fragments and characterize a novel binding motif in the C-terminus of P123. Further, we use immunohistochemistry to examine in vivo, the biological significance of interactions between M. hyopneumoniae and fibronectin and show that M. hyopneumoniae induces fibronectin deposition at the site of infection on the ciliated epithelium. Our data supports the hypothesis that M. hyopneumoniae possesses the molecular machinery to influence key molecular communication pathways in host cells. © 2014 John Wiley & Sons Ltd.

  10. Image processing with cellular nonlinear networks implemented on field-programmable gate arrays for real-time applications in nuclear fusion

    International Nuclear Information System (INIS)

    Palazzo, S.; Vagliasindi, G.; Arena, P.; Murari, A.; Mazon, D.; De Maack, A.

    2010-01-01

    In the past years cameras have become increasingly common tools in scientific applications. They are now quite systematically used in magnetic confinement fusion, to the point that infrared imaging is starting to be used systematically for real-time machine protection in major devices. However, in order to guarantee that the control system can always react rapidly in case of critical situations, the time required for the processing of the images must be as predictable as possible. The approach described in this paper combines the new computational paradigm of cellular nonlinear networks (CNNs) with field-programmable gate arrays and has been tested in an application for the detection of hot spots on the plasma facing components in JET. The developed system is able to perform real-time hot spot recognition, by processing the image stream captured by JET wide angle infrared camera, with the guarantee that computational time is constant and deterministic. The statistical results obtained from a quite extensive set of examples show that this solution approximates very well an ad hoc serial software algorithm, with no false or missed alarms and an almost perfect overlapping of alarm intervals. The computational time can be reduced to a millisecond time scale for 8 bit 496x560-sized images. Moreover, in our implementation, the computational time, besides being deterministic, is practically independent of the number of iterations performed by the CNN - unlike software CNN implementations.
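
    For readers unfamiliar with the cellular nonlinear network (CNN) paradigm, the following sketch shows a discrete-time Euler integration of the standard Chua-Yang CNN state equation used as a brightness-threshold (hot-spot) detector on a synthetic frame. The 3x3 templates, bias, frame size, and the cnn_threshold helper are illustrative assumptions, not the templates or FPGA implementation used on JET.

```python
import numpy as np

def conv3x3(img, kernel):
    """3x3 neighbourhood weighting (cross-correlation) with zero padding."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def cnn_threshold(u, iters=60, dt=0.1, bias=-2.0):
    """Euler integration of the Chua-Yang CNN state equation
       dx/dt = -x + A*y + B*u + z, with y = 0.5*(|x+1| - |x-1|).
       With these illustrative templates the output settles to +1 on pixels
       brighter than a bias-controlled threshold and to -1 elsewhere."""
    A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])      # feedback template
    B = np.array([[0, 0, 0], [0, 4.0, 0], [0, 0, 0]])      # control (input) template
    x = np.array(u, dtype=float)
    for _ in range(iters):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        x = x + dt * (-x + conv3x3(y, A) + conv3x3(u, B) + bias)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

rng = np.random.default_rng(1)
frame = 0.4 * rng.random((64, 56))                          # stand-in for a small IR frame
frame[20:24, 30:35] = 1.0                                   # synthetic hot spot
mask = cnn_threshold(2.0 * frame - 1.0) > 0                 # map pixel values to [-1, 1] first
print("hot-spot pixels detected:", int(mask.sum()))         # expected: the 4 x 5 = 20 hot pixels
```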

  11. Microneedle array electrode for human EEG recording.

    NARCIS (Netherlands)

    Lüttge, Regina; van Nieuwkasteele-Bystrova, Svetlana Nikolajevna; van Putten, Michel Johannes Antonius Maria; Vander Sloten, Jos; Verdonck, Pascal; Nyssen, Marc; Haueisen, Jens

    2009-01-01

    Microneedle array electrodes for EEG significantly reduce the mounting time, particularly by circumvention of the need for skin preparation by scrubbing. We designed a new replication process for numerous types of microneedle arrays. Here, polymer microneedle array electrodes with 64 microneedles,

  12. Seismicity And Accretion Processes Along The Mid-Atlantic Ridge south of the Azores using data from the MARCHE Autonomous Hydrophone Array

    Science.gov (United States)

    Perrot, Julie; Cevatoglu, Melis; Cannat, Mathilde; Escartin, Javier; Maia, Marcia; Tisseau, Chantal; Dziak, Robert; Goslin, Jean

    2013-04-01

    The seismicity of the South Atlantic Ocean has been recorded by the MARCHE network of 4 autonomous underwater hydrophones (AUH) moored within the SOFAR channel on the flanks of the Mid-Atlantic Ridge (MAR). The instruments were deployed south of the Azores Plateau between 32° and 39°N from July 2005 to August 2008. The low attenuation properties of the SOFAR channel for earthquake T-wave propagation result in a detection threshold reduction from a magnitude completeness level (Mc) of ~4.3 for MAR events recorded by the land-based seismic networks to Mc=2.1 using this hydrophone array. A spatio-temporal analysis has been performed on the 5600 events recorded inside the MARCHE array. Most events are distributed along the ridge between lat. 39°N on the Azores Platform and the Rainbow (36°N) segment. In the hydrophone catalogue, acoustic magnitude (Source Level, SL) is used as a measure of earthquake size. The source level above which the data set is complete is SLc=205 dB. We look for seismic swarms using the cluster software of the SEISAN package. The criteria used are a minimum SL of 210 to detect a possible mainshock, and a radius of 30 km and a time window of 40 days after this mainshock (Cevatoglu, 2010; Goslin et al., 2012). Seven swarms with more than 15 events are identified using this approach between 32° and 39°N latitude. The maximum number of earthquakes in a swarm is 57 events. This result differs from the study of Simao et al. (2010), as we processed a further year of data and selected sequences with fewer events. Looking at the distribution of the SL as a function of time after the mainshock, we discuss the possible mechanisms of these earthquakes: tectonic events with a "mainshock-aftershock" distribution fitting a modified Omori law, or volcanic events showing more constant SL values. We also present the geophysical setting of these 7 swarms, using gravity, bathymetry, and available local geological data. This study illustrates the potential of
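
    The swarm-selection criteria quoted above (mainshock SL >= 210, 30 km radius, 40 day window, at least 15 events) can be captured in a few lines of code. The sketch below only illustrates those stated criteria; it is not the SEISAN cluster program used by the authors, and the event-dictionary format and the find_swarms helper are assumptions.

```python
import math
from datetime import datetime, timedelta

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def find_swarms(events, sl_main=210.0, radius_km=30.0, window_days=40, min_events=15):
    """Group events into swarms: any event with SL >= sl_main is a candidate
       mainshock; events within radius_km and window_days after it join its swarm."""
    events = sorted(events, key=lambda e: e["time"])
    used, swarms = set(), []
    for i, m in enumerate(events):
        if m["sl"] < sl_main or i in used:
            continue
        members = [i]
        for j, e in enumerate(events):
            if j == i or j in used:
                continue
            dt = e["time"] - m["time"]
            if timedelta(0) <= dt <= timedelta(days=window_days) and \
               distance_km(m["lat"], m["lon"], e["lat"], e["lon"]) <= radius_km:
                members.append(j)
        if len(members) >= min_events:
            used.update(members)
            swarms.append([events[k] for k in members])
    return swarms

# events: a list of dicts such as {"time": datetime(2006, 3, 1), "lat": 36.2, "lon": -33.9, "sl": 214.0}
```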

  13. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.

  14. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
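
    A simplified version of the underlying optimization can be written down directly: under a uniform-placement model, the expected number of chunks a query touches factorizes over dimensions, and a chunk shape of fixed volume can be chosen to minimize it. The brute-force search below is a sketch of that idea under these stated assumptions; it is not the steepest-descent or geometric-programming solution developed in the paper.

```python
import math
from itertools import product

def expected_chunks_1d(c, q):
    """Expected number of chunks of side c intersected by a 1-D query of length q,
       assuming the query start is uniform over the c possible offsets within a chunk."""
    return sum(math.ceil((s + q) / c) for s in range(c)) / c

def expected_chunks(chunk_shape, query_shape):
    out = 1.0
    for c, q in zip(chunk_shape, query_shape):
        out *= expected_chunks_1d(c, q)
    return out

def best_chunk_shape(query_shape, chunk_volume, candidates):
    """Brute-force search over candidate chunk sides whose product equals chunk_volume."""
    best = None
    for shape in product(candidates, repeat=len(query_shape)):
        if math.prod(shape) != chunk_volume:
            continue
        cost = expected_chunks(shape, query_shape)
        if best is None or cost < best[1]:
            best = (shape, cost)
    return best

# Example: a 2-D array, a typical query of 200 x 10 cells, chunks of 4096 cells each.
print(best_chunk_shape((200, 10), 4096, [2 ** k for k in range(1, 12)]))
```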

  15. An Asynchronous Low Power and High Performance VLSI Architecture for Viterbi Decoder Implemented with Quasi Delay Insensitive Templates

    Directory of Open Access Journals (Sweden)

    T. Kalavathi Devi

    2015-01-01

    Full Text Available Convolutional codes are comprehensively used as Forward Error Correction (FEC) codes in digital communication systems. For decoding convolutional codes at the receiver end, the Viterbi decoder is often the preferred choice. This decoder meets the demand for high speed and low power. At present, the design of a competent system in Very Large Scale Integration (VLSI) technology requires these parameters to be finely defined. The proposed asynchronous method focuses on reducing the power consumption of the Viterbi decoder for various constraint lengths using asynchronous modules. The asynchronous designs are based on commonly used Quasi Delay Insensitive (QDI) templates, namely, Precharge Half Buffer (PCHB) and Weak Conditioned Half Buffer (WCHB). The functionality of the proposed asynchronous design is simulated and verified using Tanner Spice (TSPICE) in 0.25 µm, 65 nm, and 180 nm technologies of Taiwan Semiconductor Manufacturing Company (TSMC). The simulation results illustrate that the asynchronous design techniques achieve a 25.21% power reduction compared to the synchronous design and operate at a speed of 475 MHz.
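
    For context, the algorithm that such hardware implements can be summarized by a small behavioral model. The sketch below is a plain hard-decision Viterbi decoder for the common rate-1/2, constraint-length-3 (7,5) convolutional code; it illustrates the add-compare-select recursion only and says nothing about the asynchronous QDI circuit techniques evaluated in the paper. The generator polynomials, helper names, and the untermininated (no tail bits) trellis are assumptions chosen for illustration.

```python
G = (0b111, 0b101)           # generator polynomials (7, 5) octal, constraint length K = 3

def encode(bits):
    """Rate-1/2 convolutional encoder over the (7, 5) code."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.extend(bin(state & g).count("1") % 2 for g in G)
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding over the 4-state trellis of the (7, 5) code."""
    n_states, INF = 4, float("inf")
    metric = [0.0] + [INF] * (n_states - 1)        # encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits):
        r = received[2 * t:2 * t + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            for b in (0, 1):
                full = ((s << 1) | b) & 0b111       # shift-register contents after input b
                nxt = full & 0b011                  # next state = two most recent bits
                expected = [bin(full & g).count("1") % 2 for g in G]
                branch = sum(x != y for x, y in zip(expected, r))
                cand = metric[s] + branch
                if cand < new_metric[nxt]:          # add-compare-select
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                                       # inject a single channel bit error
print(viterbi_decode(coded, len(msg)) == msg)       # expected: True
```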

  16. Active terahertz imaging with Ne indicator lamp detector arrays

    Science.gov (United States)

    Kopeika, N. S.; Abramovich, A.; Yadid-Pecht, O.; Yitzhaky, Y.

    2009-08-01

    The advantages of terahertz (THz) imaging are well known. THz waves penetrate most non-conducting media well and there are no known biological hazards. This makes such imaging systems important for homeland security, as they can be used to image concealed objects and often to see into rooms or buildings from the outside. Biomedical applications are also arising. Unfortunately, THz imaging is quite expensive, especially for real time systems, largely because of the price of the detector. Bolometers and pyroelectric detectors can each easily cost at least hundreds of dollars if not more, thus making focal plane arrays of them quite expensive. We have found that common miniature commercial neon indicator lamps costing typically about 30 cents each exhibit high sensitivity to THz radiation [1-3], with microsecond order rise times, thus making them excellent candidates for such focal plane arrays. NEP is on the order of 10^-10 W/Hz^1/2. Significant improvement of detection performance is expected when heterodyne detection is used. Efforts are being made to develop focal plane array imagers using such devices at 300 GHz. Indeed, preliminary images using 4x4 arrays have already been obtained. An 8x8 VLSI board has been developed and is presently being tested. Since no similar imaging systems have been developed previously, there are many new problems to be solved with such a novel and unconventional imaging system. These devices act as square law detectors, with the detected signal proportional to THz power. This allows them to act as mixers in heterodyne detection, thus allowing NEP to be reduced further by almost two orders of magnitude. Plans are to expand the arrays to larger sizes, and to employ super resolution techniques to improve image quality beyond that ordinarily obtainable at THz frequencies.

  17. Condensin suppresses recombination and regulates double-strand break processing at the repetitive ribosomal DNA array to ensure proper chromosome segregation during meiosis in budding yeast

    Science.gov (United States)

    Li, Ping; Jin, Hui; Yu, Hong-Guo

    2014-01-01

    During meiosis, homologues are linked by crossover, which is required for bipolar chromosome orientation before chromosome segregation at anaphase I. The repetitive ribosomal DNA (rDNA) array, however, undergoes little or no meiotic recombination. Hyperrecombination can cause chromosome missegregation and rDNA copy number instability. We report here that condensin, a conserved protein complex required for chromosome organization, regulates double-strand break (DSB) formation and repair at the rDNA gene cluster during meiosis in budding yeast. Condensin is highly enriched at the rDNA region during prophase I, released at the prophase I/metaphase I transition, and reassociates with rDNA before anaphase I onset. We show that condensin plays a dual role in maintaining rDNA stability: it suppresses the formation of Spo11-mediated rDNA breaks, and it promotes DSB processing to ensure proper chromosome segregation. Condensin is unnecessary for the export of rDNA breaks outside the nucleolus but required for timely repair of meiotic DSBs. Our work reveals that condensin coordinates meiotic recombination with chromosome segregation at the repetitive rDNA sequence, thereby maintaining genome integrity. PMID:25103240

  18. Evaluation of Drying Process on the Composition of Black Pepper Ethanolic Extract by High Performance Liquid Chromatography With Diode Array Detector

    Science.gov (United States)

    Namjoyan, Foroogh; Hejazi, Hoda; Ramezani, Zahra

    2012-01-01

    Background Black pepper (Piper nigrum) is one of the well-known spices extensively used worldwide, especially in India and Southeast Asia. The presence of alkaloids in the pepper, namely piperine and its three stereoisomers (isopiperine, chavicine, and isochavicine), is well documented. Objectives The current study evaluated the effect of lyophilization and oven drying on the stability and decomposition of the constituents of black pepper ethanolic extract. Materials and Methods In the current study, the ethanolic extract of black pepper obtained by the maceration method was dried using two methods. The effect of freeze and oven drying on the chemical composition of the extract, especially piperine and its three isomers, was evaluated by HPLC analysis of the ethanolic extract before and after the drying processes using a diode array detector. The UV-Vis spectra of the peaks at the piperine retention time before and after each drying method indicated maximum absorbance at 341.2 nm, corresponding to standard piperine. Results The results indicated a decrease in intensity of the chromatogram peaks at approximately all retention times after freeze drying, indicating a few percent loss of piperine and its isomers upon lyophilization. Two impurity peaks were completely removed from the extract. Conclusions In oven dried samples, two of the piperine stereoisomers were completely removed from the extract and the intensity of the piperine peak increased. PMID:24624176

  19. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one to work with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short period P waves recorded by a UK-type array of 20 seismometers and to edit these onto a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, the signals with magnitude above m(b) 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)

  20. The Owens Valley Millimeter Array

    International Nuclear Information System (INIS)

    Padin, S.; Scott, S.L.; Woody, D.P.; Scoville, N.Z.; Seling, T.V.

    1991-01-01

    The telescopes and signal processing systems of the Owens Valley Millimeter Array are considered, and improvements in the sensitivity and stability of the instrument are characterized. The instrument can be applied to map sources in the 85 to 115 GHz and 218 to 265 GHz bands with a resolution of about 1 arcsec in the higher frequency band. The operation of the array is fully automated. The current scientific programs for the array encompass high-resolution imaging of protoplanetary/protostellar disk structures, observations of molecular cloud complexes associated with spiral structure in nearby galaxies, and observations of molecular structures in the nuclei of spiral and luminous IRAS galaxies. 9 refs

  1. Processes for design, construction and utilisation of arrays of light-emitting diodes and light-emitting diode-coupled optical fibres for multi-site brain light delivery.

    Science.gov (United States)

    Bernstein, Jacob G; Allen, Brian D; Guerra, Alexander A; Boyden, Edward S

    2015-05-01

    Optogenetics enables light to be used to control the activity of genetically targeted cells in the living brain. Optical fibers can be used to deliver light to deep targets, and LEDs can be spatially arranged to enable patterned light delivery. In combination, arrays of LED-coupled optical fibers can enable patterned light delivery to deep targets in the brain. Here we describe the process flow for making LED arrays and LED-coupled optical fiber arrays, explaining key optical, electrical, thermal, and mechanical design principles to enable the manufacturing, assembly, and testing of such multi-site targetable optical devices. We also explore accessory strategies such as surgical automation approaches as well as innovations to enable low-noise concurrent electrophysiology.

  2. Building the Support for Radar Processing across Memory Hierarchies: On the Development of an Array Class with Shapes using Expression Templates in C++̂

    National Research Council Canada - National Science Library

    Mullin, Lenore

    2004-01-01

    ...), could be used to develop software for radar and other DSP applications. This software needs to be tuned to use the levels of memory hierarchies efficiently without the materialization of array valued temporaries 3...

  3. CCD and IR array controllers

    Science.gov (United States)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation at low background/low noise regimes as well as high background/high speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one Megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or a four channel video processor that has built-in image memory and a coadder to 32-bit precision for operating high background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at a speed of 12.5 Megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board for the Sun Solaris, Linux and Windows 2000 operating systems.

  4. Coupling in reflector arrays

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1968-01-01

    In order to reduce the space occupied by a reflector array, it is desirable to arrange the array antennas as close to each other as possible; however, in this case coupling between the array antennas will reduce the reflecting properties of the reflector array. The purpose of the present...

  5. Acoustic array systems theory, implementation, and application

    CERN Document Server

    Bai, Mingsian R; Benesty, Jacob

    2013-01-01

    Presents a unified framework of far-field and near-field array techniques for noise source identification and sound field visualization, from theory to application. Acoustic Array Systems: Theory, Implementation, and Application provides an overview of microphone array technology with applications in noise source identification and sound field visualization. In the comprehensive treatment of microphone arrays, the topics covered include an introduction to the theory, far-field and near-field array signal processing algorithms, practical implementations, and common applic

  6. Electrochemically Obtained TiO2/CuxOy Nanotube Arrays Presenting a Photocatalytic Response in Processes of Pollutants Degradation and Bacteria Inactivation in Aqueous Phase

    Directory of Open Access Journals (Sweden)

    Magda Kozak

    2018-06-01

    Full Text Available TiO2/CuxOy nanotube (NT) arrays were synthesized using the anodization method in the presence of ethylene glycol and different applied parameters. The presence, morphology, and chemical character of the obtained structures were characterized using a variety of methods—SEM (scanning electron microscopy), XPS (X-ray photoelectron spectroscopy), XRD (X-ray crystallography), PL (photoluminescence), and EDX (energy-dispersive X-ray spectroscopy). A mixed Ti-Cu oxide p-n heterojunction was created, with a proven response to the visible light range and a stable form in contact with Ti. The TiO2/CuxOy NTs contained both Cu2O (mainly) and CuO components, which influenced the dimensions of the NTs (1.1–1.3 µm). Additionally, changes in voltage were shown to affect the NTs' length, which reached a value of 3.5 µm for Ti90Cu10_50V. Degradation of phenol in the aqueous phase reached 16% for Ti85Cu15_30V after 1 h of visible light irradiation (λ > 420 nm). Scavenger tests for the phenol degradation process in the presence of the NT samples indicated that superoxide radicals are responsible for the degradation of organic compounds in the visible light region. Inactivation of the bacterial strains Escherichia coli (E. coli), Bacillus subtilis (B. subtilis), and Clostridium sp. in the presence of the obtained TiO2/CuxOy NT photocatalysts and visible light has been studied, showing a great improvement in inactivation efficiency, with 97% inactivation for E. coli and 98% for Clostridium sp. in 60 min. TEM (transmission electron microscopy) images confirmed the damage to the bacterial cells.

  7. 10 K gate I(2)L and 1 K component analog compatible bipolar VLSI technology - HIT-2

    Science.gov (United States)

    Washio, K.; Watanabe, T.; Okabe, T.; Horie, N.

    1985-02-01

    An advanced analog/digital bipolar VLSI technology that combines on the same chip 2-ns 10 K I(2)L gates with 1 K analog devices is proposed. The new technology, called high-density integration technology-2, is based on a new structural concept that consists of three major techniques: shallow grooved-isolation, I(2)L active layer etching, and I(2)L current gain increase. I(2)L circuits with an 80-MHz maximum toggle frequency have been developed compatibly with n-p-n transistors having a BV(CE0) of more than 10 V and an f(T) of 5 GHz, and lateral p-n-p transistors having an f(T) of 150 MHz.

  8. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    Abstract: ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods as well. Whereas ESPRIT only requires that the sensor array possess a single invariance, best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre including total least squares (TLS) ESPRIT applied to uniform linear arrays are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omni-directional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramer-Rao bounds, are presented.
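
    The core of least-squares ESPRIT for a uniform linear array fits in a few lines: the two "identical but displaced" subarrays are simply the first M-1 and last M-1 sensors, and the eigenvalues of the rotational operator linking their signal subspaces encode the arrival angles. The sketch below is a generic, textbook-style illustration under stated assumptions (half-wavelength spacing, known source count); it is not the TLS-ESPRIT variant or any specific implementation discussed by the authors.

```python
import numpy as np

def esprit_doa(X, n_sources, d_over_lambda=0.5):
    """Least-squares ESPRIT for a uniform linear array.
    X: (n_sensors, n_snapshots) complex data matrix.
    The two displaced subarrays are sensors 0..M-2 and 1..M-1."""
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    Es = eigvecs[:, -n_sources:]                     # signal subspace (largest eigenvalues)
    Es1, Es2 = Es[:-1, :], Es[1:, :]                 # the two overlapping subarrays
    Psi = np.linalg.pinv(Es1) @ Es2                  # rotational operator (LS solution)
    phases = np.angle(np.linalg.eigvals(Psi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))

# Synthetic check: two narrowband sources at -20 and +35 degrees on an 8-element ULA.
rng = np.random.default_rng(0)
m, snaps, angles = 8, 400, np.radians([-20.0, 35.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(angles)))   # steering matrix
S = (rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))) / np.sqrt(2)
N = 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
print(np.sort(esprit_doa(A @ S + N, 2)))             # expect values near [-20, 35]
```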

  9. Systematic genetic array analysis links the Saccharomyces cerevisiae SAGA/SLIK and NuA4 component Tra1 to multiple cellular processes

    Directory of Open Access Journals (Sweden)

    Andrews Brenda

    2008-07-01

    Full Text Available Abstract Background Tra1 is an essential 437-kDa component of the Saccharomyces cerevisiae SAGA/SLIK and NuA4 histone acetyltransferase complexes. It is a member of a group of key signaling molecules that share a carboxyl-terminal domain related to phosphatidylinositol-3-kinase but, unlike many family members, it lacks kinase activity. To identify genetic interactions for TRA1 and provide insight into its function we have performed a systematic genetic array analysis (SGA) on tra1SRR3413, an allele that is defective in transcriptional regulation. Results The SGA analysis revealed 114 synthetic slow growth/lethal (SSL) interactions for tra1SRR3413. The interacting genes are involved in a range of cellular processes including gene expression, mitochondrial function, and membrane sorting/protein trafficking. In addition many of the genes have roles in the cellular response to stress. A hierarchal cluster analysis revealed that the pattern of SSL interactions for tra1SRR3413 most closely resembles deletions of a group of regulatory GTPases required for membrane sorting/protein trafficking. Consistent with a role for Tra1 in cellular stress, the tra1SRR3413 strain was sensitive to rapamycin. In addition, calcofluor white sensitivity of the strain was enhanced by the protein kinase inhibitor staurosporine, a phenotype shared with the Ada components of the SAGA/SLIK complex. Through analysis of a GFP-Tra1 fusion we show that Tra1 is principally localized to the nucleus. Conclusion We have demonstrated a genetic association of Tra1 with nuclear, mitochondrial and membrane processes. The identity of the SSL genes also connects Tra1 with cellular stress, a result confirmed by the sensitivity of the tra1SRR3413 strain to a variety of stress conditions. Based upon the nuclear localization of GFP-Tra1 and the finding that deletion of the Ada components of the SAGA complex results in similar phenotypes as tra1SRR3413, we suggest that the effects of tra1SRR3413

  10. A Fusion Approach to Feature Extraction by Wavelet Decomposition and Principal Component Analysis in Transient Signal Processing of SAW Odor Sensor Array

    Directory of Open Access Journals (Sweden)

    Prashant SINGH

    2011-03-01

    Full Text Available This paper presents a theoretical analysis of a new approach for the development of a surface acoustic wave (SAW) sensor array based odor recognition system. The construction of the sensor array employs a single polymer interface for selective sorption of odorant chemicals in the vapor phase. The individual sensors are however coated with different thicknesses. The idea behind the sensor coating thickness variation is to terminate the solvation and diffusion kinetics of vapors into the polymer at different stages of equilibration on different sensors. This is expected to generate diversity in the information content of the sensor transients. The analysis is based on wavelet decomposition of the transient signals. Single sensor transients have been used earlier for generating odor identity signatures based on wavelet approximation coefficients. In the present work, however, we exploit the variability in diffusion kinetics due to polymer thicknesses for making odor signatures. This is done by fusing the wavelet coefficients from the different sensors in the array, and then applying principal component analysis. We find that the present approach substantially enhances the vapor class separability in feature space. The validation is done by generating synthetic sensor array data based on well-established SAW sensor theory.
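
    The fusion idea described above can be illustrated with a toy computation: decompose each sensor's transient with a wavelet, concatenate the approximation coefficients across sensors of different coating thickness, and project the fused vectors onto principal components. The sketch below uses a plain Haar decomposition and SVD-based PCA as stand-ins; the wavelet family, decomposition depth, synthetic transients, and helper names are assumptions, not the authors' actual processing chain.

```python
import numpy as np

def haar_approx(signal, levels=4):
    """Successive Haar low-pass (approximation) coefficients of a 1-D transient."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if len(a) % 2:                       # drop a trailing sample if length is odd
            a = a[:-1]
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
    return a

def fused_features(sensor_transients, levels=4):
    """Fuse one transient per sensor (different coating thicknesses) into one vector."""
    return np.concatenate([haar_approx(t, levels) for t in sensor_transients])

def pca(features, n_components=2):
    """Project feature vectors (rows) onto the leading principal components."""
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

# Synthetic example: 3 sensors x 256-sample transients for each of 10 exposures.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
samples = []
for k in range(10):
    tau = 0.1 + 0.05 * (k % 2)               # two "vapour classes" with different kinetics
    transients = [1 - np.exp(-t / (tau * (i + 1))) + 0.01 * rng.standard_normal(t.size)
                  for i in range(3)]          # thicker coatings respond more slowly
    samples.append(fused_features(transients))
scores = pca(np.array(samples))
print(np.round(scores, 3))                    # the two classes should separate along the first column
```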

  11. Solution-processed all-oxide bulk heterojunction solar cells based on CuO nanorod array and TiO2 nanocrystals

    Science.gov (United States)

    Wu, Fan; Qiao, Qiquan; Bahrami, Behzad; Chen, Ke; Pathak, Rajesh; Tong, Yanhua; Li, Xiaoyi; Zhang, Tiansheng; Jian, Ronghua

    2018-05-01

    We present a method to synthesize CuO nanorod array/TiO2 nanocrystals bulk heterojunction (BHJ) on fluorine-tin-oxide (FTO) glass, in which single-crystalline p-type semiconductor of the CuO nanorod array is grown on the FTO glass by hydrothermal reaction and the n-type semiconductor of the TiO2 precursor is filled into the CuO nanorods to form well-organized nano-interpenetrating BHJ after air annealing. The interface charge transfer in CuO nanorod array/TiO2 heterojunction is studied by Kelvin probe force microscopy (KPFM). KPFM results demonstrate that the CuO nanorod array/TiO2 heterojunction can realize the transfer of photo-generated electrons from the CuO nanorod array to TiO2. In this work, a solar cell with the structure FTO/CuO nanoarray/TiO2/Al is successfully fabricated, which exhibits an open-circuit voltage (V oc) of 0.20 V and short-circuit current density (J sc) of 0.026 mA cm‑2 under AM 1.5 illumination. KPFM studies indicate that the very low performance is caused by an undesirable interface charge transfer. The interfacial surface potential (SP) shows that the electron concentration in the CuO nanorod array changes considerably after illumination due to increased photo-generated electrons, but the change in the electron concentration in TiO2 is much less than in CuO, which indicates that the injection efficiency of the photo-generated electrons from CuO to TiO2 is not satisfactory, resulting in an undesirable J sc in the solar cell. The interface photovoltage from the KPFM measurement shows that the low V oc results from the small interfacial SP difference between CuO and TiO2 because the low injected electron concentration cannot raise the Fermi level significantly in TiO2. This conclusion agrees with the measured work function results under illumination. Hence, improvement of the interfacial electron injection is primary for the CuO nanorod array/TiO2 heterojunction solar cells.

  12. Impact of the limitations of state-of-the-art micro-fabrication processes on the performance of pillar array columns for liquid chromatography.

    Science.gov (United States)

    Op de Beeck, Jeff; De Malsche, Wim; Tezcan, Deniz S; De Moor, Piet; Desmet, Gert

    2012-05-25

    We report on the practical limitations of the current state-of-the-art in micro-fabrication technology to produce the small pillar sizes that are needed to obtain high efficiency pillar array columns. For this purpose, nine channels with a different pillar diameter, ranging from 5 to 0.5 μm, were fabricated using state-of-the-art deep-UV lithography and deep reactive ion etching (DRIE) technology. The obtained results strongly deviated from the theoretically expected trend, wherein the minimal plate height (H(min)) would reduce linearly with the pillar diameter. The minimal plate height decreases from 1.7 to 1.2 μm when going from 4.80 to 3.81 μm diameter pillars, but as the dimensions are further reduced, the minimal plate heights rise again to values around 2 μm. The smallest pillar diameter even produced the worst minimal plate height (4 μm). An in-depth scanning electron microscopy (SEM) inspection of the different channels clearly reveals that these findings can be attributed to the micro-fabrication limitations that are inevitably encountered when exploring the limits of deep-UV lithography and DRIE etching processes. When the target dimensions of the design approach the etching resolution limits, the band broadening increases in a strongly non-linear way with the decreased pillar dimensions. This highly non-linear relationship can be understood from first principles: when the machining error is of the order of 100-200 nm and when the target design size for the inter-pillar distance is of the order of 250 nm, this inevitably leads to pores that will range in size between 50 and 450 nm.

  13. Flexible eddy current coil arrays

    International Nuclear Information System (INIS)

    Krampfner, Y.; Johnson, D.P.

    1987-01-01

    A novel approach was devised to overcome certain limitations of conventional eddy current testing. The typical single-element hand-wound probe was replaced with a two dimensional array of spirally wound probe elements deposited on a thin, flexible polyimide substrate. This provides full and reliable coverage of the test area and eliminates the need for scanning. The flexible substrate construction of the array allows the probes to conform to irregular part geometries, such as turbine blades and tubing, thereby eliminating the need for specialized probes for each geometry. Additionally, the batch manufacturing process of the array can yield highly uniform and reproducible coil geometries. The array is driven by a portable computer-based eddy current instrument, smartEDDY(TM), capable of two-frequency operation, and offers a great deal of versatility and flexibility due to its software-based architecture. The array is coupled to the instrument via an 80-switch multiplexer that can be configured to address up to 1600 probes. The individual array elements may be addressed in any desired sequence, as defined by the software

  14. Big Data Challenges for Large Radio Arrays

    Science.gov (United States)

    Jones, Dayton L.; Wagstaff, Kiri; Thompson, David; D'Addario, Larry; Navarro, Robert; Mattmann, Chris; Majid, Walid; Lazio, Joseph; Preston, Robert; Rebbapragada, Umaa

    2012-01-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields.

  15. A silicon pixel detector with routing for external VLSI read-out

    International Nuclear Information System (INIS)

    Thomas, S.L.; Seller, P.

    1988-07-01

    A silicon pixel detector with an array of 32 by 16 hexagonal pixels has been designed and is being built on high resistivity silicon. The detector elements are reverse biased diodes consisting of p-implants in an n-type substrate and are fully depleted from the front to the back of the wafer. They are intended to measure high energy ionising particles traversing the detector. The detailed design of the pixels, their layout and method of read-out are discussed. A number of test structures have been incorporated onto the wafer to enable measurements to be made on individual pixels together with a variety of active devices. The results will give a better understanding of the operation of the pixel array, and will allow testing of computer simulations of more elaborate structures for the future. (author)

  16. Diagnosable structured logic array

    Science.gov (United States)

    Whitaker, Sterling (Inventor); Miles, Lowell (Inventor); Gambles, Jody (Inventor); Maki, Gary K. (Inventor)

    2009-01-01

    A diagnosable structured logic array and associated process is provided. A base cell structure is provided comprising a logic unit comprising a plurality of input nodes, a plurality of selection nodes, and an output node, a plurality of switches coupled to the selection nodes, where the switches comprise a plurality of input lines, a selection line and an output line, a memory cell coupled to the output node, and a test address bus and a program control bus coupled to the plurality of input lines and the selection line of the plurality of switches. A state on each of the plurality of input nodes is verifiably loaded and read from the memory cell. A trusted memory block is provided. The associated process is provided for testing and verifying a plurality of truth table inputs of the logic unit.

  17. Low Frequency Space Array

    International Nuclear Information System (INIS)

    Dennison, B.; Weiler, K.W.; Johnston, K.J.

    1987-01-01

    The Low Frequency Space Array (LFSA) is a conceptual mission to survey the entire sky and to image individual sources at frequencies between 1.5 and 26 MHz, a frequency range over which the earth's ionosphere transmits poorly or not at all. With high resolution, high sensitivity observations, a new window will be opened in the electromagnetic spectrum for astronomical investigation. Also, extending observations down to such low frequencies will bring astronomy to the fundamental limit below which the galaxy becomes optically thick due to free-free absorption. A number of major scientific goals can be pursued with such a mission, including mapping galactic emission and absorption, studies of individual source spectra in a frequency range where a number of important processes may play a role, high resolution imaging of extended sources, localization of the impulsive emission from Jupiter, and a search for coherent emission processes. 19 references

  18. A polychromator-type near-infrared spectrometer with a high-sensitivity and high-resolution photodiode array detector for pharmaceutical process monitoring on the millisecond time scale.

    Science.gov (United States)

    Murayama, Kodai; Genkawa, Takuma; Ishikawa, Daitaro; Komiyama, Makoto; Ozaki, Yukihiro

    2013-02-01

    In the fine chemicals industry, particularly in the pharmaceutical industry, advanced sensing technologies have recently begun being incorporated into the process line in order to improve safety and quality in accordance with process analytical technology. For estimating the quality of powders without preparation during drug formulation, near-infrared (NIR) spectroscopy has been considered the most promising sensing approach. In this study, we have developed a compact polychromator-type NIR spectrometer equipped with a photodiode (PD) array detector. This detector consists of 640 InGaAs-PD elements with a 20-μm pitch. Some high-specification spectrometers, which use an InGaAs-PD array with 512 elements, have a wavelength resolution of about 1.56 nm when covering the 900-1700 nm range. In contrast, the newly developed detector, with one of the world's highest PD element densities, enables a wavelength resolution below 1.25 nm. Moreover, thanks to its combination with a highly integrated charge amplifier array circuit, the measurement speed of the detector is two orders of magnitude higher than that of existing PD array detectors. The developed spectrometer is small (120 mm × 220 mm × 200 mm) and light (6 kg), and it contains various key devices including the high-density and high-sensitivity PD array detector, NIR technology, and spectroscopy technology for a spectroscopic analyzer that has the required detection mechanism and high sensitivity for powder measurement, as well as a high-speed measuring function for blenders. Moreover, we have evaluated the characteristics of the developed NIR spectrometer, and the measurement of powder samples confirmed that it has high functionality.

  20. Fiber Laser Array

    National Research Council Canada - National Science Library

    Simpson, Thomas

    2002-01-01

    ...., field-dependent, loss within the coupled laser array. During this program, Jaycor focused on the construction and use of an experimental apparatus that can be used to investigate the coherent combination of an array of fiber lasers...

  1. Micromolding for ceramic microneedle arrays

    NARCIS (Netherlands)

    van Nieuwkasteele-Bystrova, Svetlana Nikolajevna; Lüttge, Regina

    2011-01-01

    The fabrication process of ceramic microneedle arrays (MNAs) is presented. This includes the manufacturing of an SU-8/Si-master, its double replication resulting in a PDMS mold for production by micromolding and ceramic sintering. The robustness of the replicated structures was tested by means of

  2. In situ synthesis of protein arrays.

    Science.gov (United States)

    He, Mingyue; Stoevesandt, Oda; Taussig, Michael J

    2008-02-01

    In situ or on-chip protein array methods use cell free expression systems to produce proteins directly onto an immobilising surface from co-distributed or pre-arrayed DNA or RNA, enabling protein arrays to be created on demand. These methods address three issues in protein array technology: (i) efficient protein expression and availability, (ii) functional protein immobilisation and purification in a single step and (iii) protein on-chip stability over time. By simultaneously expressing and immobilising many proteins in parallel on the chip surface, the laborious and often costly processes of DNA cloning, expression and separate protein purification are avoided. Recently employed methods reviewed are PISA (protein in situ array) and NAPPA (nucleic acid programmable protein array) from DNA and puromycin-mediated immobilisation from mRNA.

  3. Dumand-array data-acquisition system

    International Nuclear Information System (INIS)

    Brenner, A.E.; Theriot, D.; Dau, W.D.; Geelhood, B.D.; Harris, F.; Learned, J.G.; Stenger, V.; March, R.; Roos, C.; Shumard, E.

    1982-04-01

    An overall data acquisition approach for DUMAND is described. The scheme assumes one array-to-shore optical fiber transmission line for each string of the array. The basic event sampling period is approx. 13 μsec. All potentially interesting data are transmitted to shore, where the major processing is performed

  4. Micromachined VLSI 3D electronics. Final report for period September 1, 2000 - March 31, 2001

    Energy Technology Data Exchange (ETDEWEB)

    Beetz, C.P.; Steinbeck, J.; Hsueh, K.L.

    2001-03-31

    The phase I program investigated the construction of electronic interconnections through the thickness of a silicon wafer. The novel aspects of the technology are that the length-to-width ratio of the channels is as high as 100:1, so that the minimum amount of real estate is used for contact area. Constructing a large array of these through-wafer interconnections will enable two circuit die to be coupled on opposite sides of a silicon circuit board providing high speed connection between the two.

  5. Low power digital signal processing

    DEFF Research Database (Denmark)

    Paker, Ozgun

    2003-01-01

    hardwired ASICs and more than 6-21 times lower than current state-of-the-art low-power DSP processors. An orthogonal but practical contribution of this thesis is the test bench implementation. A PCI-based FPGA board has been used to equip a standard desktop PC with tester facilities. The test bench proved to be a viable alternative to conventional expensive test equipment. Finally, the work presented in this thesis has been published at several IEEE workshops and conferences, and in the Journal of VLSI Signal Processing.

  6. Hardware Genetic Algorithm Optimization by Critical Path Analysis using a Custom VLSI Architecture

    Directory of Open Access Journals (Sweden)

    Farouk Smith

    2015-07-01

    Full Text Available This paper proposes a Virtual Field-Programmable Gate Array (V-FPGA) architecture that allows direct access to its configuration bits to facilitate hardware evolution, thereby allowing any combinational or sequential digital circuit to be realized. By using the V-FPGA, this paper investigates two possible ways of making evolutionary hardware systems more scalable: by optimizing the system's genetic algorithm (GA); and by decomposing the solution circuit into smaller, evolvable sub-circuits. GA optimization is done by: omitting a canonical GA's crossover operator (i.e. by using a 1+λ algorithm); applying evolution constraints; and optimizing the fitness function. A noteworthy contribution of this research is the in-depth analysis of the phenotypes' critical paths (CPs). Through analyzing the CPs, it has been shown that a great amount of insight can be gained into a phenotype's fitness. We found that as the number of columns in the Cartesian Genetic Programming array increases, the likelihood of an external output being placed in a given column decreases. Furthermore, the number of used LEs per column also decreases substantially with each added column. Finally, we demonstrated the evolution of a state-decomposed control circuit. It was shown that the evolution of each state's sub-circuit was possible, and the results suggest that modular evolution can be a successful tool when dealing with scalability.
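
    The 1+λ strategy mentioned above (a canonical GA with the crossover operator omitted) is simple to state in code. The sketch below is an abstract illustration over a generic bitstring genotype with a toy fitness function; the genotype encoding, mutation rate, and fitness evaluation of a real V-FPGA experiment would of course differ, and all names here are illustrative assumptions.

```python
import random

def one_plus_lambda(fitness, n_bits, lam=4, mutation_rate=0.05, generations=200, seed=1):
    """(1 + lambda) evolutionary strategy: no crossover, mutation only.
    The parent is replaced whenever an offspring is at least as fit."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    best = fitness(parent)
    for _ in range(generations):
        for _ in range(lam):
            child = [b ^ (rng.random() < mutation_rate) for b in parent]
            f = fitness(child)
            if f >= best:                     # ties accepted to allow neutral drift
                parent, best = child, f
    return parent, best

# Toy fitness: how many outputs of a 2-input XOR truth table the genotype encodes correctly.
# A real run would instead download the genotype to the V-FPGA and score the evolved circuit.
TARGET = [0, 1, 1, 0]
def xor_truth_table_fitness(genome):
    return sum(int(genome[i] == TARGET[i]) for i in range(4))

print(one_plus_lambda(xor_truth_table_fitness, n_bits=4))
```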

  7. Area array interconnection handbook

    CERN Document Server

    Totta, Paul A

    2012-01-01

    Microelectronic packaging has been recognized as an important "enabler" for the solid­ state revolution in electronics which we have witnessed in the last third of the twentieth century. Packaging has provided the necessary external wiring and interconnection capability for transistors and integrated circuits while they have gone through their own spectacular revolution from discrete device to gigascale integration. At IBM we are proud to have created the initial, simple concept of flip chip with solder bump connections at a time when a better way was needed to boost the reliability and improve the manufacturability of semiconductors. The basic design which was chosen for SLT (Solid Logic Technology) in the 1960s was easily extended to integrated circuits in the '70s and VLSI in the '80s and '90s. Three I/O bumps have grown to 3000 with even more anticipated for the future. The package families have evolved from thick-film (SLT) to thin-film (metallized ceramic) to co-fired multi-layer ceramic. A later famil...

  8. Configuration Considerations for Low Frequency Arrays

    Science.gov (United States)

    Lonsdale, C. J.

    2005-12-01

    The advance of digital signal processing capabilities has spurred a new effort to exploit the lowest radio frequencies observable from the ground, from ˜10 MHz to a few hundred MHz. Multiple scientifically and technically complementary instruments are planned, including the Mileura Widefield Array (MWA) in the 80-300 MHz range, and the Long Wavelength Array (LWA) in the 20-80 MHz range. The latter instrument will target relatively high angular resolution, and baselines up to a few hundred km. An important practical question for the design of such an array is how to distribute the collecting area on the ground. The answer to this question profoundly affects both cost and performance. In this contribution, the factors which determine the anticipated performance of any such array are examined, paying particular attention to the viability and accuracy of array calibration. It is argued that due to the severity of ionospheric effects in particular, it will be difficult or impossible to achieve routine, high dynamic range imaging with a geographically large low frequency array, unless a large number of physically separate array stations is built. This conclusion is general, is based on the need for adequate sampling of ionospheric irregularities, and is independent of the calibration algorithms and techniques that might be employed. It is further argued that array configuration figures of merit that are traditionally used for higher frequency arrays are inappropriate, and a different set of criteria are proposed.

  9. Hybrid Arrays for Chemical Sensing

    Science.gov (United States)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environment monitoring for chemical detection as well as other forms of situational awareness have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is due to advancements in sensor technology and computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Linas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  10. Solar array flight dynamic experiment

    Science.gov (United States)

    Schock, Richard W.

    1987-01-01

    The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures' dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on space shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data was successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristic. The flight experiment proved the viability of on-orbit test definition of large space structures dynamic characteristics. Future large space structures controllability should be greatly enhanced by this capability.

  11. Analysis of the influence of processing of stir-baking with glycyrrhizae on the main components of Euodiae Fructus by high-performance liquid chromatography with diode array detector.

    Science.gov (United States)

    Hu, Jue; Wu, Xin; Cao, Gang; Chen, Xiaocheng

    2014-01-01

    Euodiae Fructus is one of the most commonly used Chinese herbs in China. Specifically, the crude Euodiae Fructus and its processed products, Gancao Zhi Pin, are used clinically for the treatment of different diseases. In order to improve the quality control standard and evaluate the crude and processed Euodiae Fructus, in this study a simple and sensitive high-performance liquid chromatography-diode array detector method was developed for the simultaneous determination of five major compounds in Euodiae Fructus. The results indicated that the five components showed a significant linear relationship (r(2) ≥ 0.9997) between peak area and injected concentration. The average recoveries of the five components were in the range from 97.38% to 102.56%. Overall intra- and inter-day variations were less than 1.36%. The developed method can be applied to the intrinsic quality control of crude and processed Euodiae Fructus.

  12. Carbon nanotube nanoelectrode arrays

    Science.gov (United States)

    Ren, Zhifeng; Lin, Yuehe; Yantasee, Wassana; Liu, Guodong; Lu, Fang; Tu, Yi

    2008-11-18

    The present invention relates to microelectrode arrays (MEAs), and more particularly to carbon nanotube nanoelectrode arrays (CNT-NEAs) for chemical and biological sensing, and methods of use. A nanoelectrode array includes a carbon nanotube material comprising an array of substantially linear carbon nanotubes each having a proximal end and a distal end, the proximal end of the carbon nanotubes are attached to a catalyst substrate material so as to form the array with a pre-determined site density, wherein the carbon nanotubes are aligned with respect to one another within the array; an electrically insulating layer on the surface of the carbon nanotube material, whereby the distal end of the carbon nanotubes extend beyond the electrically insulating layer; a second adhesive electrically insulating layer on the surface of the electrically insulating layer, whereby the distal end of the carbon nanotubes extend beyond the second adhesive electrically insulating layer; and a metal wire attached to the catalyst substrate material.

  13. Deformable wire array: fiber drawn tunable metamaterials

    DEFF Research Database (Denmark)

    Fleming, Simon; Stefani, Alessio; Tang, Xiaoli

    2017-01-01

    By fiber drawing we fabricate a wire array metamaterial, the structure of which can be actively modified. The plasma frequency can be tuned by 50% by compressing the metamaterial; it recovers when released, and the process can be repeated.

  14. Josephson junction arrays

    International Nuclear Information System (INIS)

    Bindslev Hansen, J.; Lindelof, P.E.

    1985-01-01

    In this review we intend to cover recent work involving arrays of Josephson junctions. The work on such arrays falls naturally into three main areas of interest: 1. Technical applications of Josephson junction arrays for high-frequency devices. 2. Experimental studies of 2-D model systems (Kosterlitz-Thouless phase transition, commensurate-incommensurate transition in frustrated (flux) lattices). 3. Investigations of phenomena associated with non-equilibrium superconductivity in and around Josephson junctions (with high current density). (orig./BUD)

  15. Phased-array radars

    Science.gov (United States)

    Brookner, E.

    1985-02-01

    The operating principles, technology, and applications of phased-array radars are reviewed and illustrated with diagrams and photographs. Consideration is given to the antenna elements, circuitry for time delays, phase shifters, pulse coding and compression, and hybrid radars combining phased arrays with lenses to alter the beam characteristics. The capabilities and typical hardware of phased arrays are shown using the US military systems COBRA DANE and PAVE PAWS as examples.

  16. Storage array reflection considerations

    International Nuclear Information System (INIS)

    Haire, M.J.; Jordan, W.C.; Taylor, R.G.

    1997-01-01

    The assumptions used for reflection conditions of single containers are fairly well established and consistently applied throughout the industry in nuclear criticality safety evaluations. Containers are usually considered to be either fully water reflected (i.e., surrounded by 6 to 12 in. of water) for safety calculations or reflected by 1 in. of water for nominal (structural material and air) conditions. Tables and figures are usually available for performing comparative evaluations of containers under various loading conditions. Reflection considerations used for evaluating the safety of storage arrays of fissile material are not as well established. When evaluating arrays, it has become more common for analysts to use calculations to demonstrate the safety of the array configuration. In performing these calculations, the analyst has considerable freedom concerning the assumptions made for modeling the reflection of the array. Considerations are given for the physical layout of the array with little or no discussion (or demonstration) of what conditions are bounded by the assumed reflection conditions. For example, an array may be generically evaluated by placing it in a corner of a room in which the opposing walls are far away. Typically, it is believed that complete flooding of the room is incredible, so the array is evaluated for various levels of water mist interspersed among array containers. This paper discusses some assumptions that are made regarding storage array reflection

  17. The EUROBALL array

    International Nuclear Information System (INIS)

    Rossi Alvarez, C.

    1998-01-01

    The quality of the multidetector array EUROBALL is described, with emphasis on the history and formal organization of the related European collaboration. The detector layout is presented together with the electronics and data acquisition capabilities. The status of the instrument, its performance, and the main features of some recently developed ancillary detectors are also described. The EUROBALL array has been operational at the Legnaro National Laboratory (Italy) since April 1997 and is expected to run up to November 1998. The array represents a significant improvement in detector efficiency and sensitivity with respect to the previous generation of multidetector arrays

  18. Rectenna array measurement results

    Science.gov (United States)

    Dickinson, R. M.

    1980-01-01

    The measured performance characteristics of a rectenna array are reviewed and compared to the performance of a single element. It is shown that the performance may be extrapolated from the individual element to the collection of elements. Techniques for current and voltage combining were demonstrated. The array performance as a function of various operating parameters is characterized, and techniques for overvoltage protection and automatic fault clearing in the array were demonstrated. A method for detecting failed elements also exists. Instrumentation for deriving performance effectiveness is described. Measured harmonic radiation patterns and fundamental frequency scattered patterns for a low level illumination rectenna array are presented.

  19. Arrayed waveguide Sagnac interferometer.

    Science.gov (United States)

    Capmany, José; Muñoz, Pascual; Sales, Salvador; Pastor, Daniel; Ortega, Beatriz; Martinez, Alfonso

    2003-02-01

    We present a novel device, an arrayed waveguide Sagnac interferometer, that combines the flexibility of arrayed waveguides and the wide application range of fiber or integrated optics Sagnac loops. We form the device by closing an array of wavelength-selective light paths provided by two arrayed waveguides with a single 2 x 2 coupler in a Sagnac configuration. The equations that describe the device's operation in general conditions are derived. A preliminary experimental demonstration is provided of a fiber prototype in passive operation that shows good agreement with the expected theoretical performance. Potential applications of the device in nonlinear operation are outlined and discussed.

  20. Analysis and evaluation in the production process and equipment area of the low-cost solar array project. [including modifying gaseous diffusion and using ion implantation]

    Science.gov (United States)

    Goldman, H.; Wolf, M.

    1979-01-01

    The manufacturing methods for photovoltaic solar energy utilization are assessed. Economic and technical data on the current front junction formation processes of gaseous diffusion and ion implantation are presented. Future proposals, including modifying gaseous diffusion and using ion implantation, to decrease the cost of junction formation are studied. Technology developments in current processes and an economic evaluation of the processes are included.

  1. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of number of processing elements and computing time. (author)
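
    The abstract does not reproduce the elimination scheme itself, so the following is a minimal reference sketch of one-step fraction-free (Bareiss) Gaussian elimination on an integer system: every intermediate entry stays an exact integer because each division by the previous pivot is exact, which is what makes the method attractive for fixed-word-length array processors. The function name and the use of Python's exact integer/rational arithmetic are illustrative assumptions, not taken from the paper.

```python
from fractions import Fraction

def bareiss_solve(A, b):
    """Solve A x = b by one-step fraction-free (Bareiss) elimination.

    A is a list of integer rows, b a list of integers.  All intermediate
    entries stay integral because each division by the previous pivot is
    exact.  Illustrative sketch only: no pivoting for zero diagonals.
    """
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    prev = 1                                           # previous pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n + 1):
                M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    x = [Fraction(0)] * n                              # back substitution
    for i in reversed(range(n)):
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = Fraction(s) / M[i][i]
    return x

# Example: 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(bareiss_solve([[2, 1], [1, 3]], [5, 10]))
```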

  2. Arraying proteins by cell-free synthesis.

    Science.gov (United States)

    He, Mingyue; Wang, Ming-Wei

    2007-10-01

    Recent advances in life science have led to great motivation for the development of protein arrays to study functions of genome-encoded proteins. While traditional cell-based methods have been commonly used for generating protein arrays, they are usually a time-consuming process with a number of technical challenges. Cell-free protein synthesis offers an attractive system for making protein arrays, not only does it rapidly converts the genetic information into functional proteins without the need for DNA cloning, but also presents a flexible environment amenable to production of folded proteins or proteins with defined modifications. Recent advancements have made it possible to rapidly generate protein arrays from PCR DNA templates through parallel on-chip protein synthesis. This article reviews current cell-free protein array technologies and their proteomic applications.

  3. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems.

    Science.gov (United States)

    Indiveri, Giacomo

    2008-09-03

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals, and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits ranging from single-chip vision sensors for selecting and tracking the position of salient features, to multi-chip systems implementing saliency-map-based models of selective attention.
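
    As a point of reference, the winner-take-all primitive mentioned above can be mocked up in a few lines of software: each unit receives its input, excites itself, and feels a global inhibitory signal proportional to the total activity, until only the strongest input remains active. This is a behavioral sketch with assumed gains and update rule, not the neuromorphic circuit described in the paper.

```python
import numpy as np

def hard_wta(inputs, inhibition=0.5, steps=50):
    """Toy winner-take-all dynamics: every unit excites itself and feels a
    global inhibition proportional to the total activity, so only the unit
    with the strongest input survives.  Software sketch, not the circuit."""
    x = np.asarray(inputs, dtype=float)
    a = np.zeros_like(x)                                 # unit activations
    for _ in range(steps):
        global_inhibition = inhibition * a.sum()
        a = np.maximum(0.0, x + a - global_inhibition)   # rectified update
    return a

print(hard_wta([0.2, 0.9, 0.5, 0.3]))   # only the 0.9 unit stays active
```

    With the chosen self-excitation and inhibition gains the winning unit settles at a finite activity while all other units are driven to zero, which is the selection-and-suppression behavior the abstract refers to.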

  4. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    Directory of Open Access Journals (Sweden)

    Giacomo Indiveri

    2008-09-01

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals, and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits ranging from single-chip vision sensors for selecting and tracking the position of salient features, to multi-chip systems implementing saliency-map-based models of selective attention.

  5. A Novel Phase-Transformation Activation Process toward Ni-Mn-O Nanoprism Arrays for 2.4 V Ultrahigh-Voltage Aqueous Supercapacitors.

    Science.gov (United States)

    Zuo, Wenhua; Xie, Chaoyue; Xu, Pan; Li, Yuanyuan; Liu, Jinping

    2017-09-01

    One of the key challenges of aqueous supercapacitors is the relatively low voltage (0.8-2.0 V), which significantly limits the energy density and feasibility of practical applications of the device. Herein, this study reports a novel Ni-Mn-O solid-solution cathode to widen the supercapacitor device voltage, which can potentially suppress the oxygen evolution reaction and thus be operated stably within a quite wide potential window of 0-1.4 V (vs saturated calomel electrode) after a simple but unique phase-transformation electrochemical activation. The solid-solution structure is designed with an ordered array architecture and in situ nanocarbon modification to promote the charge/mass transfer kinetics. By pairing with a commercial activated carbon anode, an ultrahigh-voltage asymmetric supercapacitor in neutral aqueous LiCl electrolyte is assembled (2.4 V; among the highest for single-cell supercapacitors). Moreover, by using a polyvinyl alcohol (PVA)-LiCl electrolyte, a 2.4 V hydrogel supercapacitor is further developed with an excellent Coulombic efficiency, good rate capability, and remarkable cycle life (>5000 cycles; 95.5% capacity retention). A single cell can power a light-emitting diode indicator brightly. The resulting maximum volumetric energy density is 4.72 mWh cm⁻³, which is much superior to previous thin-film manganese-oxide-based supercapacitors and even battery-supercapacitor hybrid devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A Low-Noise, Modular, and Versatile Analog Front-End Intended for Processing In Vitro Neuronal Signals Detected by Microelectrode Arrays

    Directory of Open Access Journals (Sweden)

    Giulia Regalia

    2015-01-01

    The collection of good-quality extracellular neuronal spikes from neuronal cultures coupled to Microelectrode Arrays (MEAs) is a binding requirement for gathering reliable data. Due to physical constraints, low power requirements, or the need for customizability, commercial recording platforms are not fully adequate for the development of experimental setups integrating MEA technology with other equipment needed to perform experiments under climate-controlled conditions, like environmental chambers or cell culture incubators. To address this issue, we developed a custom MEA interfacing system featuring low noise, low power, and the capability to be readily integrated inside an incubator-like environment. Two stages, a preamplifier and a filter amplifier, were designed, implemented on printed circuit boards, and tested. The system is characterized by a low input-referred noise, a gain of 70 dB, and signal-to-noise ratio values of neuronal recordings comparable to those obtained with the benchmark commercial MEA system. In addition, the system was successfully integrated with an environmental MEA chamber, without harming cell cultures during experiments and without being damaged by the high humidity level. The devised system is of practical value in the development of in vitro platforms to study temporally extended neuronal network dynamics by means of MEAs.

  7. Future evolution of the Fast TracKer (FTK) processing unit

    CERN Document Server

    Gentsos, C; The ATLAS collaboration; Giannetti, P; Magalotti, D; Nikolaidis, S

    2014-01-01

    The Fast Tracker (FTK) processor [1] for the ATLAS experiment has a computing core made of 128 Processing Units that reconstruct tracks in the silicon detector in a ~100 μsec deep pipeline. The track parameter resolution provided by FTK enables the HLT trigger to efficiently identify and reconstruct significant samples of fermionic Higgs decays. Data processing speed is achieved with custom VLSI pattern recognition, linearized track fitting executed inside modern FPGAs, pipelining, and parallel processing. One large FPGA executes full-resolution track fitting inside low-resolution candidate tracks found by a set of 16 custom ASIC devices, called Associative Memories (AM chips) [2]. The FTK dual structure, based on the cooperation of VLSI dedicated AM and programmable FPGAs, is maintained to achieve further technology performance, miniaturization and integration of the current state of the art prototypes. This allows new applications within and outside the High Energy Physics field to be fully exploited. We plan t...

  8. A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory.

    Science.gov (United States)

    Chicca, E; Badoni, D; Dante, V; D'Andreagiovanni, M; Salina, G; Carota, L; Fusi, S; Del Giudice, P

    2003-01-01

    Electronic neuromorphic devices with on-chip, on-line learning should be able to modify quickly the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full custom analog very large-scale integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses which can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During the stimulation the synapse undergoes quick temporary changes through the activities of the pre- and postsynaptic neurons; those changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate a randomness suited to drive the stochastic selection mechanism. We check by a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.
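
    The stochastic-learning idea (binary synapses whose transitions are driven probabilistically by pre- and postsynaptic activity) is easy to illustrate in software. The sketch below uses assumed transition probabilities and a random activity pattern; it reflects the general principle only, not the transistor-level implementation of the chip.

```python
import random

def update_synapse(state, pre_active, post_active,
                   p_potentiate=0.05, p_depress=0.05):
    """Bistable synapse with stochastic, activity-driven transitions.

    state is 0 (depressed) or 1 (potentiated).  A pre/post coincidence is a
    candidate for potentiation, pre-only activity a candidate for depression,
    and each candidate transition happens only with a small probability --
    the slow, stochastic learning that protects previously stored patterns.
    Probabilities are illustrative, not the chip's values."""
    if pre_active and post_active and random.random() < p_potentiate:
        return 1
    if pre_active and not post_active and random.random() < p_depress:
        return 0
    return state

# Drive one synapse with 1000 random pre/postsynaptic activity patterns.
random.seed(0)
state = 0
for _ in range(1000):
    pre, post = random.random() < 0.2, random.random() < 0.2
    state = update_synapse(state, pre, post)
print("final synaptic state:", state)
```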

  9. Imaging spectroscopy using embedded diffractive optical arrays

    Science.gov (United States)

    Hinnrichs, Michele; Hinnrichs, Bradford

    2017-09-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera based on diffractive optic arrays. This approach to hyperspectral imaging has been demonstrated in all three infrared bands: SWIR, MWIR and LWIR. The hyperspectral optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of this infrared hyperspectral sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics that are made up of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system small enough for a payload on a small satellite, mini-UAV, or commercial quadcopter, or for man-portable use. We also present an application in which this spectral imaging technology is used to quantify the mass and volume flow rates of hydrocarbon gases. The diffractive optical elements used in the lenslet array are blazed gratings where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. The detector array is divided into sub-images covered by each lenslet. We have developed various systems using a different number of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the number of different spectral images collected simultaneously in each frame of the camera. A 2 x 2 lenslet array will image

  10. Triggering the GRANDE array

    International Nuclear Information System (INIS)

    Wilson, C.L.; Bratton, C.B.; Gurr, J.; Kropp, W.; Nelson, M.; Sobel, H.; Svoboda, R.; Yodh, G.; Burnett, T.; Chaloupka, V.; Wilkes, R.J.; Cherry, M.; Ellison, S.B.; Guzik, T.G.; Wefel, J.; Gaidos, J.; Loeffler, F.; Sembroski, G.; Goodman, J.; Haines, T.J.; Kielczewska, D.; Lane, C.; Steinberg, R.; Lieber, M.; Nagle, D.; Potter, M.; Tripp, R.

    1990-01-01

    A brief description of the Gamma Ray And Neutrino Detector Experiment (GRANDE) is presented. The detector elements and electronics are described. The trigger logic for the array is then examined. The triggers for the Gamma Ray and the Neutrino portions of the array are treated separately. (orig.)

  11. ISS Solar Array Management

    Science.gov (United States)

    Williams, James P.; Martin, Keith D.; Thomas, Justin R.; Caro, Samuel

    2010-01-01

    The International Space Station (ISS) Solar Array Management (SAM) software toolset provides the capabilities necessary to operate a spacecraft with complex solar array constraints. It monitors spacecraft telemetry and provides interpretations of solar array constraint data in an intuitive manner. The toolset provides extensive situational awareness to ensure mission success by analyzing power generation needs, array motion constraints, and structural loading situations. The software suite consists of several components including samCS (constraint set selector), samShadyTimers (array shadowing timers), samWin (visualization GUI), samLock (array motion constraint computation), and samJet (attitude control system configuration selector). It provides high availability and uptime for extended and continuous mission support. It is able to support two-degrees-of-freedom (DOF) array positioning and supports up to ten simultaneous constraints with intuitive 1D and 2D decision support visualizations of constraint data. Display synchronization is enabled across a networked control center and multiple methods for constraint data interpolation are supported. Use of this software toolset increases flight safety, reduces mission support effort, optimizes solar array operation for achieving mission goals, and has run for weeks at a time without issues. The SAM toolset is currently used in ISS real-time mission operations.

  12. Investigation of proposed process sequence for the array automated assembly task. Phase I and II. Final report, October 1, 1977-June 30, 1980

    Energy Technology Data Exchange (ETDEWEB)

    Mardesich, N.; Garcia, A.; Eskenas, K.

    1980-08-01

    A selected process sequence for the low cost fabrication of photovoltaic modules was defined during this contract. Each part of the process sequence was looked at regarding its contribution to the overall dollars per watt cost. During the course of the research done, some of the initially included processes were dropped due to technological deficiencies. The printed dielectric diffusion mask, codiffusion of the n+ and p+ regions, wraparound front contacts and retention of the diffusion oxide for use as an AR coating were all the processes that were removed for this reason. Other process steps were retained to achieve the desired overall cost and efficiency. Square wafers, a polymeric spin-on PX-10 diffusion source, a p+ back surface field and silver front contacts are all processes that have been recommended for use in this program. The printed silver solderable pad for making contact to the aluminum back was replaced by an ultrasonically applied tin-zinc pad. Also, the texturized front surface was dropped as inappropriate for the sheet silicon likely to be available in 1986. Progress has also been made on the process sequence for module fabrication. A shift from bonding with a conformal coating to laminating with ethylene vinyl acetate and a glass superstrate is recommended for further module fabrication. The finalized process sequence is described.

  13. Phased Array Ultrasonic Inspection of Titanium Forgings

    International Nuclear Information System (INIS)

    Howard, P.; Klaassen, R.; Kurkcu, N.; Barshinger, J.; Chalek, C.; Nieters, E.; Sun, Zongqi; Fromont, F. de

    2007-01-01

    Aerospace forging inspections typically use multiple, subsurface-focused sound beams in combination with digital C-scan image acquisition and display. Traditionally, forging inspections have been implemented using multiple single element, fixed focused transducers. Recent advances in phased array technology have made it possible to perform an equivalent inspection using a single phased array transducer. General Electric has developed a system to perform titanium forging inspection based on medical phased array technology and advanced image processing techniques. The components of that system and system performance for titanium inspection will be discussed
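
    Replacing several fixed-focus probes with a single phased-array transducer hinges on per-element focal laws: the firing delays that make all element wavefronts arrive at the chosen focal point simultaneously. The sketch below computes such delays for a linear array under simple geometric assumptions; the element count, pitch, focal depth and sound velocity are illustrative values, not parameters of the GE system.

```python
import numpy as np

def focal_delays(n_elements, pitch_mm, focus_depth_mm, velocity_mm_per_us):
    """Firing delays (in microseconds) that focus a linear array on a point
    located on the array axis at the requested depth.  Outer elements, which
    have the longest path to the focus, fire first (zero delay); steering is
    omitted for brevity."""
    x = (np.arange(n_elements) - (n_elements - 1) / 2.0) * pitch_mm
    path = np.sqrt(x**2 + focus_depth_mm**2)     # element-to-focus distances
    return (path.max() - path) / velocity_mm_per_us

# Example: 16 elements, 0.5 mm pitch, focus at 25 mm depth in titanium (~6.1 mm/us).
print(np.round(focal_delays(16, 0.5, 25.0, 6.1), 4))
```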

  14. rasdaman Array Database: current status

    Science.gov (United States)

    Merticariu, George; Toader, Alexandru

    2015-04-01

    rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance with the increase of computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (legacy communication protocol replaced with a new one based on cutting edge technology - Google Protocol Buffers and ZeroMQ). Among the data with which the system works, we can count 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored only in the form of raw arrays, as the location information of the contents is also important for having a correct geoposition on Earth. This is defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which
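
    As an illustration of the coverage services listed above, a spatial subset of a coverage can be requested through the standard WCS 2.0 GetCoverage interface exposed by Petascope. The endpoint URL, coverage name and subsetting axes below are placeholders chosen for illustration; the actual values depend entirely on the server and on how the coverage was ingested.

```python
import requests

# Placeholder Petascope (rasdaman's OGC frontend) endpoint and coverage name.
ENDPOINT = "http://example.org/rasdaman/ows"

params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "mean_summer_airtemp",          # assumed coverage name
    "subset": ["Lat(-40,-30)", "Long(140,150)"],  # trim to a spatial window
    "format": "image/tiff",
}

# Build the WCS 2.0 KVP request; the repeated 'subset' key becomes two key-value
# pairs, as the standard expects.  Issuing requests.get(ENDPOINT, params=params)
# against a live server would return the encoded coverage subset.
url = requests.Request("GET", ENDPOINT, params=params).prepare().url
print(url)
```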

  15. Study and Design of Differential Microphone Arrays

    CERN Document Server

    Benesty, Jacob

    2013-01-01

    Microphone arrays have attracted a lot of interest over the last few decades since they have the potential to solve many important problems such as noise reduction/speech enhancement, source separation, dereverberation, spatial sound recording, and source localization/tracking, to name a few. However, the design and implementation of microphone arrays with beamforming algorithms is not a trivial task when it comes to processing broadband signals such as speech. Indeed, in most sensor arrangements, the beamformer tends to have a frequency-dependent response. One exception, perhaps, is the family of differential microphone arrays (DMAs) that have the promise to form frequency-independent responses. Moreover, they have the potential to attain high directional gains with small and compact apertures. As a result, this type of microphone arrays has drawn much research and development attention recently. This book is intended to provide a systematic study of DMAs from a signal processing perspective. The primary obj...
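
    A first-order differential array already captures the core idea discussed in the book: two closely spaced omnidirectional microphones combined with a small delay yield a directional (here cardioid-like) response that is largely frequency independent. The following sketch is a simplified illustration; the spacing, sampling rate and linear-interpolation fractional delay are assumptions, and a practical design would add a low-frequency equalization stage.

```python
import numpy as np

def first_order_dma(mic_front, mic_rear, spacing_m=0.01, fs=16000, c=343.0):
    """First-order differential beamformer (cardioid pointing toward mic_front).

    The rear microphone signal is delayed by the acoustic travel time across
    the element spacing and subtracted, which nulls sound arriving from the
    rear.  A linear-interpolation fractional delay is used for simplicity."""
    delay = spacing_m / c * fs                    # delay in samples (fractional)
    k = int(np.floor(delay))
    frac = delay - k
    n = len(mic_rear)
    rear_delayed = np.zeros_like(mic_rear)
    rear_delayed[k + 1:] = (1 - frac) * mic_rear[1:n - k] + frac * mic_rear[:n - k - 1]
    return mic_front - rear_delayed

# A plane wave arriving from the rear should be strongly attenuated.
fs, f = 16000, 500.0
t = np.arange(fs) / fs
tau = 0.01 / 343.0                                # travel time across 1 cm
rear_first = np.sin(2 * np.pi * f * t)            # wavefront reaches the rear mic first
front_late = np.sin(2 * np.pi * f * (t - tau))    # ...and the front mic a bit later
out = first_order_dma(front_late, rear_first, 0.01, fs)
print("rear-arrival suppression:",
      round(20 * np.log10(np.std(out) / np.std(front_late)), 1), "dB")
```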

  16. LOFAR, the low frequency array

    Science.gov (United States)

    Vermeulen, R. C.

    2012-09-01

    LOFAR, the Low Frequency Array, is a next-generation radio telescope designed by ASTRON, with antenna stations concentrated in the north of the Netherlands and currently spread into Germany, France, Sweden and the United Kingdom; plans for more LOFAR stations exist in several other countries. Utilizing a novel, phased-array design, LOFAR is optimized for the largely unexplored low frequency range between 30 and 240 MHz. Digital beam-forming techniques make the LOFAR system agile and allow for rapid re-pointing of the telescopes as well as the potential for multiple simultaneous observations. Processing (e.g. cross-correlation) takes place in the LOFAR BlueGene/P supercomputer, and associated post-processing facilities. With its dense core (inner few km) array and long (more than 1000 km) interferometric baselines, LOFAR reaches unparalleled sensitivity and resolution in the low frequency radio regime. The International LOFAR Telescope (ILT) is now issuing its first call for observing projects that will be peer reviewed and selected for observing starting in December. Part of the allocations will be made on the basis of a fully Open Skies policy; there are also reserved fractions assigned by national consortia in return for contributions from their country to the ILT. In this invited talk, the gradually expanding complement of operationally verified observing modes and capabilities are reviewed, and some of the exciting first astronomical results are presented.

  17. DNA electrophoresis through microlithographic arrays

    International Nuclear Information System (INIS)

    Sevick, E.M.; Williams, D.R.M.

    1996-01-01

    Electrophoresis is one of the most widely used techniques in biochemistry and genetics for size-separating charged molecular chains such as DNA or synthetic polyelectrolytes. The separation is achieved by driving the chains through a gel with an external electric field. As a result of the field and the obstacles that the medium provides, the chains have different mobilities and are physically separated after a given process time. The macroscopically observed mobility scales inversely with chain size: small molecules move through the medium quickly while larger molecules move more slowly. However, electrophoresis remains a tool that has yet to be optimised for most efficient size separation of polyelectrolytes, particularly large polyelectrolytes, e.g. DNA in excess of 30-50 kbp. Microlithographic arrays etched with an ordered pattern of obstacles provide an attractive alternative to gel media and provide wider avenues for size separation of polyelectrolytes and promote a better understanding of the separation process. Its advantages over gels are (1) the ordered array is durable and can be re-used, (2) the array morphology is ordered and can be standardized for specific separation, and (3) calibration with a marker polyelectrolyte is not required as the array is reproduced to high precision. Most importantly, the array geometry can be graduated along the chip so as to expand the size-dependent regime over larger chain lengths and postpone saturation. In order to predict the effect of obstacles upon the chain-length dependence in mobility and hence, size separation, we study the dynamics of single chains using theory and simulation. We present recent work describing: 1) the release kinetics of a single DNA molecule hooked around a point, frictionless obstacle and in both weak and strong field limits, 2) the mobility of a chain impinging upon point obstacles in an ordered array of obstacles, demonstrating the wide range of interactions possible between the chain and

  18. Silver Nanowire Arrays : Fabrication and Applications

    OpenAIRE

    Feng, Yuyi

    2016-01-01

    Nanowire arrays have increasingly received attention for their use in a variety of applications such as surface-enhanced Raman scattering (SERS), plasmonic sensing, and electrodes for photoelectric devices. However, until now, large scale fabrication of device-suitable metallic nanowire arrays on supporting substrates has seen very limited success. This thesis describes my work first on the development of a novel successful processing route for the fabrication of uniform noble metallic (e.g. A...

  19. Introduction to adaptive arrays

    CERN Document Server

    Monzingo, Bob; Haupt, Randy

    2011-01-01

    This second edition is an extensive modernization of the bestselling introduction to the subject of adaptive array sensor systems. With the number of applications of adaptive array sensor systems growing each year, this look at the principles and fundamental techniques that are critical to these systems is more important than ever before. Introduction to Adaptive Arrays, 2nd Edition is organized as a tutorial, taking the reader by the hand and leading them through the maze of jargon that often surrounds this highly technical subject. It is easy to read and easy to follow as fundamental concept

  20. Piezoelectric transducer array microspeaker

    KAUST Repository

    Carreno, Armando Arpys Arevalo

    2016-12-19

    In this paper we present the fabrication and characterization of a piezoelectric micro-speaker. The speaker is an array of micro-machined piezoelectric membranes, fabricated on a silicon wafer using advanced micro-machining techniques. Each array contains 2^n piezoelectric transducer membranes, where "n" is the bit number. Every element of the array has a circular shape structure. The membrane is made of four layers: 300 nm of platinum for the bottom electrode, 250 nm of lead zirconate titanate (PZT), a top electrode of 300 nm and a structural layer of 50

  1. Invasive tightly coupled processor arrays

    CERN Document Server

    LARI, VAHID

    2016-01-01

    This book introduces new massively parallel computer (MPSoC) architectures called invasive tightly coupled processor arrays. It proposes strategies, architecture designs, and programming interfaces for invasive TCPAs that allow invading and subsequently executing loop programs with strict requirements or guarantees of non-functional execution qualities such as performance, power consumption, and reliability. For the first time, such a configurable processor array architecture consisting of locally interconnected VLIW processing elements can be claimed by programs, either in full or in part, using the principle of invasive computing. Invasive TCPAs provide unprecedented energy efficiency for the parallel execution of nested loop programs by avoiding the global memory accesses that GPUs rely on, and may even support loops with complex dependencies such as loop-carried dependencies that are not amenable to parallel execution on GPUs. For this purpose, the book proposes different invasion strategies for claiming a desire...

  2. Application of statistics to VLSI circuit manufacturing : test, diagnosis, and reliability

    NARCIS (Netherlands)

    Krishnan, Shaji

    2017-01-01

    Semiconductor product manufacturing companies strive to deliver defect free, and reliable products to their customers. However, with the down-scaling of technology, increasing the throughput at every stage of semiconductor product manufacturing becomes a harder challenge. To avoid process-related

  3. Protein Functionalized Nanodiamond Arrays

    Directory of Open Access Journals (Sweden)

    Liu YL

    2010-01-01

    Various nanoscale elements are currently being explored for bio-applications, such as bio-imaging, bio-detection, and bio-sensing. Among them, nanodiamonds possess remarkable features such as low bio-cytotoxicity, good optical properties in fluorescence and Raman spectra, and good photostability for bio-applications. In this work, we devise techniques to position functionalized nanodiamonds on self-assembled monolayer (SAM) arrays adsorbed on silicon and ITO substrate surfaces using electron beam lithography techniques. The nanodiamond arrays were functionalized with lysozyme to target a certain biomolecule or protein specifically. The optical properties of the nanodiamond-protein complex arrays were characterized by a high-throughput confocal microscope. The synthesized nanodiamond-lysozyme complex arrays were found to still retain their functionality in interacting with E. coli.

  4. Photonic Crystal Nanocavity Arrays

    National Research Council Canada - National Science Library

    Altug, Hatice; Vuckovic, Jelena

    2006-01-01

    We recently proposed two-dimensional coupled photonic crystal nanocavity arrays as a route to achieve a slow-group velocity of light in all crystal directions, thereby enabling numerous applications...

  5. Carbon nanotube array actuators

    International Nuclear Information System (INIS)

    Geier, S; Mahrholz, T; Wierach, P; Sinapius, M

    2013-01-01

    Experimental investigations of highly vertically aligned carbon nanotubes (CNTs), also known as CNT-arrays, are the main focus of this paper. The free strain as a result of an active material behavior is analyzed via a novel experimental setup. Previous tests of papers made of randomly oriented CNTs, also called Bucky-papers, reveal comparably low free strain. The anisotropy of aligned CNTs promises better performance. Via synthesis techniques like chemical vapor deposition (CVD) or plasma enhanced CVD (PECVD), highly aligned arrays of multi-walled carbon nanotubes (MWCNTs) are synthesized. Two different types of CNT-arrays are analyzed, morphologically first, and optically tested for their active characteristics afterwards. One type of the analyzed arrays features tube lengths of 750–2000 μm with a large variety of diameters between 20 and 50 nm and a wave-like CNT-shape. The second type features a maximum, almost uniform, length of 12 μm and a constant diameter of 50 nm. Different CNT-lengths and array types are tested for their active behavior. As a result of the presented tests, it is reported that the quality of orientation is the most decisive property for excellent active behavior. Due to their alignment, CNT-arrays feature the opportunity to clarify the actuation mechanism of architectures made of CNTs. (paper)

  6. ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays

    Science.gov (United States)

    Vasile, Stefan; Lipson, Jerold

    2012-01-01

    The objective of this work was to develop a new class of readout integrated circuit (ROIC) arrays to be operated with Geiger avalanche photodiode (GPD) arrays, by integrating multiple functions at the pixel level (smart-pixel or active pixel technology) in 250-nm CMOS (complementary metal oxide semiconductor) processes. In order to pack a maximum of functions within a minimum pixel size, the ROIC array is a full, custom application-specific integrated circuit (ASIC) design using a mixed-signal CMOS process with compact primitive layout cells. The ROIC array was processed to allow assembly in bump-bonding technology with photon-counting infrared detector arrays into 3-D imaging cameras (LADAR). The ROIC architecture was designed to work with either common- anode Si GPD arrays or common-cathode InGaAs GPD arrays. The current ROIC pixel design is hardwired prior to processing one of the two GPD array configurations, and it has the provision to allow soft reconfiguration to either array (to be implemented into the next ROIC array generation). The ROIC pixel architecture implements the Geiger avalanche quenching, bias, reset, and time to digital conversion (TDC) functions in full-digital design, and uses time domain over-sampling (vernier) to allow high temporal resolution at low clock rates, increased data yield, and improved utilization of the laser beam.
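
    The time-domain over-sampling (vernier) technique mentioned above can be illustrated with a small behavioral model: two oscillators with slightly different periods resolve a time interval to their period difference rather than to a full clock period. The clock periods and the loop below are illustrative assumptions, not the ASIC implementation.

```python
def vernier_tdc(interval_ns, t_slow=5.00, t_fast=4.75, max_cycles=1000):
    """Behavioral model of a vernier time-to-digital converter.

    A slow oscillator (period t_slow) is launched by the start signal and a
    slightly faster one (period t_fast) by the stop signal; the cycle count at
    which the fast edge overtakes the slow edge encodes the interval with a
    resolution of t_slow - t_fast (0.25 ns with these illustrative values),
    far finer than either clock period."""
    slow_edge = 0.0             # slow oscillator starts with the start signal
    fast_edge = interval_ns     # fast oscillator starts with the stop signal
    for n in range(1, max_cycles):
        slow_edge += t_slow
        fast_edge += t_fast
        if fast_edge <= slow_edge:           # the fast oscillator caught up
            return n * (t_slow - t_fast)     # digitized interval estimate
    return None                              # interval too long for this range

for true_interval in (0.6, 1.3, 3.9):
    print(true_interval, "ns ->", vernier_tdc(true_interval), "ns")
```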

  7. Microfabricated hollow microneedle array using ICP etcher

    Science.gov (United States)

    Ji, Jing; Tay, Francis E. H.; Miao, Jianmin

    2006-04-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF6/O2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases.

  8. Microfabricated hollow microneedle array using ICP etcher

    International Nuclear Information System (INIS)

    Ji Jing; Tay, Francis E H; Miao Jianmin

    2006-01-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF 6 /O 2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases

  9. Microfabricated hollow microneedle array using ICP etcher

    Energy Technology Data Exchange (ETDEWEB)

    Ji Jing [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Tay, Francis E H [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Miao Jianmin [MicroMachines Center, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 (Singapore)

    2006-04-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF{sub 6}/O{sub 2} isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases.

  10. Reliability data collection on IC and VLSI devices tested under accelerated life conditions

    International Nuclear Information System (INIS)

    Barry, D.M.; Meniconi, M.

    1986-01-01

    As part of a more general investigation into the reliability and failure causes of semiconductor devices, statistical samples of integrated circuit devices (LM741C) and dynamic random access memory devices (TMS4116) were tested destructively to failure using elevated temperature as the accelerating stress. The devices were operated during the life test, and the failure data generated were collected automatically using a multiple question-and-answer program and a process control computer. The failure data were modelled with the lognormal, inverse Gaussian and Weibull distributions, using an Arrhenius reaction rate model. The failed devices were later decapsulated for failure cause determination. (orig./DG)
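
    For context, the Arrhenius reaction-rate model used in such accelerated life tests relates failure times observed at an elevated temperature to the expected times at the use temperature through an activation energy. The sketch below computes the corresponding acceleration factor; the activation energy and temperatures are illustrative placeholders, not values from this study.

```python
import math

BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between a stress temperature and a use
    temperature (both in deg C) for a given activation energy in eV."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

# Illustrative numbers only: 0.7 eV activation energy, stress at 125 C,
# use condition at 55 C; 1000 h of stress then corresponds to ~AF * 1000 h of use.
af = arrhenius_af(0.7, 55.0, 125.0)
print(f"acceleration factor ~ {af:.0f}, so 1000 h of stress ~ {af * 1000:.0f} h of use")
```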

  11. Ion distribution near a mask edge with arbitrary shape for VLSI IC applications

    International Nuclear Information System (INIS)

    Lutsch, A.G.K.; Oosthuizen, D.R.

    1985-01-01

    The profile of the mask edge during ion implantation determines the electrical field in the critical drain region of a MOS transistor. Equal ion density lines are computed for various mask edges for the example of boron implanted into silicon at 70 keV. Four moments of the impurity depth distribution (without mask material) are taken into consideration. Homogenisation, and therefore a higher noise immunity, can be obtained by the proper choice of the mask etching process. The influence of a too-thin mask material is also shown. (author)

  12. Study on transfer rule of chemical constituents of tianshu capsule in productive process by high-performance liquid chromatography coupled with diode-array detection and quadrupole time-of-flight tandem mass spectrometry

    International Nuclear Information System (INIS)

    Lian, Y.P.; Xie, D.W.; Li, Y.J.; Xiao, W.; Huang, W.Z.; Ding, G.

    2016-01-01

    To develop a sensitive and accurate method for the fingerprint study and the transfer rule of chemical constituents from Ligusticum chuanxiong Hort and Gastrodia elata Blume to Tianshu capsule in the production process, a high-performance liquid chromatography method coupled with diode-array detection and electrospray ionization quadrupole time-of-flight mass spectrometry (LC/QTOF-MS) was established for analysis. The reference fingerprints of the aqueous extract intermediate of the medicinal material, the alcohol extract intermediate of the medicinal material, and the Tianshu capsule were established. The methodology was studied and the similarity was more than 0.99. The chromatographic methods demonstrated good precision, repeatability and stability, with relative standard deviations of less than 3 percent for relative retention time and relative peak area. According to these fingerprints, some chemical constituents were identified or tentatively identified based on their retention time, exact molecular weight and the literature. Among them, 26 constituents were from Ligusticum chuanxiong Hort and nine components were from Gastrodia elata Blume. 25 out of 26 compounds entered the transfer process, and 17 compounds were transferred from the intermediates to the final preparation, the Tianshu capsule. Thus, it is reasonable to use this transfer process to demonstrate the production technology. To sum up, this method is sensitive, accurate and useful, and it could provide an approach to evaluate the production technology of Tianshu capsule. (author)

  13. Blending of phased array data

    Science.gov (United States)

    Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno

    2018-04-01

    The use of phased arrays is growing in the non-destructive testing industry and the trend is towards large 2D arrays, but due to limitations, it is currently not possible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme `beyond spatial aliasing' to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) in only a few transmit/recording channels. This allows for transmission and recording with all elements, in a shorter acquisition time and with less channels. On the data processing side, this blended data is deblended (separated) by transforming it to a different domain and applying an iterative filtering and thresholding. Two different filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. The wavefield extrapolation filtering proves to outperform f-k filtering. The wavefield extrapolation method can deal with groups of up to 24 receivers, in a phased array of 48 × 48 elements.
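
    The core building block of the iterative filtering-and-thresholding loop described above is a coherence pass in a transform domain: coherent reflections map onto a few strong f-k coefficients, while blending crosstalk, which is incoherent after pseudo-deblending, spreads out and is rejected. The sketch below shows only that thresholding step on a toy gather, using a plain 2-D FFT as a stand-in for the f-k transform; the full scheme alternates this step with subtraction of the predicted crosstalk between sources, and the paper's wavefield-extrapolation filter is not reproduced here.

```python
import numpy as np

def fk_threshold(gather, keep_fraction=0.002):
    """Keep only the strongest f-k coefficients of a t-x gather.

    Coherent events map onto a few strong f-k coefficients, while blending
    crosstalk (incoherent after pseudo-deblending) spreads its energy and is
    rejected.  A plain 2-D FFT stands in for the f-k transform here."""
    spec = np.fft.fft2(gather)
    thresh = np.quantile(np.abs(spec), 1.0 - keep_fraction)
    spec[np.abs(spec) < thresh] = 0.0
    return np.real(np.fft.ifft2(spec))

# Toy gather: a coherent plane-wave event buried in spiky blending crosstalk.
rng = np.random.default_rng(0)
nt, nx = 128, 64
t, x = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
signal = np.cos(2 * np.pi * (10 * t / nt - 5 * x / nx))
crosstalk = np.zeros((nt, nx))
crosstalk[rng.integers(0, nt, 200), rng.integers(0, nx, 200)] = 3.0
estimate = fk_threshold(signal + crosstalk)
before = np.sqrt(np.mean(crosstalk ** 2))
after = np.sqrt(np.mean((estimate - signal) ** 2))
print("rms error before / after f-k thresholding:", round(before, 3), round(after, 3))
```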

  14. Low cost silicon solar array project. Task 1: Establishment of the feasibility of a process capable of low cost, high volume production of silane, SiH4

    Science.gov (United States)

    Breneman, W. C.; Mui, J. Y. P.

    1976-01-01

    The kinetics of the redistribution of dichlorosilane and trichlorosilane vapor over a tertiary amine ion exchange resin catalyst were investigated. The hydrogenation of SiCl4 to form HSiCl3 and the direct synthesis of H2SiCl2 from HCl gas and metallurgical silicon metal were also studied. The purification of SiH4 using activated carbon adsorbent was studied along with a process for storing SiH4 absorbed on carbon. The latter makes possible a higher volumetric efficiency than compressed gas storage. A mini-plant designed to produce ten pounds per day of SiH4 is described.

  15. Relation between film character and wafer alignment: critical alignment issues on HV device for VLSI manufacturing

    Science.gov (United States)

    Lo, Yi-Chuan; Lee, Chih-Hsiung; Lin, Hsun-Peng; Peng, Chiou-Shian

    1998-06-01

    Several successive splits of the wafer alignment target topography conditions were applied to improve alignment on epitaxy films. The alignment evaluation covers splits of the former-layer pad oxide thickness (250-500 angstrom), drive oxide thickness (6000-10000 angstrom), nitride film thickness (600-1500 angstrom), and the initial oxide etch (fully wet etch, fully dry etch, and dry plus wet etch). Various epitaxy deposition recipes, such as the epitaxy source (SiH2Cl2 or SiHCl3) and the growth rate (1.3-2.0 micrometer/min), are also used to optimize the process window for the alignment issue. The reflectance signals and cross-section photographs of the alignment target during the NIKON stepper alignment process are examined. Experimental results show that the epitaxy recipe plays an important role in wafer alignment. A low growth rate with good conformity in the epitaxy keeps the alignment target from washout, pattern shift, and distortion. All the results (signal monitoring and film character), combined with NIKON's standard laser scanning stepper alignment system, are discussed in this paper.

  16. The Future Evolution of the Fast TracKer Processing Unit

    CERN Document Server

    Gentsos, Christos; The ATLAS collaboration; Magalotti, Daniel; Bertolucci, Federico; Citraro, Saverio; Kordas, Kostantinos; Nikolaidis, Spyridon

    2016-01-01

    Real time tracking is a key ingredient for online event selection at hadron colliders. The Silicon Vertex Tracker at the CDF experiment and the Fast Tracker (FTK) at ATLAS are two successful examples of the importance of dedicated hardware to reconstruct full events at hadron machines. We present the future evolution of this technology, for applications in the High Luminosity runs at the Large Hadron Collider (HL-LHC). Data processing speed is achieved with custom VLSI pattern recognition and linearized track fitting executed inside modern FPGAs, exploiting deep pipelining, extensive parallelism, and efficient use of available resources. In the current system, one large FPGA executes track fitting in full resolution inside low resolution candidate tracks found by a set of custom ASIC devices, called Associative Memories (AM chips). The FTK dual structure, based on the cooperation of VLSI AM and programmable FPGAs, is maintained, but we plan to increase the FPGA parallelism by associating one FPGA to each AM c...

  17. The Future Evolution of the Fast TracKer Processing Unit

    CERN Document Server

    Gentsos, Christos; The ATLAS collaboration; Magalotti, Daniel; Bertolucci, Federico; Citraro, Saverio; Kordas, Kostantinos; Nikolaidis, Spyridon

    2015-01-01

    Real time tracking is a key ingredient for online event selection at hadron colliders. The Silicon Vertex Tracker at the CDF experiment and the Fast Tracker (FTK) at ATLAS are two successful examples of the importance of dedicated hardware to reconstruct full events at hadron machines. We present the future evolution of this technology, for applications in the High Luminosity runs at the Large Hadron Collider (HL-LHC). Data processing speed is achieved with custom VLSI pattern recognition and linearized track fitting executed inside modern FPGAs, exploiting deep pipelining, extensive parallelism, and efficient use of available resources. In the current system, one large FPGA executes track fitting in full resolution inside low resolution candidate tracks found by a set of custom ASIC devices, called Associative Memories (AM chips). The FTK dual structure, based on the cooperation of VLSI AM and programmable FPGAs, is maintained, but we plan to increase the FPGA parallelism by associating one FPGA to each AM c...

  18. A VLSI front-end circuit for microstrip silicon detectors for medical imaging applications

    International Nuclear Information System (INIS)

    Beccherle, R.; Cisternino, A.; Guerra, A. Del; Folli, M.; Marchesini, R.; Bisogni, M.G.; Ceccopieri, A.; Rosso, V.; Stefanini, A.; Tripiccione, R.; Kipnis, I.

    1999-01-01

    An analog CMOS-Integrated Circuit has been developed as Front-End for a double-sided microstrip silicon detector. The IC processes and discriminates signals in the 5-30 keV energy range. Main features are low noise and precise timing information. Low noise is achieved by optimizing the cascoded integrator with the 8 pF detector capacitance and by using an inherently low noise 1.2 μm CMOS technology. Timing information is provided by a double discriminator architecture. The output of the circuit is a digital pulse. The leading edge is determined by a fixed threshold discriminator, while the trailing edge is provided by a zero crossing discriminator. In this paper we first describe the architecture of the Front-End chip. We then present the performance of the chip prototype in terms of noise, minimum discrimination threshold and time resolution
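
    The double-discriminator timing scheme can be illustrated numerically: the leading edge of the digital output comes from a fixed-threshold comparison (which walks with pulse amplitude), while the trailing edge comes from the zero crossing of the bipolar shaped pulse (which does not). The pulse shape, threshold and time scales below are invented for illustration and are not the chip's actual waveforms.

```python
import numpy as np

def discriminate(pulse, threshold, dt_ns):
    """Emulate the double-discriminator output on a sampled shaped pulse.

    The leading edge of the digital output is set where the pulse first
    exceeds a fixed threshold; the trailing edge is set at the following
    zero crossing of the bipolar pulse, which carries the amplitude-
    independent timing information."""
    above = np.nonzero(pulse > threshold)[0]
    if above.size == 0:
        return None, None
    lead = above[0]
    tail = pulse[lead:]
    crossings = np.nonzero((tail[:-1] > 0) & (tail[1:] <= 0))[0]
    trail = lead + crossings[0] + 1 if crossings.size else None
    return lead * dt_ns, (trail * dt_ns if trail is not None else None)

# Synthetic bipolar (differentiated CR-RC style) pulse at two amplitudes.
t = np.arange(0, 400, 1.0)                      # 1 ns sampling
shape = (t / 60.0) * np.exp(1 - t / 60.0) - 0.35 * (t / 120.0) * np.exp(1 - t / 120.0)
for amp in (1.0, 3.0):
    lead, trail = discriminate(amp * shape, threshold=0.2, dt_ns=1.0)
    print(f"amplitude {amp}: leading edge at {lead} ns, zero crossing at {trail} ns")
```

    Running the sketch shows the leading edge moving with pulse amplitude (time walk) while the zero-crossing time stays fixed, which is the amplitude-independent timing the architecture targets.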

  19. Development of an integrated circuit VLSI used for time measurement and selective read out in the front end electronics of the DIRC for the Babar experience at SLAC; Developpement d'un circuit integre VLSI assurant mesure de temps et lecture selective dans l'electronique frontale du compteur DIRC de l'experience babar a slac

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1999-07-01

    This thesis deals with the design, development, and testing of a VLSI integrated circuit providing selective readout and time measurement for 16 channels. The circuit has been developed for a particle physics experiment, BABAR, that will take place at SLAC (Stanford Linear Accelerator Center). The first part describes the physics goals of the experiment, the electronics architecture, and the place of the developed circuit in the research program. The second part presents the technical design of the circuit, the prototypes leading to the final design, and the validation tests. (A.L.B.)

  20. An Approach to Maximize Weld Penetration During TIG Welding of P91 Steel Plates by Utilizing Image Processing and Taguchi Orthogonal Array

    Science.gov (United States)

    Singh, Akhilesh Kumar; Debnath, Tapas; Dey, Vidyut; Rai, Ram Naresh

    2017-10-01

    P-91 is a modified 9Cr-1Mo steel. Fabricated structures and components of P-91 have many applications in the power and chemical industries owing to excellent properties like high-temperature stress corrosion resistance and low susceptibility to thermal fatigue at high operating temperatures. The weld quality and surface finish of fabricated P91 structures are very good when welded by tungsten inert gas (TIG) welding. However, the process has its limitations regarding weld penetration. The success of a welding process lies in fabricating with such a combination of parameters that gives maximum weld penetration and minimum weld width. To investigate the effect of the autogenous TIG welding parameters on weld penetration and weld width, bead-on-plate welds were carried out on P91 plates of thickness 6 mm in accordance with a Taguchi L9 design. Welding current, welding speed and gas flow rate were the three control variables in the investigation. After autogenous TIG welding, the dimensions of the weld width, weld penetration and weld area were successfully measured by an image analysis technique developed for the study. The maximum error of the weld width, penetration and area measured with the developed image analysis technique was only 2% compared to the measurements of the Leica-Q-Win-V3 software installed on the optical microscope. The measurements with the developed software, unlike the measurements under a microscope, required minimal human intervention. An Analysis of Variance (ANOVA) confirms the significance of the selected parameters. Thereafter, Taguchi's method was successfully used to trade off between maximum penetration and minimum weld width while keeping the weld area at a minimum.
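
    For readers unfamiliar with the design used, a Taguchi L9 orthogonal array assigns three 3-level factors (here welding current, welding speed and gas flow rate) to nine runs, and factor effects are typically ranked through signal-to-noise ratios. The sketch below builds the standard L9 layout and computes larger-the-better S/N ratios; the penetration values are placeholders, not the study's measurements.

```python
import numpy as np

# Standard L9(3^4) orthogonal array (rows = runs, entries = factor levels 0..2);
# the first three columns carry welding current, welding speed and gas flow rate.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# Hypothetical weld penetrations (mm) for the nine runs -- placeholders only.
penetration = np.array([2.1, 2.6, 3.0, 2.4, 3.1, 2.2, 3.3, 2.5, 2.8])

# Larger-the-better S/N ratio, here with a single replicate per run.
sn = -10.0 * np.log10(1.0 / penetration ** 2)

# Mean S/N per level of each factor: the level with the highest mean S/N is the
# preferred setting, and the spread (delta) ranks the factor's influence.
for col, name in enumerate(["current", "speed", "gas flow"]):
    means = [sn[L9[:, col] == level].mean() for level in range(3)]
    print(f"{name:9s} mean S/N by level: {np.round(means, 2)}  "
          f"delta = {max(means) - min(means):.2f}")
```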

  1. Testing of focal plane arrays

    International Nuclear Information System (INIS)

    Merriam, J.D.

    1988-01-01

    Problems associated with the testing of focal plane arrays are briefly examined with reference to the instrumentation and measurement procedures. In particular, the approach and instrumentation used at the Naval Ocean Systems Center are presented. Most of the measurements are made with flooded illumination on the focal plane array. The array is treated as an ensemble of individual pixels: data are taken on each pixel, and array averages and standard deviations are computed for the entire array. Data maps are generated showing the pixel data in the proper spatial positions on the array, along with the array statistics.
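
    A minimal sketch of the ensemble-of-pixels reduction described above: per-pixel statistics over repeated flooded-illumination frames, array-level averages and standard deviations, and a simple outlier map. The array size, frame count and 5-sigma criterion are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a 128x128 focal plane array read out over 64 frames of flooded illumination
n_frames, rows, cols = 64, 128, 128
frames = rng.normal(loc=1000.0, scale=15.0, size=(n_frames, rows, cols))  # stand-in pixel data [DN]

# Treat the array as an ensemble of pixels: per-pixel statistics over frames...
pixel_mean = frames.mean(axis=0)          # response data map (rows x cols)
pixel_std = frames.std(axis=0, ddof=1)    # temporal noise map (rows x cols)

# ...then array-level averages and standard deviations over all pixels
print("array mean response:", pixel_mean.mean())
print("pixel-to-pixel spread (std of means):", pixel_mean.std(ddof=1))
print("mean temporal noise:", pixel_std.mean())

# A simple dead/hot pixel criterion (assumed: more than 5 sigma from the array mean)
outliers = np.abs(pixel_mean - pixel_mean.mean()) > 5 * pixel_mean.std(ddof=1)
print("flagged pixels:", int(outliers.sum()))
```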

  2. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
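
    The block-based L1 normalization that the described circuit computes in hardware can be sketched in software as follows; the cell size, block size and bin count are assumed values in the spirit of a standard HOG pipeline, not the chip's actual configuration.

```python
import numpy as np

def hog_l1_blocks(gray, cell=8, block=2, bins=9, eps=1e-6):
    """Cell-wise gradient-orientation histograms followed by block-based L1 normalization."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0          # unsigned orientation
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)

    h, w = gray.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(), minlength=bins)

    # Block-based L1 norm: divide each block of cell histograms by the sum of its entries
    feats = []
    for i in range(ch - block + 1):
        for j in range(cw - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            feats.append(v / (np.abs(v).sum() + eps))
    return np.concatenate(feats)

# Example on a synthetic 64x128 detection window (rows x cols = 128 x 64)
window = np.random.default_rng(1).random((128, 64))
print(hog_l1_blocks(window).shape)
```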

  3. VLSI systems energy management from a software perspective – A literature survey

    Directory of Open Access Journals (Sweden)

    Prasada Kumari K.S.

    2016-09-01

    Full Text Available The increasing demand for ultra-low-power electronic systems has motivated research in device technology and hardware design techniques. Experimental studies have proved that hardware innovations for power reduction are fully exploited only with the proper design of the upper-layer software. Moreover, software power and energy modelling and analysis, the first step towards energy reduction, is complex due to the inter- and intra-dependencies of processors, operating systems, application software, programming languages and compilers. The subject is vast; this paper aims to give researchers a consolidated view for arriving at solutions to power optimization problems from a software perspective. The review emphasizes that software design and implementation should be viewed from a system energy-conservation angle rather than as an isolated process. After covering a global view of end-to-end software-based power reduction techniques, from micro sensor nodes to High Performance Computing systems, specific design aspects related to battery-powered embedded computing for mobile and portable systems are addressed in detail. The findings are consolidated into two major categories: those related to research directions and those related to existing industry practices. The emerging concept of Green Software, with specific focus on mainframe computing, is also discussed in brief. Empirical results on power saving are included wherever available. The paper concludes that low-energy systems can be realized only with close coordination between the hardware architect, software architect and system architect.

  4. A simple route to vertical array of quasi-1D ZnO nanofilms on FTO surfaces: 1D-crystal growth of nanoseeds under ammonia-assisted hydrolysis process

    Directory of Open Access Journals (Sweden)

    Abd Rahman Mohd Yusri

    2011-01-01

    Full Text Available A simple method for the synthesis of ZnO nanofilms composed of a vertical array of quasi-1D ZnO nanostructures (quasi-NRs) on the surface was demonstrated via 1D crystal growth of attached nanoseeds under a rapid hydrolysis process of zinc salts in the presence of ammonia at room temperature. In a typical procedure, by simply controlling the concentration of zinc acetate and ammonia in the reaction, a high density of vertically oriented nanorod-like morphology could be obtained in a relatively short growth period (approximately 4 to 5 min) in a room-temperature process. The average diameter and length of the nanostructures are approximately 30 and 110 nm, respectively. The as-prepared quasi-NR products were pure ZnO in phase, without the presence of any zinc complexes, as confirmed by XRD characterisation. Room-temperature optical absorption spectroscopy exhibits two separate excitonic features, indicating that the as-prepared ZnO quasi-NRs have high crystallinity. A growth mechanism for the ZnO quasi-NRs is proposed. Owing to its simplicity, the method is a potential alternative for the rapid and cost-effective preparation of high-quality ZnO quasi-NR nanofilms for use in photovoltaic or photocatalytic applications. PACS: 81.07.Bc; 81.16.-c; 81.07.Gf.

  5. Computer-aided design of microfluidic very large scale integration (mVLSI) biochips design automation, testing, and design-for-testability

    CERN Document Server

    Hu, Kai; Ho, Tsung-Yi

    2017-01-01

    This book provides a comprehensive overview of flow-based, microfluidic VLSI. The authors describe and solve in a comprehensive and holistic manner practical challenges such as control synthesis, wash optimization, design for testability, and diagnosis of modern flow-based microfluidic biochips. They introduce practical solutions, based on rigorous optimization and formal models. The technical contributions presented in this book will not only shorten the product development cycle, but also accelerate the adoption and further development of modern flow-based microfluidic biochips, by facilitating the full exploitation of design complexities that are possible with current fabrication techniques. Offers the first practical problem formulation for automated control-layer design in flow-based microfluidic biochips and provides a systematic approach for solving this problem; Introduces a wash-optimization method for cross-contamination removal; Presents a design-for-testability (DfT) technique that can achieve 100...

  6. Wire Array Photovoltaics

    Science.gov (United States)

    Turner-Evans, Dan

    Over the past five years, the cost of solar panels has dropped drastically and, in concert, the number of installed modules has risen exponentially. However, solar electricity is still more than twice as expensive as electricity from a natural gas plant. Fortunately, wire array solar cells have emerged as a promising technology for further lowering the cost of solar. Si wire array solar cells are formed with a unique, low cost growth method and use 100 times less material than conventional Si cells. The wires can be embedded in a transparent, flexible polymer to create a free-standing array that can be rolled up for easy installation in a variety of form factors. Furthermore, by incorporating multijunctions into the wire morphology, higher efficiencies can be achieved while taking advantage of the unique defect relaxation pathways afforded by the 3D wire geometry. The work in this thesis shepherded Si wires from undoped arrays to flexible, functional large area devices and laid the groundwork for multijunction wire array cells. Fabrication techniques were developed to turn intrinsic Si wires into full p-n junctions and the wires were passivated with a-Si:H and a-SiNx:H. Single wire devices yielded open circuit voltages of 600 mV and efficiencies of 9%. The arrays were then embedded in a polymer and contacted with a transparent, flexible, Ni nanoparticle and Ag nanowire top contact. The contact connected >99% of the wires in parallel and yielded flexible, substrate free solar cells featuring hundreds of thousands of wires. Building on the success of the Si wire arrays, GaP was epitaxially grown on the material to create heterostructures for photoelectrochemistry. These cells were limited by low absorption in the GaP due to its indirect bandgap, and poor current collection due to a diffusion length of only 80 nm. However, GaAsP on SiGe offers a superior combination of materials, and wire architectures based on these semiconductors were investigated for multijunction

  7. A novel low-voltage low-power analogue VLSI implementation of neural networks with on-chip back-propagation learning

    Science.gov (United States)

    Carrasco, Manuel; Garde, Andres; Murillo, Pilar; Serrano, Luis

    2005-06-01

    In this paper a novel design and implementation of an analogue VLSI neural network based on a Multi-Layer Perceptron (MLP) with an on-chip Back-Propagation (BP) learning algorithm, suitable for solving classification problems, is described. In order to implement a general and programmable analogue architecture, the design has been carried out hierarchically: the net is divided into synapse blocks and neuron blocks, providing an easy method for analysis. These blocks consist of simple cells, mainly the activation functions (NAF), their derivatives (DNAF), multipliers and weight-update circuits. The analogue design is based on current-mode translinear techniques using MOS transistors working in the weak-inversion region in order to reduce both the supply voltage and the power consumption. Moreover, to minimize noise, offset and even-order distortion, the topologies are fully differential and balanced. The circuit, named ANNE (Analogue Neural NEt), has been prototyped and characterized as a proof of concept in CMOS AMI-0.5A technology, occupying a total area of 2.7 mm2. The chip includes two versions of neural nets with the on-chip BP learning algorithm: a 2-1 and a 2-2-1 implementation, respectively. The proposed nets have been experimentally tested using supply voltages from 2.5 V to 1.8 V, which is suitable for single-cell lithium-ion battery supply applications. Experimental results of both implementations included in ANNE exhibit good performance in solving classification problems. These results have been compared with other analogue VLSI implementations of neural nets published in the literature, demonstrating that our proposal is very efficient in terms of occupied area and power consumption.
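
    For reference, a software sketch of the 2-2-1 multilayer perceptron with on-line back-propagation that the chip realizes in analogue hardware is given below. The tanh activation, learning rate and XOR-style task are assumptions for illustration; whether the net separates the patterns depends on the random initialization, as it would on-chip.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-2-1 multilayer perceptron trained with plain on-line back-propagation.
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2,));   b2 = 0.0
lr = 0.1

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([-1., 1., 1., -1.])          # targets within tanh's output range

for epoch in range(5000):
    for x, t in zip(X, T):
        h = np.tanh(W1 @ x + b1)          # hidden layer (NAF)
        y = np.tanh(W2 @ h + b2)          # output neuron
        e = y - t
        dy = e * (1 - y**2)               # derivative of tanh (DNAF)
        dh = (W2 * dy) * (1 - h**2)
        W2 -= lr * dy * h;  b2 -= lr * dy
        W1 -= lr * np.outer(dh, x); b1 -= lr * dh

print(np.round([np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2) for x in X], 2))
```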

  8. Coupling an aVLSI neuromorphic vision chip to a neurotrophic model of synaptic plasticity: the development of topography.

    Science.gov (United States)

    Elliott, Terry; Kramer, Jörg

    2002-10-01

    We couple a previously studied, biologically inspired neurotrophic model of activity-dependent competitive synaptic plasticity and neuronal development to a neuromorphic retina chip. Using this system, we examine the development and refinement of a topographic mapping between an array of afferent neurons (the retinal ganglion cells) and an array of target neurons. We find that the plasticity model can indeed drive topographic refinement in the presence of afferent activity patterns generated by a real-world device. We examine the resilience of the developing system to the presence of high levels of noise by adjusting the spontaneous firing rate of the silicon neurons.
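
    The sketch below is a highly simplified, self-organizing-map-style stand-in for activity-dependent competitive refinement of a 1-D topographic map; it is not the neurotrophic model studied in the paper. The correlated afferent bumps, the winner-take-most neighbourhood rule and the array sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_aff, n_tgt = 20, 20                        # 1-D afferent (retinal) and target arrays
W = rng.random((n_tgt, n_aff))               # target x afferent weights, initially unrefined
W /= W.sum(axis=1, keepdims=True)

for step in range(10000):
    # Locally correlated afferent activity: a Gaussian bump at a random retinal position
    c = rng.uniform(0, n_aff - 1)
    a = np.exp(-0.5 * ((np.arange(n_aff) - c) / 2.0) ** 2)

    # Competition: the most strongly driven target neuron wins...
    winner = np.argmax(W @ a)
    # ...and cooperation: it and its neighbours move their weights toward the input pattern
    sigma, lr = 2.0, 0.05
    g = np.exp(-0.5 * ((np.arange(n_tgt) - winner) / sigma) ** 2)
    W += lr * g[:, None] * (a[None, :] - W)

# In a refined topographic map, neighbouring target cells prefer neighbouring afferents
print(np.argmax(W, axis=1))   # preferred afferent position of each target neuron
```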

  9. Si Wire-Array Solar Cells

    Science.gov (United States)

    Boettcher, Shannon

    2010-03-01

    Micron-scale Si wire arrays are three-dimensional photovoltaic absorbers that enable orthogonalization of light absorption and carrier collection and hence allow for the utilization of relatively impure Si in efficient solar cell designs. The wire arrays are grown by a vapor-liquid-solid-catalyzed process on a crystalline (111) Si wafer lithographically patterned with an array of metal catalyst particles. Following growth, such arrays can be embedded in polydimethylsiloxane (PDMS) and then peeled from the template growth substrate. The result is an unusual photovoltaic material: a flexible, bendable, wafer-thickness crystalline Si absorber. In this paper I will describe: 1. the growth of high-quality Si wires with controllable doping and the evaluation of their photovoltaic energy-conversion performance using a test electrolyte that forms a rectifying conformal semiconductor-liquid contact; 2. the observation of enhanced absorption in wire arrays exceeding the conventional light-trapping limits for planar Si cells of equivalent material thickness; and 3. single-wire and large-area solid-state Si wire-array solar cell results obtained to date, with directions for future cell designs based on optical and device physics. In collaboration with Michael Kelzenberg, Morgan Putnam, Joshua Spurgeon, Daniel Turner-Evans, Emily Warren, Nathan Lewis, and Harry Atwater, California Institute of Technology.
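
    As a rough illustration of the light-trapping comparison mentioned above, the sketch below contrasts single-pass absorption with the classical Lambertian (4n^2) light-trapping estimate for a thin Si absorber; the absorption coefficient, thickness and refractive index are generic textbook values, not measurements from this work.

```python
import numpy as np

# Single-pass absorption vs. the Lambertian (4n^2) light-trapping estimate for a thin
# crystalline-Si absorber. All numbers are generic illustrative values.
alpha = 1e3 * 1e2        # absorption coefficient near the band edge, ~1e3 cm^-1 -> m^-1
thickness = 3e-6         # absorber thickness comparable to a wire diameter, 3 um
n_si = 3.5               # refractive index of Si in the near infrared

A_single_pass = 1.0 - np.exp(-alpha * thickness)
x = alpha * thickness * 4 * n_si**2
A_lambertian = x / (x + 1.0)     # Tiedje-Yablonovitch style estimate

print(f"single pass: {A_single_pass:.2f}, Lambertian limit: {A_lambertian:.2f}")
```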

  10. A review of array radars

    Science.gov (United States)

    Brookner, E.

    1981-10-01

    Achievements in the area of array radars are illustrated by such activities as the operational deployment of the large high-power, high-range-resolution Cobra Dane; the operational deployment of two all-solid-state high-power, large UHF Pave Paws radars; and the development of the SAM multifunction Patriot radar. This paper reviews the following topics: array radars steered in azimuth and elevation by phase shifting (phase-phase steered arrays); arrays steered ±60 deg, limited-scan arrays, hemispherical-coverage and omnidirectional-coverage arrays; array radars steered electronically in only one dimension, either by frequency or by phase steering; and array radar antennas which use no electronic scanning but instead use array antennas to achieve low antenna sidelobes.
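
    The phase-phase steering mentioned above amounts to applying a linear phase gradient across the element grid; the sketch below computes the per-element phases for an assumed frequency, element spacing, array size and steering direction, using one common azimuth/elevation convention.

```python
import numpy as np

# Phase-phase steering of a planar array: per-element phase shifts that point the beam
# to a chosen azimuth/elevation. Frequency, spacing and array size are illustrative.
f = 3e9                          # 3 GHz (S-band)
lam = 3e8 / f
d = 0.5 * lam                    # half-wavelength element spacing
nx, ny = 16, 16                  # element grid

az, el = np.deg2rad(30.0), np.deg2rad(20.0)   # desired steering direction
u = np.cos(el) * np.sin(az)                   # direction cosines relative to array axes
v = np.sin(el)

m, n = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
k = 2 * np.pi / lam
phase = -k * d * (m * u + n * v)              # required phase at element (m, n) [rad]

print("phase gradient per element (deg):", np.rad2deg(-k * d * u), np.rad2deg(-k * d * v))
print("corner element phase (deg):", np.rad2deg(phase[nx - 1, ny - 1]) % 360)
```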

  11. Automated installation methods for photovoltaic arrays

    Science.gov (United States)

    Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.

    1982-11-01

    Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reducing these costs were investigated. Installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panels) and concluding with termination of the bus at the power-conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated, including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary hardware designs for the standard installation method, the automated/mechanized method, and a mix of standard and mechanized procedures were identified to determine which process most effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.

  12. Detector array and method

    International Nuclear Information System (INIS)

    Timothy, J.G.; Bybee, R.L.

    1978-01-01

    A detector array and method are described in which sets of electrode elements are provided. Each set consists of a number of linearly extending parallel electrodes. The sets of electrode elements are disposed at an angle (preferably orthogonal) with respect to one another so that the individual elements intersect and overlap individual elements of the other sets. Electrical insulation is provided between the overlapping elements. The detector array is exposed to a source of charged particles which, in accordance with one embodiment, comprise electrons derived from a microchannel array plate exposed to photons. Amplifier and discriminator means are provided for each individual electrode element. Detection means are provided to sense pulses on individual electrode elements in the sets, with coincidence of pulses on individual intersecting electrode elements being indicative of charged-particle impact at the intersection of the elements. Electronic readout means provide an indication of coincident events and the location where the charged particle or particles impacted. Display means are provided for generating appropriate displays representative of the intensity and location of charged particles impacting on the detector array
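
    A minimal sketch of the coincidence read-out described above: time-matched pulses on intersecting X and Y electrode elements locate the charged-particle impact, and repeated events build an intensity map. The coincidence window and the hit lists are illustrative assumptions.

```python
from collections import defaultdict

# Each hit is (electrode_index, timestamp) from a per-electrode amplifier/discriminator chain.
# A coincidence between an X electrode and a Y electrode within a short window locates the impact.
WINDOW = 50e-9   # coincidence window in seconds (assumed)

x_hits = [(3, 1.000e-6), (7, 2.500e-6)]      # (x electrode, time) -- illustrative pulses
y_hits = [(12, 1.020e-6), (5, 4.000e-6)]     # (y electrode, time)

events = []
for xi, tx in x_hits:
    for yi, ty in y_hits:
        if abs(tx - ty) <= WINDOW:
            events.append(((xi, yi), 0.5 * (tx + ty)))

# Accumulate an intensity map of impact locations
intensity = defaultdict(int)
for (xy, t) in events:
    intensity[xy] += 1

print(events)            # one coincidence -> one particle at intersection (3, 12)
print(dict(intensity))
```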

  13. Diode lasers and arrays

    International Nuclear Information System (INIS)

    Streifer, W.

    1988-01-01

    This paper discusses the principles of operation of III-V semiconductor diode lasers, the use of distributed feedback, and high power laser arrays. The semiconductor laser is a robust, miniature, versatile device, which directly converts electricity to light with very high efficiency. Applications to pumping solid-state lasers and to fiber optic and point-to-point communications are reviewed

  14. Array Theory and Nial

    DEFF Research Database (Denmark)

    Falster, Peter; Jenkins, Michael

    1999-01-01

    This report is the result of collaboration between the authors during the first 8 months of 1999 when M. Jenkins was visiting professor at DTU. The report documents the development of a tool for the investigation of array theory concepts and in particular presents various approaches to choose...

  15. Piezoelectric transducer array microspeaker

    KAUST Repository

    Carreno, Armando Arpys Arevalo; Conchouso Gonzalez, David; Castro, David; Kosel, Jürgen; Foulds, Ian G.

    2016-01-01

    contains 2n piezoelectric transducer membranes, where “n” is the bit number. Every element of the array has a circular structure. The membrane is made out of four layers: 300 nm of platinum for the bottom electrode, 250 nm of lead zirconate titanate (PZT

  16. VLSI Architecture and Design

    OpenAIRE

    Johnsson, Lennart

    1980-01-01

    Integrated circuit technology is rapidly approaching a state where feature sizes of one micron or less are tractable. Chip sizes are increasing slowly. These two developments result in considerably increased complexity in chip design. The physical characteristics of integrated circuit technology are also changing. The cost of communication will become dominant, making new architectures and algorithms both feasible and desirable. A large number of processors on a single chip will be possible....

  17. Transformational VLSI Design

    DEFF Research Database (Denmark)

    Rasmussen, Ole Steen

    constructed. It contains a semantical embedding of Ruby in Zermelo-Fraenkel set theory (ZF) implemented in the Isabelle theorem prover. A small subset of Ruby, called Pure Ruby, is embedded as a conservative extension of ZF and characterised by an inductive definition. Many useful structures used...

  18. Princeton VLSI Project.

    Science.gov (United States)

    1983-01-01

  19. Micromirror array nanostructures for anticounterfeiting applications

    Science.gov (United States)

    Lee, Robert A.

    2004-06-01

    The optical characteristics of pixellated passive micromirror arrays are derived and applied in the context of their use as reflective optically variable device (OVD) nanostructures for protecting documents from counterfeiting. The traditional design variables of foil-based diffractive OVDs are shown to map to a corresponding set of design parameters for reflective optical micromirror array (OMMA) devices. The greatly increased depth characteristics of micromirror-array OVDs provide an opportunity for directly printing the OVD microstructure onto the security document in line with the normal printing process. The micromirror-array OVD architecture therefore eliminates the need for hot-stamping foil as the carrier of the OVD information, thereby reducing costs. The origination of micromirror-array devices via a palette-based data format and a combination of electron-beam lithography and photolithography techniques is discussed via an artwork example and experimental tests. Finally, the application of the technology to the design of a generic class of devices, which have the interesting property of allowing both application- and customer-specific OVD image encoding and data encoding at the end-user stage of production, is described. Because of the end-user nature of the image- and data-encoding process, these devices are particularly well suited to ID document applications, and for this reason we refer to this new OVD concept as biometric OVD technology.

  20. Subband Energy Detection in Passive Array Processing

    National Research Council Canada - National Science Library

    Bono, Michael

    2000-01-01

    ...), which includes both Subband Peak Energy Detection (SPED) and Subband Extrema Energy Detection (SEED). It will be shown that SED has several performance advantages over Conventional Energy Detection...