WorldWideScience

Sample records for vlsi complexity results

  1. Lithography requirements in complex VLSI device fabrication

    International Nuclear Information System (INIS)

    Wilson, A.D.

    1985-01-01

Fabrication of complex very large scale integration (VLSI) circuits requires continual advances in lithography to satisfy: decreasing minimum linewidths, larger chip sizes, tighter linewidth and overlay control, increasing topography-to-linewidth ratios, higher yield demands, increased throughput, harsher device processing, lower lithography cost, and a larger part-number set with quick turn-around time. Where optical, electron beam, x-ray, and ion beam lithography can be judiciously applied to satisfy the complex VLSI circuit fabrication requirements is discussed, and those areas in need of major further advances are addressed. Emphasis is placed on advanced electron beam and storage ring x-ray lithography.

  2. VLSI design

    CERN Document Server

    Basu, D K

    2014-01-01

Very Large Scale Integrated Circuits (VLSI) design has moved from costly curiosity to an everyday necessity, especially with the proliferated applications of embedded computing devices in communications, entertainment and household gadgets. As a result, more and more knowledge of the various aspects of VLSI design technologies is becoming a necessity for engineering/technology students of various disciplines. With this goal in mind, the course material of this book has been designed to cover the various fundamental aspects of VLSI design, including: categorization of and comparison between the various technologies used for VLSI design; basic fabrication processes involved in VLSI design; design of MOS, CMOS and BiCMOS circuits used in VLSI; structured design of VLSI; introduction to VHDL for VLSI design; automated design for placement and routing of VLSI systems; and VLSI testing and testability. The various topics of the book have been discussed lucidly with analysis, when required, examples, figures and adequate analytical and the...

  3. VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2016-06-01

Convolutional encoding and data decoding are fundamental processes in convolutional error correction. One of the most popular error correction methods in decoding is the Viterbi algorithm, which is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity and configurability. In this research, we specifically propose a VLSI architecture for a configurable and low-complexity design of a hard-decision Viterbi decoding algorithm. The configurable and low-complexity design is achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs simply by changing the trace-back parameter value. Its computational process needs only N + 2 clock cycles of latency, where N is the number of trace-backs. Its configurability has been proven for N = 8, N = 16, N = 32 and N = 64. Furthermore, the proposed design was synthesized and evaluated on Xilinx and Altera FPGA target boards for area consumption and speed performance.
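The trellis search behind the abstract can be sketched compactly in software. Below is a minimal hard-decision Viterbi sketch, assuming a rate-1/2, constraint-length-3 code with generators (7, 5) in octal — an illustrative code choice, not the paper's generic architecture or its conditional adapter:

```python
G = (0b111, 0b101)  # generator polynomials (7, 5 in octal)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    """Convolutional encoding: one (out0, out1) pair per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state           # current bit + two previous bits
        out.append((parity(reg & G[0]), parity(reg & G[1])))
        state = reg >> 1
    return out

def viterbi_decode(pairs):
    """Hard-decision ML decoding: add-compare-select, then trace-back."""
    INF = float("inf")
    pm = [0, INF, INF, INF]              # path metrics; start in state 0
    history = []                         # survivor (predecessor) table
    for r0, r1 in pairs:
        new_pm, prev = [INF] * 4, [0] * 4
        for s in range(4):
            if pm[s] == INF:
                continue
            for b in (0, 1):             # hypothesised input bit
                reg = (b << 2) | s
                ns = reg >> 1            # next state
                m = (pm[s] + (parity(reg & G[0]) != r0)
                           + (parity(reg & G[1]) != r1))
                if m < new_pm[ns]:       # add-compare-select
                    new_pm[ns], prev[ns] = m, s
        pm = new_pm
        history.append(prev)
    s = pm.index(min(pm))                # trace back from the best state
    bits = []
    for prev in reversed(history):
        bits.append(s >> 1)              # input bit is the state's top bit
        s = prev[s]
    return bits[::-1]
```

The trace-back loop is the step whose depth the paper parameterises as N; here it simply runs over the whole received block.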

  4. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.

  5. VLSI design

    CERN Document Server

    Einspruch, Norman G

    1986-01-01

    VLSI Electronics Microstructure Science, Volume 14: VLSI Design presents a comprehensive exposition and assessment of the developments and trends in VLSI (Very Large Scale Integration) electronics. This volume covers topics that range from microscopic aspects of materials behavior and device performance to the comprehension of VLSI in systems applications. Each article is prepared by a recognized authority. The subjects discussed in this book include VLSI processor design methodology; the RISC (Reduced Instruction Set Computer); the VLSI testing program; silicon compilers for VLSI; and special

  6. Towards an Analogue Neuromorphic VLSI Instrument for the Sensing of Complex Odours

    Science.gov (United States)

    Ab Aziz, Muhammad Fazli; Harun, Fauzan Khairi Che; Covington, James A.; Gardner, Julian W.

    2011-09-01

Almost all electronic nose instruments reported today employ pattern recognition algorithms written in software and run on digital processors, e.g. microprocessors, microcontrollers or FPGAs. Conversely, in this paper we describe the analogue VLSI implementation of an electronic nose through the design of a neuromorphic olfactory chip. The modelling, design and fabrication of the chip have already been reported. Here a smart interface has been designed and characterised for this neuromorphic chip. Thus we can demonstrate the functionality of the analogue VLSI neuromorphic chip, producing differing principal-neuron firing patterns in response to real sensor data. Further work is directed towards integrating 9 separate neuromorphic chips to create a large neuronal network to solve more complex olfactory problems.

  7. VLSI Implementation of a Fixed-Complexity Soft-Output MIMO Detector for High-Speed Wireless

    Directory of Open Access Journals (Sweden)

    Di Wu

    2010-01-01

This paper presents a low-complexity MIMO symbol detector with close-to-maximum a posteriori performance for the emerging multiantenna-enhanced high-speed wireless communications. The VLSI implementation is based on a novel MIMO detection algorithm called Modified Fixed-Complexity Soft-Output (MFCSO) detection, which achieves a good trade-off between performance and implementation cost compared to the referenced prior art. By including a microcode-controlled channel preprocessing unit and a pipelined detection unit, it is flexible enough to cover several different standards and transmission schemes. The flexibility allows adaptive detection to minimize power consumption without degradation in throughput. The VLSI implementation of the detector is presented to show that real-time MIMO symbol detection of the 20 MHz bandwidth 3GPP LTE and 10 MHz WiMAX downlink physical channels is achievable at reasonable silicon cost.

  8. VLSI design

    CERN Document Server

    Chandrasetty, Vikram Arkalgud

    2011-01-01

    This book provides insight into the practical design of VLSI circuits. It is aimed at novice VLSI designers and other enthusiasts who would like to understand VLSI design flows. Coverage includes key concepts in CMOS digital design, design of DSP and communication blocks on FPGAs, ASIC front end and physical design, and analog and mixed signal design. The approach is designed to focus on practical implementation of key elements of the VLSI design process, in order to make the topic accessible to novices. The design concepts are demonstrated using software from Mathworks, Xilinx, Mentor Graphic

  9. VLSI in medicine

    CERN Document Server

    Einspruch, Norman G

    1989-01-01

VLSI Electronics Microstructure Science, Volume 17: VLSI in Medicine deals with the more important applications of VLSI in medical devices and instruments. This volume comprises 11 chapters. It begins with an article about medical electronics. The following three chapters cover diagnostic imaging, focusing on such medical devices as magnetic resonance imaging, the neurometric analyzer, and ultrasound. Chapters 5, 6, and 7 present the impact of VLSI in cardiology. The electrocardiograph, implantable cardiac pacemaker, and the use of VLSI in Holter monitoring are detailed in these chapters. The

  10. VLSI electronics microstructure science

    CERN Document Server

    1982-01-01

VLSI Electronics: Microstructure Science, Volume 4 reviews trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development. This book discusses the silicon-on-insulator for VLSI and VHSIC, X-ray lithography, and transient response of electron transport in GaAs using the Monte Carlo method. The technology and manufacturing of high-density magnetic-bubble memories, metallic superlattices, challenge of education for VLSI, and impact of VLSI on medical signal processing are also elaborated. This text likewise covers the impact of VLSI t

  11. VLSI electronics microstructure science

    CERN Document Server

    1981-01-01

VLSI Electronics: Microstructure Science, Volume 3 evaluates trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development. This book discusses the impact of VLSI on computer architectures; VLSI design and design aid requirements; and design, fabrication, and performance of CCD imagers. The approaches, potential, and progress of ultra-high-speed GaAs VLSI; computer modeling of MOSFETs; and numerical physics of micron-length and submicron-length semiconductor devices are also elaborated. This text likewise covers the optical linewi

  12. Design Implementation and Testing of a VLSI High Performance ASIC for Extracting the Phase of a Complex Signal

    National Research Council Canada - National Science Library

    Altmeyer, Ronald

    2002-01-01

This thesis documents the research, circuit design, and simulation testing of a VLSI ASIC which extracts phase angle information from a complex sampled signal using the arctangent relationship phi = tan^-1(Q/I)...
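The phase-extraction relationship is easy to model in software; a real ASIC would typically realise it with a CORDIC unit or lookup tables rather than a floating-point call. An illustrative sketch (the function name is ours, not the thesis's):

```python
import math

def phase(i, q):
    """Extract the phase angle of a complex sample (I, Q).

    math.atan2 resolves the quadrant ambiguity that a plain
    tan^-1(Q/I) leaves open, and handles I == 0 safely.
    """
    return math.atan2(q, i)  # radians in (-pi, pi]
```

For example, the sample I = 1, Q = 1 yields pi/4 (45 degrees).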

  13. Multi-valued LSI/VLSI logic design

    Science.gov (United States)

    Santrakul, K.

A procedure for synthesizing any large, complex logic system, such as LSI and VLSI integrated circuits, is described. This scheme uses Multi-Valued Multiplexers (MVMUX) as the basic building blocks and the tree as the structure of the circuit realization. Simple built-in test circuits included in the network (the main circuit) provide thorough functional checking of the network at any time. In brief, four major contributions are made: (1) a multi-valued Algorithmic State Machine (ASM) chart for describing LSI/VLSI behavior; (2) a tree-structured multi-valued multiplexer network which can be obtained directly from an ASM chart; (3) a heuristic tree-structured synthesis method for realizing any combinational logic with minimal or nearly minimal MVMUX; and (4) a hierarchical design of LSI/VLSI with built-in parallel testing capability.
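The mux-tree idea can be illustrated with a toy sketch: a 4-valued multiplexer selects one of four inputs with a single control digit, and a two-level tree of such muxes realises any two-variable 4-valued function. The mod-4 sum here is our illustrative function, not one from the thesis:

```python
def mvmux(sel, data):
    """4-valued multiplexer: the control digit selects one of 4 inputs."""
    return data[sel]

# Realise f(x, y) = (x + y) mod 4 as a two-level MVMUX tree: four leaf
# muxes select on y (one per value of x), and the root mux selects on x.
ROWS = [[(x + y) % 4 for y in range(4)] for x in range(4)]  # truth table rows

def f_tree(x, y):
    leaf_outputs = [mvmux(y, row) for row in ROWS]  # level 1: select on y
    return mvmux(x, leaf_outputs)                   # level 2: select on x
```

Each leaf mux holds one row of the function's truth table, which is how a tree realisation can be read directly off an ASM chart or table.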

  14. A new VLSI complex integer multiplier which uses a quadratic-polynomial residue system with Fermat numbers

    Science.gov (United States)

    Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.

    1987-01-01

A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.
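The two-multiplication property can be sketched with a small quadratic-residue mapping. The modulus F3 = 257 and the root j = 16 (since 16 * 16 = 256 = -1 mod 257) are our illustrative choices; the paper's QFNS works with quadratic polynomials over Fermat numbers rather than this plain residue mapping:

```python
# Complex integer multiplication with only two integer multiplications,
# via a quadratic residue mapping modulo the Fermat prime F3 = 257.
# j = 16 plays the role of sqrt(-1) because 16*16 = 256 = -1 (mod 257).
M, J = 257, 16
INV2, INV2J = 129, 249      # 2*129 = 1 (mod 257), 32*249 = 1 (mod 257)

def to_qrns(a, b):
    """Map a + bi to the component pair (a + jb, a - jb) mod M."""
    return ((a + J * b) % M, (a - J * b) % M)

def from_qrns(z1, z2):
    """Recover (a, b) from the component pair."""
    a = (z1 + z2) * INV2 % M
    b = (z1 - z2) * INV2J % M
    return a, b

def cmul(x, y):
    """Complex multiply: two integer multiplications instead of four."""
    x1, x2 = to_qrns(*x)
    y1, y2 = to_qrns(*y)
    return from_qrns(x1 * y1 % M, x2 * y2 % M)  # the only two multiplies
```

For example, (3 + 4i)(1 + 2i) = -5 + 10i appears modulo 257 as (252, 10). The mappings themselves cost only additions and multiplications by the constant j = 16, which is a 4-bit shift in hardware.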

  15. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855) Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  16. Las Vegas is better than determinism in VLSI and distributed computing

    DEFF Research Database (Denmark)

    Mehlhorn, Kurt; Schmidt, Erik Meineche

    1982-01-01

In this paper we describe a new method for proving lower bounds on the complexity of VLSI computations and, more generally, distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (ac...

  17. Plasma processing for VLSI

    CERN Document Server

    Einspruch, Norman G

    1984-01-01

VLSI Electronics: Microstructure Science, Volume 8: Plasma Processing for VLSI (Very Large Scale Integration) discusses the utilization of plasmas for general semiconductor processing. It also includes expositions on advanced deposition of materials for metallization, lithographic methods that use plasmas as exposure sources and for multiple resist patterning, and device structures made possible by anisotropic etching. This volume is divided into four sections. It begins with the history of plasma processing, a discussion of some of the early developments and trends for VLSI. The second section

  18. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. Aiming to support 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7 % of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
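For reference, HEVC's half-sample luma interpolation is an 8-tap FIR filter; the tap values below are those of the HEVC standard, while the paper's 8-pixel-per-cycle hardware organisation is not modelled here. A one-sample software sketch:

```python
# HEVC half-sample luma interpolation: an 8-tap FIR filter whose
# coefficients (from the HEVC standard) sum to 64, so the result is
# renormalised by a 6-bit shift with rounding.
HALF_PEL = (-1, 4, -11, 40, 40, -11, 4, -1)

def interpolate_half(samples, x):
    """Half-pel value between samples[x] and samples[x+1]."""
    acc = sum(c * samples[x - 3 + k] for k, c in enumerate(HALF_PEL))
    return (acc + 32) >> 6              # round and divide by 64
```

A flat signal passes through unchanged, and a linear ramp interpolates to the (rounded) midpoint, which is a quick sanity check on the taps.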

  19. Application of evolutionary algorithms for multi-objective optimization in VLSI and embedded systems

    CERN Document Server

    2015-01-01

    This book describes how evolutionary algorithms (EA), including genetic algorithms (GA) and particle swarm optimization (PSO) can be utilized for solving multi-objective optimization problems in the area of embedded and VLSI system design. Many complex engineering optimization problems can be modelled as multi-objective formulations. This book provides an introduction to multi-objective optimization using meta-heuristic algorithms, GA and PSO, and how they can be applied to problems like hardware/software partitioning in embedded systems, circuit partitioning in VLSI, design of operational amplifiers in analog VLSI, design space exploration in high-level synthesis, delay fault testing in VLSI testing, and scheduling in heterogeneous distributed systems. It is shown how, in each case, the various aspects of the EA, namely its representation, and operators like crossover, mutation, etc. can be separately formulated to solve these problems. This book is intended for design engineers and researchers in the field ...

  20. Lithography for VLSI

    CERN Document Server

    Einspruch, Norman G

    1987-01-01

VLSI Electronics Microstructure Science, Volume 16: Lithography for VLSI treats special topics from each branch of lithography, and also contains general discussion of some lithographic methods. This volume contains 8 chapters that discuss the various aspects of lithography. Chapters 1 and 2 are devoted to optical lithography. Chapter 3 covers electron lithography in general, and Chapter 4 discusses electron resist exposure modeling. Chapter 5 presents the fundamentals of ion-beam lithography. Mask/wafer alignment for x-ray proximity printing and for optical lithography is tackled in Chapter 6.

  1. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  2. Surface and interface effects in VLSI

    CERN Document Server

    Einspruch, Norman G

    1985-01-01

    VLSI Electronics Microstructure Science, Volume 10: Surface and Interface Effects in VLSI provides the advances made in the science of semiconductor surface and interface as they relate to electronics. This volume aims to provide a better understanding and control of surface and interface related properties. The book begins with an introductory chapter on the intimate link between interfaces and devices. The book is then divided into two parts. The first part covers the chemical and geometric structures of prototypical VLSI interfaces. Subjects detailed include, the technologically most import

  3. First results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Anzivino, G.; Horisberger, R.; Hubbeling, L.; Hyams, B.; Parker, S.; Breakstone, A.; Litke, A.M.; Walker, J.T.; Bingefors, N.

    1986-01-01

    A 256-strip silicon detector with 25 μm strip pitch, connected to two 128-channel NMOS VLSI chips (Microplex), has been tested using straight-through tracks from a ruthenium beta source. The readout channels have a pitch of 47.5 μm. A single multiplexed output provides voltages proportional to the integrated charge from each strip. The most probable signal height from the beta traversals is approximately 14 times the rms noise in any single channel. (orig.)

  4. Artificial immune system algorithm in VLSI circuit configuration

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving constraint optimization problems, anomaly detection, and pattern recognition. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compare the artificial immune system algorithm (HNN-3SATAIS) with a brute-force algorithm incorporated with a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as the platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.

  5. Electro-optic techniques for VLSI interconnect

    Science.gov (United States)

    Neff, J. A.

    1985-03-01

A major limitation to achieving significant speed increases in very large scale integration (VLSI) lies in the metallic interconnects. They are costly not only from the charge-transport standpoint but also from capacitive loading effects. The Defense Advanced Research Projects Agency, in pursuit of the fifth-generation supercomputer, is investigating alternatives to VLSI metallic interconnects, especially the use of optical techniques to transport information either inter- or intra-chip. As the on-chip performance of VLSI continues to improve via the scaling down of logic elements, the problems associated with transferring data off and onto the chip become more severe. The use of optical carriers to transfer information within the computer is very appealing from several viewpoints. Besides the potential for gigabit propagation rates, the conversion from electronics to optics conveniently provides a decoupling of the various circuits from one another. Significant gains will also be realized in reducing crosstalk between the metallic routings, and the interconnects need no longer be constrained to the plane of a thin film on the VLSI chip. In addition, optics can offer increased programming flexibility for restructuring the interconnect network.

  6. VLSI implementations for image communications

    CERN Document Server

    Pirsch, P

    1993-01-01

    The past few years have seen a rapid growth in image processing and image communication technologies. New video services and multimedia applications are continuously being designed. Essential for all these applications are image and video compression techniques. The purpose of this book is to report on recent advances in VLSI architectures and their implementation for video signal processing applications with emphasis on video coding for bit rate reduction. Efficient VLSI implementation for video signal processing spans a broad range of disciplines involving algorithms, architectures, circuits

  7. Technology computer aided design simulation for VLSI MOSFET

    CERN Document Server

    Sarkar, Chandan Kumar

    2013-01-01

    Responding to recent developments and a growing VLSI circuit manufacturing market, Technology Computer Aided Design: Simulation for VLSI MOSFET examines advanced MOSFET processes and devices through TCAD numerical simulations. The book provides a balanced summary of TCAD and MOSFET basic concepts, equations, physics, and new technologies related to TCAD and MOSFET. A firm grasp of these concepts allows for the design of better models, thus streamlining the design process, saving time and money. This book places emphasis on the importance of modeling and simulations of VLSI MOS transistors and

  8. Wavelength-encoded OCDMA system using opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

We propose and experimentally demonstrate a 2.5 Gbit/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components, and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  9. Wavelength-encoded OCDMA system using opto-VLSI processors

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

We propose and experimentally demonstrate a 2.5 Gbit/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components, and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  10. Parallel VLSI Architecture

    Science.gov (United States)

    Truong, T. K.; Reed, I.; Yeh, C.; Shao, H.

    1985-01-01

Fermat number transformation convolves two digital data sequences. In very-large-scale integration (VLSI) applications, such as image and radar signal processing, X-ray reconstruction, and spectrum shaping, linear convolution of two digital data sequences of arbitrary lengths is accomplished using the Fermat number transform (FNT).
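The idea can be sketched with a tiny number-theoretic transform modulo the Fermat prime F3 = 257; the length, root, and inverse constants below are our own illustrative choices. The appeal of the FNT in hardware is that multiplication by the transform's powers of 2 reduces to bit shifts:

```python
# Cyclic convolution via a number-theoretic transform modulo the
# Fermat prime F3 = 257. The length-4 transform uses root w = 16,
# which has order 4 because 16*16 = -1 (mod 257).
M, W, W_INV, N_INV = 257, 16, 241, 193  # 16*241 = 1, 4*193 = 1 (mod 257)

def ntt(x, root):
    """Naive O(n^2) number-theoretic transform of x with the given root."""
    n = len(x)
    return [sum(x[j] * pow(root, i * j, M) for j in range(n)) % M
            for i in range(n)]

def cyclic_convolution(a, b):
    """Convolve mod 257: transform, multiply pointwise, invert."""
    A, B = ntt(a, W), ntt(b, W)
    C = [x * y % M for x, y in zip(A, B)]
    return [c * N_INV % M for c in ntt(C, W_INV)]
```

Zero-padding the inputs to twice their length turns this cyclic convolution into the linear convolution the record describes.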

  11. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  12. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    International Nuclear Information System (INIS)

    Jian Haifang; Shi Yin

    2009-01-01

The K-best detector is considered a promising technique in MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on a novel expansion scheme which can predetermine the branches' ascending order by their local distances. A distributed sorter then sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can reduce fundamental operations by approximately 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and consequently lowers the demand on hardware resources significantly. Simulation results prove that the proposed architecture achieves performance very similar to conventional K-best detectors. Hence, it is an efficient solution for the K-best detector's VLSI implementation in high-throughput MIMO-OFDM systems.
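The breadth-first K-best principle the paper builds on can be sketched in a few lines. This sketch expands every branch and sorts globally, i.e. it omits exactly the partial-expansion and distributed-sorting optimisations the paper contributes; the alphabet, channel matrix and K are illustrative:

```python
# Breadth-first K-best detection for a real-valued triangular system
# y = R*s + n (R upper-triangular, symbols from a small alphabet).
# At each tree level, every surviving path is expanded to all children
# and the K candidates with the smallest accumulated metric are kept.
ALPHABET = (-3, -1, 1, 3)   # one real dimension of 16-QAM

def k_best(R, y, K):
    """Return the best symbol vector found by K-best tree search."""
    n = len(y)
    paths = [((), 0.0)]                  # (partial symbols, metric)
    for level in range(n - 1, -1, -1):   # detect s[n-1] first
        expanded = []
        for symbols, metric in paths:
            for s in ALPHABET:
                cand = (s,) + symbols    # cand holds s[level:]
                resid = y[level] - sum(
                    R[level][level + k] * cand[k] for k in range(len(cand)))
                expanded.append((cand, metric + resid * resid))
        paths = sorted(expanded, key=lambda p: p[1])[:K]  # keep K best
    return paths[0][0]
```

The sort-and-truncate step is the hardware bottleneck the paper's distributed sorter pipelines, and its predetermined branch ordering is what lets the MCUs skip most of the expansion done here.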

  13. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    Energy Technology Data Exchange (ETDEWEB)

    Jian Haifang; Shi Yin, E-mail: jhf@semi.ac.c [Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China)

    2009-07-15

The K-best detector is considered a promising technique in MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on a novel expansion scheme which can predetermine the branches' ascending order by their local distances. A distributed sorter then sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can reduce fundamental operations by approximately 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and consequently lowers the demand on hardware resources significantly. Simulation results prove that the proposed architecture achieves performance very similar to conventional K-best detectors. Hence, it is an efficient solution for the K-best detector's VLSI implementation in high-throughput MIMO-OFDM systems.

  14. Using Software Technology to Specify Abstract Interfaces in VLSI Design.

    Science.gov (United States)

    1985-01-01

with the complexity levels inherent in VLSI design, in that they can capitalize on their foundations in discrete mathematics and the theory of...basis, rather than globally. Such a partitioning of module semantics makes the specification easier to construct and verify intellectually; it also...access function definitions. A standard language improves executability characteristics by capitalizing on portable, optimized system software developed

  15. Compact MOSFET models for VLSI design

    CERN Document Server

    Bhattacharyya, A B

    2009-01-01

Practicing designers, students, and educators in the semiconductor field face an ever expanding portfolio of MOSFET models. In Compact MOSFET Models for VLSI Design, A.B. Bhattacharyya presents a unified perspective on the topic, allowing the practitioner to view and interpret device phenomena concurrently using different modeling strategies. Readers will learn to link device physics with model parameters, helping to close the gap between device understanding and its use for optimal circuit performance. Bhattacharyya also lays bare the core physical concepts that will drive the future of VLSI.

  16. Opto-VLSI-based reconfigurable free-space optical interconnects architecture

    DEFF Research Database (Denmark)

    Aljada, Muhsen; Alameh, Kamal; Chung, Il-Sug

    2007-01-01

is the Opto-VLSI processor which can be driven by digital phase steering and multicasting holograms that reconfigure the optical interconnects between the input and output ports. The optical interconnects architecture is experimentally demonstrated at 2.5 Gbps using a high-speed 1×3 VCSEL array and 1×3 photoreceiver array in conjunction with two 1×4096-pixel Opto-VLSI processors. The minimisation of crosstalk between the output ports is achieved by appropriately aligning the VCSEL and PD elements with respect to the Opto-VLSI processors and driving the latter with optimal steering phase holograms.

  17. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  18. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    National Research Council Canada - National Science Library

    Horiuchi, Timothy K; Krishnaprasad, P. S

    2007-01-01

    .... This includes multiple efforts related to a VLSI-based echolocation system being developed in one of our laboratories from algorithm development, bat flight data analysis, to VLSI circuit design...

  19. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  20. VLSI signal processing technology

    CERN Document Server

    Swartzlander, Earl

    1994-01-01

This book is the first in a set of forthcoming books focused on state-of-the-art developments in the VLSI Signal Processing area. It is a response to the tremendous research activities taking place in that field. These activities have been driven by two factors: the dramatic increase in demand for high speed signal processing, especially in consumer electronics, and the evolving microelectronic technologies. The available technology has always been one of the main factors in determining algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on: • Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in...

  1. Nano lasers in photonic VLSI

    NARCIS (Netherlands)

    Hill, M.T.; Oei, Y.S.; Smit, M.K.

    2007-01-01

    We examine the use of micro and nano lasers to form digital photonic VLSI building blocks. Problems such as isolation and cascading of building blocks are addressed, and the potential of future nano lasers explored.

  2. Ant System-Corner Insertion Sequence: An Efficient VLSI Hard Module Placer

    Directory of Open Access Journals (Sweden)

    HOO, C.-S.

    2013-02-01

Full Text Available Placement is important in VLSI physical design as it determines the time-to-market and the chip's reliability. In this paper, a new floorplan representation coupled with the Ant System, namely the Corner Insertion Sequence (CIS), is proposed. Although CIS's search complexity is smaller than that of the state-of-the-art representation Corner Sequence (CS), CIS adopts a preset boundary on the placement, leading to a search bound similar to that of CS. This enables previously unutilized corner edges to become viable. Also, eliminating the redundancy of the CS representation gives CIS its lower search complexity. Experimental results on Microelectronics Center of North Carolina (MCNC) hard-block benchmark circuits show that the proposed algorithm performs comparably in terms of area yet is at least two times faster than CS.

  3. VLSI and system architecture-the new development of system 5G

    Energy Technology Data Exchange (ETDEWEB)

    Sakamura, K.; Sekino, A.; Kodaka, T.; Uehara, T.; Aiso, H.

    1982-01-01

A research and development proposal is presented for VLSI CAD systems and for a hardware environment called system 5G on which the VLSI CAD systems run. The proposed CAD systems use a hierarchically organized design language to enable design of anything from basic architectures of VLSI to VLSI mask patterns in a uniform manner. The CAD systems will eventually become intelligent CAD systems that acquire design knowledge and perform automatic design of VLSI chips when the characteristic requirements of a VLSI chip are given. System 5G will consist of superinference machines and the 5G communication network. The superinference machine will be built on a functionally distributed architecture connecting inference machines and relational database machines via a high-speed local network. The transfer rate of the local network will be 100 Mbps at the first stage of the project and will be improved to 1 Gbps. Remote access to the superinference machine will be possible through the 5G communication network. Access to system 5G will use the 5G network architecture protocol. Users will access system 5G through standardized 5G personal computers and 5G personal logic programming stations: very highly intelligent terminals providing an instruction set that supports predicate logic and input/output facilities for audio and graphical information.

  4. Initial beam test results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Adolphsen, C.; Litke, A.; Schwarz, A.

    1986-01-01

Silicon detectors with 256 strips at a pitch of 25 μm, each connected to two 128-channel NMOS VLSI chips (Microplex), have been tested in relativistic charged-particle beams at CERN and at the Stanford Linear Accelerator Center. The readout chips have an input channel pitch of 47.5 μm and a single multiplexed output which provides voltages proportional to the integrated charge from each strip. The most probable signal height from minimum-ionizing tracks was 15 times the rms noise in any single channel. Two-track traversals with a separation of 100 μm were cleanly resolved.

  5. VLSI scaling methods and low power CMOS buffer circuit

    International Nuclear Information System (INIS)

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

Device scaling is an important part of very large scale integration (VLSI) design, underpinning the success of the VLSI industry by yielding denser and faster integration of devices. As technology nodes move into the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance as devices are scaled. In this paper, different scaling methods are studied first. These methods are used to identify their effects on the power dissipation and propagation delay of a CMOS buffer circuit. To mitigate power dissipation in scaled devices, we propose a reliable leakage-reduction low-power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are obtained with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to demonstrate the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which improves circuit reliability. (semiconductor integrated circuits)
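
    The balance between scaling, supply voltage, and power dissipation described above follows the standard first-order CMOS power model, P = αCV²f + V·I_leak. The sketch below is a generic illustration of that model; the parameter values are made-up assumptions, not figures from the paper:

    ```python
    # First-order CMOS power model: dynamic switching power plus static leakage.
    #   P_dyn  = alpha * C_load * Vdd^2 * f   (alpha: switching activity factor)
    #   P_leak = Vdd * I_leak
    def total_power(alpha, c_load, vdd, freq, i_leak):
        """Return (dynamic, leakage, total) power in watts."""
        p_dyn = alpha * c_load * vdd ** 2 * freq
        p_leak = vdd * i_leak
        return p_dyn, p_leak, p_dyn + p_leak

    # Illustrative numbers only: 0.1 activity, 10 fF load, 1.0 V, 1 GHz, 10 nA leakage.
    p_dyn, p_leak, p_tot = total_power(0.1, 10e-15, 1.0, 1e9, 10e-9)
    ```

    Because dynamic power scales with Vdd², lowering the supply voltage is the single most effective lever, which is why scaled technologies trade speed and leakage against supply reduction.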

  6. Handbook of VLSI chip design and expert systems

    CERN Document Server

    Schwarz, A F

    1993-01-01

    Handbook of VLSI Chip Design and Expert Systems provides information pertinent to the fundamental aspects of expert systems, which provides a knowledge-based approach to problem solving. This book discusses the use of expert systems in every possible subtask of VLSI chip design as well as in the interrelations between the subtasks.Organized into nine chapters, this book begins with an overview of design automation, which can be identified as Computer-Aided Design of Circuits and Systems (CADCAS). This text then presents the progress in artificial intelligence, with emphasis on expert systems.

  7. VLSI micro- and nanophotonics science, technology, and applications

    CERN Document Server

    Lee, El-Hang; Razeghi, Manijeh; Jagadish, Chennupati

    2011-01-01

Addressing the growing demand for larger capacity in information technology, VLSI Micro- and Nanophotonics: Science, Technology, and Applications explores issues of science and technology of micro/nano-scale photonics and integration for broad-scale and chip-scale Very Large Scale Integration photonics. This book is a game-changer in the sense that it is quite possibly the first to focus on "VLSI Photonics". Very little effort has been made to develop integration technologies for micro/nanoscale photonic devices and applications, so this reference is an important and necessary early-stage pe

  8. Pursuit, Avoidance, and Cohesion in Flight: Multi-Purpose Control Laws and Neuromorphic VLSI

    Science.gov (United States)

    2010-10-01

spatial navigation in mammals. We have designed, fabricated, and are now testing a neuromorphic VLSI chip that implements a spike-based, attractor... (contract 070402-7705, grant FA9550-07-1-0446) ...implementations (custom neuromorphic VLSI and robotics) we will apply important practical constraints that can lead to deeper insight into how and why efficient

  9. DPL/Daedalus design environment (for VLSI)

    Energy Technology Data Exchange (ETDEWEB)

    Batali, J; Mayle, N; Shrobe, H; Sussman, G; Weise, D

    1981-01-01

    The DPL/Daedalus design environment is an interactive VLSI design system implemented at the MIT Artificial Intelligence Laboratory. The system consists of several components: a layout language called DPL (for design procedure language); an interactive graphics facility (Daedalus); and several special purpose design procedures for constructing complex artifacts such as PLAs and microprocessor data paths. Coordinating all of these is a generalized property list data base which contains both the data representing circuits and the procedures for constructing them. The authors first review the nature of the data base and then turn to DPL and Daedalus, the two most common ways of entering information into the data base. The next two sections review the specialized procedures for constructing PLAs and data paths; the final section describes a tool for hierarchical node extraction. 5 references.

  10. VLSI 'smart' I/O module development

    Science.gov (United States)

    Kirk, Dan

    The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.

  11. Design of a Low-Power VLSI Macrocell for Nonlinear Adaptive Video Noise Reduction

    Directory of Open Access Journals (Sweden)

    Sergio Saponara

    2004-09-01

Full Text Available A VLSI macrocell for edge-preserving video noise reduction is proposed in this paper. It is based on a nonlinear rational filter enhanced by a noise estimator for blind and dynamic adaptation of the filtering parameters to the input signal statistics. The VLSI filter features a modular architecture allowing the extension of both mask size and filtering directions. Both spatial and spatiotemporal algorithms are supported. Simulation results with monochrome test videos prove its efficiency for many noise distributions, with PSNR improvements of up to 3.8 dB with respect to a nonadaptive solution. The VLSI macrocell has been realized in a 0.18 μm CMOS technology using a standard-cells library; it allows for real-time processing of the main video formats, up to 30 fps (frames per second) at 4CIF, with a power consumption on the order of a few mW.

  12. Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1985-01-01

    The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
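
    As a hedged illustration of a DFT computed over the ring of integers modulo a Fermat number, the sketch below implements a naive number-theoretic transform modulo F_4 = 2^16 + 1 = 65537 (a prime, for which 3 is a primitive root). The transform length and data are arbitrary choices, and this O(N²) formulation ignores the fast butterfly structure a VLSI design would actually use:

    ```python
    # Number-theoretic transform over Z mod F_4 = 65537. Because the modulus is
    # prime and 3 is a primitive root, w = 3^((P-1)//N) mod P has order N for
    # any N dividing 65536, giving an exact, error-free analogue of the DFT.
    P = 65537

    def ntt(x, w, p=P):
        n = len(x)
        return [sum(x[j] * pow(w, i * j, p) for j in range(n)) % p for i in range(n)]

    def intt(y, w, p=P):
        n = len(y)
        n_inv = pow(n, -1, p)       # modular inverse of the length
        w_inv = pow(w, -1, p)       # modular inverse of the root of unity
        return [n_inv * sum(y[j] * pow(w_inv, i * j, p) for j in range(n)) % p
                for i in range(n)]

    N = 8
    w = pow(3, (P - 1) // N, P)     # an N-th root of unity mod F_4
    x = [5, 0, 1, 2, 0, 0, 3, 4]
    assert intt(ntt(x, w), w) == x  # exact round trip: no floating-point error
    ```

    Unlike the floating-point DFT, every intermediate value here is an integer, which is the property that makes such transforms attractive for exact convolution in hardware.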

  13. vPELS: An E-Learning Social Environment for VLSI Design with Content Security Using DRM

    Science.gov (United States)

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2014-01-01

This article provides a proposal for a personal e-learning system (vPELS [where "v" stands for VLSI: very large scale integrated circuit]) architecture in the context of a social network environment for VLSI design. The main objective of vPELS is to develop individual skills on a specific subject--say, VLSI--and share resources with peers.…

  14. The GLUEchip: A custom VLSI chip for detectors readout and associative memories circuits

    International Nuclear Information System (INIS)

    Amendolia, S.R.; Galeotti, S.; Morsani, F.; Passuello, D.; Ristori, L.; Turini, N.

    1993-01-01

An associative memory full-custom VLSI chip for pattern recognition, the AMchip, has been designed and tested in the past years; it contains 128 patterns of 60 bits each. To expand the pattern capacity of an associative memory bank, the custom VLSI GLUEchip has been developed. The GLUEchip allows the interconnection of up to 16 AMchips or up to 16 GLUEchips: the resulting tree-like structure works like a single AMchip with an output pipelined structure and a pattern capacity increased by a factor of 16 for each GLUEchip used.

  15. ORGANIZATION OF GRAPHIC INFORMATION FOR VIEWING THE MULTILAYER VLSI TOPOLOGY

    Directory of Open Access Journals (Sweden)

    V. I. Romanov

    2016-01-01

Full Text Available One of the possible ways to reorganize the graphical information describing the set of topology layers of a modern VLSI circuit is presented. The method is aimed at use under the constraint of limited video card memory. An additional effect, providing high performance in forming the multi-image layout of the multilayer topology of a modern VLSI circuit, is achieved by preloading the required textures in an auxiliary background process.

  16. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Full Text Available Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in the use of computational resources on selected FPGA development platform. This is because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as main focus, a memory efficient VLSI architecture for real-time motion detection and its implementation on FPGA platform is presented in this paper. This is accomplished by proposing a new memory efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory efficient architecture along with proper input/output interfaces is implemented on Xilinx ML510 (Virtex-5 FX130T FPGA development platform and is capable of operating at 154.55 MHz clock frequency. Memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering based motion detection architecture. The new memory efficient system robustly and automatically detects motion in real-world scenarios (both for the static backgrounds and the pseudo-stationary backgrounds in real-time for standard PAL (720 × 576 size color video.
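
    The paper's clustering-based background model is more elaborate than can be shown here; as a hedged illustration of the family of schemes it belongs to, the sketch below implements plain frame differencing, the simplest motion-detection technique. The threshold and frame shape are assumptions for the example:

    ```python
    # Minimal frame-differencing motion detector: a pixel is flagged as moving
    # when its grayscale value changes by more than a fixed threshold between
    # consecutive frames. Real systems (like the paper's) maintain a richer
    # background model to handle pseudo-stationary backgrounds.
    def detect_motion(prev_frame, curr_frame, threshold=25):
        """Frames are row-major lists of 8-bit grayscale rows; returns a
        binary motion mask of the same shape."""
        return [[1 if abs(c - p) > threshold else 0
                 for p, c in zip(prow, crow)]
                for prow, crow in zip(prev_frame, curr_frame)]

    prev = [[10, 10, 10], [10, 10, 10]]
    curr = [[10, 200, 10], [10, 10, 90]]
    mask = detect_motion(prev, curr)   # marks the two changed pixels
    ```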

  17. A multi coding technique to reduce transition activity in VLSI circuits

    International Nuclear Information System (INIS)

    Vithyalakshmi, N.; Rajaram, M.

    2014-01-01

Advances in VLSI technology have enabled the implementation of complex digital circuits in a single chip, reducing system size and power consumption. In deep submicron low power CMOS VLSI design, the main cause of energy dissipation is the charging and discharging of internal node capacitances due to transition activity. Transition activity is one of the major factors that affect dynamic power dissipation. This paper analyzes power reduction at the algorithm and logic circuit levels. At the algorithm level, the key to reducing power dissipation is minimizing transition activity, which is achieved by introducing a data coding technique. A novel multi coding technique is introduced to reduce transition activity by up to 52.3% on the bus lines, which automatically reduces the dynamic power dissipation. In addition, 1-bit full adders are introduced in the Hamming distance estimator block, which reduces the device count. This coding method is implemented using Verilog HDL. The overall performance is analyzed using Modelsim and Xilinx tools. In total, a 38.2% power saving is achieved compared to other existing methods. (semiconductor technology)
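
    The abstract does not spell out the multi coding scheme itself. As a hedged illustration of the same principle -- spending an extra bus line (and a Hamming-distance estimator) to cut bus transitions -- the sketch below implements classic bus-invert coding, a well-known transition-reduction code rather than the paper's method:

    ```python
    def hamming(a, b):
        """Number of differing bits between two equal-width words."""
        return bin(a ^ b).count("1")

    def bus_invert_encode(words, width=8):
        """Bus-invert coding: transmit the complemented word (plus a set
        invert flag) whenever that lowers the transition count on the bus."""
        sent, transitions = [], 0
        prev_word, prev_flag = 0, 0
        mask = (1 << width) - 1
        for w in words:
            if hamming(prev_word, w) > width // 2:
                w, flag = w ^ mask, 1      # inverting saves transitions
            else:
                flag = 0
            transitions += hamming(prev_word, w) + (flag ^ prev_flag)
            sent.append((w, flag))
            prev_word, prev_flag = w, flag
        return sent, transitions
    ```

    For the worst-case sequence 0x00, 0xFF, 0x00 the raw bus toggles 16 lines, while the encoded bus (including the flag line) toggles only 2, illustrating how activity-reducing codes trade one extra wire for lower dynamic power.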

  18. The VLSI handbook

    CERN Document Server

    Chen, Wai-Kai

    2007-01-01

    Written by a stellar international panel of expert contributors, this handbook remains the most up-to-date, reliable, and comprehensive source for real answers to practical problems. In addition to updated information in most chapters, this edition features several heavily revised and completely rewritten chapters, new chapters on such topics as CMOS fabrication and high-speed circuit design, heavily revised sections on testing of digital systems and design languages, and two entirely new sections on low-power electronics and VLSI signal processing. An updated compendium of references and othe

  19. Design of two easily-testable VLSI array multipliers

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, J.; Shen, J.P.

    1983-01-01

Array multipliers are well-suited to VLSI implementation because of the regularity in their iterative structure. However, most VLSI circuits are very difficult to test. This paper shows that, with appropriate cell design, array multipliers can be designed to be very easily testable. An array multiplier is called c-testable if all its adder cells can be exhaustively tested while requiring only a constant number of test patterns. The testability of two well-known array multiplier structures is studied. The conventional design of the carry-save array multiplier is shown to be not c-testable. However, a modified design, using a modified adder cell, is generated and shown to be c-testable, requiring only 16 test patterns. Similar results are obtained for the Baugh-Wooley two's complement array multiplier. A modified design of the Baugh-Wooley array multiplier is shown to be c-testable, requiring 55 test patterns. The implementation of a practical c-testable 16×16 array multiplier is also presented. 10 references.
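
    The regular, iterative structure the abstract exploits can be seen in a bit-level model of a generic unsigned array multiplier built entirely from one replicated full-adder cell. This is a textbook sketch, not the paper's modified c-testable cell design:

    ```python
    # Bit-level model of an unsigned array multiplier: AND gates form the
    # partial products, and rows of identical full-adder cells accumulate them.
    def full_adder(a, b, cin):
        s = a ^ b ^ cin
        cout = (a & b) | (a & cin) | (b & cin)
        return s, cout

    def ripple_add(a_bits, b_bits):
        """Add two equal-length little-endian bit vectors with a ripple of
        full-adder cells; returns one extra bit for the final carry."""
        out, carry = [], 0
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        out.append(carry)
        return out

    def array_multiply(x, y, n=4):
        """Multiply two n-bit unsigned integers via rows of adder cells."""
        width = 2 * n
        acc = [0] * width
        for j in range(n):          # one row of AND gates + adders per y bit
            row = ([0] * j
                   + [((x >> i) & 1) & ((y >> j) & 1) for i in range(n)]
                   + [0] * (width - n - j))
            acc = ripple_add(acc, row)[:width]
        return sum(bit << k for k, bit in enumerate(acc))

    # Exhaustive check over all 4-bit operand pairs.
    assert all(array_multiply(a, b) == a * b for a in range(16) for b in range(16))
    ```

    Because every cell is the same `full_adder`, a test set that exercises one cell's eight input combinations can, with the structural modifications the paper describes, be propagated to every cell at once -- the essence of c-testability.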

  20. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

This book presents state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph to model the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks", that are presented in great detail and includes the source code of several of the techniques p...

  1. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    Science.gov (United States)

    2007-03-31

Final report, 03/01/04 - 02/28/07. Neuromorphic VLSI-based Bat Echolocation for Micro-aerial Vehicle...uncovered interesting new issues in our choice for representing the intensity of signals. We have just finished testing the first chip version of an echo...timing-based algorithm ('openspace') for sonar-guided navigation amidst multiple obstacles. Subject terms: neuromorphic VLSI, bat echolocation

  2. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. Extensive study of the properties of these new pseudorandom number generators is made using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design for testability circuitry.
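
    The abstract does not name the cellular-automaton rules used. As an illustrative sketch, the code below builds a pseudorandom bit generator from elementary rule 30, a rule often cited for CA-based random number generation; the generators in this literature may instead use other rules (e.g. rule 90/150 hybrids), but the structure is the same:

    ```python
    # One-dimensional elementary cellular automaton as a pseudorandom source.
    # Each cell updates from its 3-cell neighborhood; in hardware, every cell
    # yields one pseudorandom bit per clock cycle, fully in parallel.
    def ca_step(cells, rule=30):
        """One synchronous update with cyclic (ring) boundary conditions."""
        n = len(cells)
        return [(rule >> ((cells[(i - 1) % n] << 2)
                          | (cells[i] << 1)
                          | cells[(i + 1) % n])) & 1
                for i in range(n)]

    def ca_random_bits(seed_cells, steps, tap=0):
        """Collect one bit per clock from a single tap cell (a software stand-in
        for reading the whole parallel bit vector each cycle)."""
        cells, bits = list(seed_cells), []
        for _ in range(steps):
            cells = ca_step(cells)
            bits.append(cells[tap])
        return bits

    seed = [0] * 15 + [1] + [0] * 15      # single seeded cell in a 31-cell ring
    bits = ca_random_bits(seed, 64)
    ```

    The appeal for VLSI is locality: each cell needs only its two neighbors, so the generator has no global carry chains, unlike an LFSR-based or arithmetic PRNG.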

  3. VLSI Design with Alliance Free CAD Tools: an Implementation Example

    Directory of Open Access Journals (Sweden)

    Chávez-Bracamontes Ramón

    2015-07-01

Full Text Available This paper presents the methodology used for a digital integrated circuit design that implements the communication protocol known as the Serial Peripheral Interface, using the Alliance CAD System. The aim of this paper is to show how the work of VLSI design can be done by graduate and undergraduate students with minimal resources and experience. The physical design was sent for fabrication using the CMOS AMI C5 process, which features a 0.5 micrometer transistor size, sponsored by the MOSIS Educational Program. Tests were made on a platform that transfers data from inertial sensor measurements to the designed SPI chip, which in turn sends the data back on a parallel bus to a common microcontroller. The results show the efficiency of the employed methodology in VLSI design, as well as the feasibility of IC manufacturing for school projects that have insufficient or no source of funding.

  4. A second generation 50 Mbps VLSI level zero processing system prototype

    Science.gov (United States)

    Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby

    1994-01-01

    Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 Megabits per second (Mbp's) capability in 1992. With a new generation of high-density Application-specific Integrated Circuits (ASIC) and a Mass Storage System (MSS) based on the High-performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbp's performance. This paper describes the second generation LZPS prototype based upon VLSI technologies.

  5. VLSI Design of Trusted Virtual Sensors

    Directory of Open Access Journals (Sweden)

    Macarena C. Martínez-Rodríguez

    2018-01-01

Full Text Available This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).
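
    A PieceWise-Affine hyper-Rectangular model of the kind described above splits the input domain into axis-aligned regions, each carrying its own affine map. The sketch below is a minimal 1-D illustration; the partition, coefficients, and toy target function are assumptions for the example, not the paper's programmed parameters:

    ```python
    # Minimal 1-D PieceWise-Affine hyper-Rectangular (PWAR) evaluator: locate
    # the region containing x, then apply that region's affine map y = a*x + b.
    import bisect

    def make_pwar(breakpoints, coeffs):
        """breakpoints: sorted interior thresholds partitioning the input axis.
        coeffs: one (a, b) affine pair per region (len(breakpoints) + 1)."""
        assert len(coeffs) == len(breakpoints) + 1
        def sensor(x):
            region = bisect.bisect_right(breakpoints, x)  # region lookup
            a, b = coeffs[region]
            return a * x + b
        return sensor

    # Toy two-piece model approximating |x| as the "estimated" variable.
    est = make_pwar([0.0], [(-1.0, 0.0), (1.0, 0.0)])
    ```

    In hardware the region lookup becomes a comparator tree and the affine map a multiply-accumulate, which is what keeps such virtual sensors small and fast.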

  6. VLSI Design of Trusted Virtual Sensors.

    Science.gov (United States)

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).

  7. VLSI Architectures for the Multiplication of Integers Modulo a Fermat Number

    Science.gov (United States)

    Chang, J. J.; Truong, T. K.; Reed, I. S.; Hsu, I. S.

    1984-01-01

    Multiplication is central in the implementation of Fermat number transforms and other residue number algorithms. There is need for a good multiplication algorithm that can be realized easily on a very large scale integration (VLSI) chip. The Leibowitz multiplier is modified to realize multiplication in the ring of integers modulo a Fermat number. This new algorithm requires only a sequence of cyclic shifts and additions. The designs developed for this new multiplier are regular, simple, expandable, and, therefore, suitable for VLSI implementation.
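
    The shift-and-add structure the abstract refers to rests on the identity 2^b ≡ -1 (mod F_t) for a Fermat number F_t = 2^b + 1, so multiplying by a power of two reduces to a b-bit shift plus a sign correction. The sketch below demonstrates this for F_4 = 65537; it is an arithmetic illustration, not the Leibowitz multiplier's actual cell-level design:

    ```python
    # Multiplication by a power of two modulo F_4 = 2^16 + 1, using only
    # shifts, masks, and one subtraction -- the operations that map cheaply
    # onto VLSI. Works because 2^16 ≡ -1 (mod F_4).
    B = 16
    F = (1 << B) + 1              # F_4 = 65537

    def mul_pow2_mod_f(a, s):
        """Compute (a * 2**s) % F for 0 <= a < F without a general multiply."""
        s %= 2 * B                # 2^(2B) = (2^B)^2 ≡ (-1)^2 = 1 (mod F)
        neg = s >= B              # crossing B bits flips the sign
        s %= B
        shifted = a << s
        low, high = shifted & ((1 << B) - 1), shifted >> B
        r = (low - high) % F      # 2^B ≡ -1 folds the overflow back negated
        return (F - r) % F if neg else r

    # Cross-check the shift trick against ordinary modular arithmetic.
    assert all(mul_pow2_mod_f(a, s) == (a * pow(2, s, F)) % F
               for a in (0, 1, 12345, 65536) for s in range(40))
    ```

    In a Fermat number transform all twiddle factors are powers of two, so this shift-with-sign-correction replaces every multiplier in the butterfly, which is the regularity the abstract highlights as suitable for VLSI.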

  8. A VLSI image processor via pseudo-mersenne transforms

    International Nuclear Information System (INIS)

    Sei, W.J.; Jagadeesh, J.M.

    1986-01-01

The computational burden of image processing in medical fields, where a large amount of information must be processed quickly and accurately, has led to consideration of special-purpose image processor chip design for some time. The very large scale integration (VLSI) revolution has made it cost-effective and feasible to consider the design of special-purpose chips for medical imaging fields. This paper describes a VLSI CMOS chip suitable for parallel implementation of image processing algorithms and cyclic convolutions by using the Pseudo-Mersenne Number Transform (PMNT). The main advantages of the PMNT over the Fast Fourier Transform (FFT) are: (1) no multiplications are required; (2) integer arithmetic is used. The design and development of this processor, which operates on 32-point convolutions or 5×5 window images, are described

  9. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam; Ghoneim, Mohamed T.; El Boghdady, Nawal; Halawa, Sarah; Iskander, Sophinese M.; Anis, Mohab H.

    2011-01-01

    This paper proposes combining a custom-designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result, a standby leakage power reduction of 99% and energy savings of 33.3% are achieved.

  10. Synthesis of on-chip control circuits for mVLSI biochips

    DEFF Research Database (Denmark)

    Potluri, Seetal; Schneider, Alexander Rüdiger; Hørslev-Petersen, Martin

    2017-01-01

    …them to laboratory environments. To address this issue, researchers have proposed methods to reduce the number of off-chip pressure sources, through integration of on-chip pneumatic control logic circuits fabricated using three-layer monolithic membrane valve technology. Traditionally, mVLSI biochip … on-chip control circuit design and (iii) the integration of on-chip control in the placement and routing design tasks. In this paper we present a design methodology for logic synthesis and physical synthesis of mVLSI biochips that use on-chip control. We show how the proposed methodology can be successfully applied to generate biochip layouts with integrated on-chip pneumatic control.

  11. Emerging Applications for High K Materials in VLSI Technology

    Science.gov (United States)

    Clark, Robert D.

    2014-01-01

    The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing. PMID:28788599

  12. Emerging Applications for High K Materials in VLSI Technology

    Directory of Open Access Journals (Sweden)

    Robert D. Clark

    2014-04-01

    Full Text Available The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI manufacturing for leading edge Dynamic Random Access Memory (DRAM and Complementary Metal Oxide Semiconductor (CMOS applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing.

  13. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam

    2011-12-01

    Power dissipation poses a great challenge for VLSI designers. With the intense down-scaling of technology, the total power consumption of the chip is made up primarily of leakage power dissipation. This paper proposes combining a custom-designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result of implementing this novel power gating technique, a standby leakage power reduction of 99% and energy savings of 33.3% are achieved. Finally, the possible effects of surge currents and ground bounce noise are studied. These findings allow longer operation times for battery-operated systems characterized by long standby periods. © 2011 IEEE.
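    The reported standby savings can be sanity-checked with simple standby-energy arithmetic. A sketch of the calculation (the absolute power figure below is a hypothetical placeholder, not a value from the paper; only the 99% reduction is quoted from the abstract):

```python
def standby_energy(p_leak_w, reduction, t_s):
    # Standby energy with power gating: only the residual (ungated)
    # fraction of the leakage power is dissipated during standby.
    return p_leak_w * (1.0 - reduction) * t_s

p_leak = 1e-3                                   # hypothetical leakage, W
e_ungated = standby_energy(p_leak, 0.0, 3600.0)  # one hour, no gating
e_gated = standby_energy(p_leak, 0.99, 3600.0)   # 99% leakage reduction
savings = 1.0 - e_gated / e_ungated              # fraction of standby energy saved
```

Total chip energy savings (the paper's 33.3%) are smaller than the standby figure because active-mode energy is unaffected by the gate.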

  14. Harnessing VLSI System Design with EDA Tools

    CERN Document Server

    Kamat, Rajanish K; Gaikwad, Pawan K; Guhilot, Hansraj

    2012-01-01

    This book explores various dimensions of EDA technologies for achieving different goals in VLSI system design. Although the scope of EDA is very broad and comprises diversified hardware and software tools to accomplish different phases of VLSI system design, such as design, layout, simulation, testability, prototyping and implementation, this book focuses only on demystifying the code, a.k.a. firmware development and its implementation with FPGAs. Since there are a variety of languages for system design, this book covers various issues related to VHDL, Verilog and System C synergized with EDA tools, using a variety of case studies such as testability, verification and power consumption. * Covers aspects of VHDL, Verilog and Handel C in one text; * Enables designers to judge the appropriateness of each EDA tool for relevant applications; * Omits discussion of design platforms and focuses on design case studies; * Uses design case studies from diversified application domains such as network on chip, hospital on...

  15. Development methods for VLSI-processors

    International Nuclear Information System (INIS)

    Horninger, K.; Sandweg, G.

    1982-01-01

    The aim of this project, which was originally planned for 3 years, was the development of modern system and circuit concepts for VLSI processors having a 32-bit-wide data path. The result of this first year's work is the concept of a general-purpose processor. This processor is not only logically but also physically (on the chip) divided into four functional units: a microprogrammable instruction unit, an execution unit in slice technique, a fully associative cache memory and an I/O unit. For the ALU of the execution unit, circuits in PLA and slice techniques have been realized. On the basis of regularity, area consumption and achievable performance, the slice technique has been preferred. The designs utilize self-testing circuitry. (orig.) [de

  16. Drift chamber tracking with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-10-01

    We have tested a commercial analog VLSI neural network chip for finding in real time the intercept and slope of charged particles traversing a drift chamber. Voltages proportional to the drift times were input to the Intel ETANN chip and the outputs were recorded and later compared off line to conventional track fits. We will discuss the chamber and test setup, the chip specifications, and results of recent tests. We'll briefly discuss possible applications in high energy physics detector triggers

  17. Embedded Processor Based Automatic Temperature Control of VLSI Chips

    Directory of Open Access Journals (Sweden)

    Narasimha Murthy Yayavaram

    2009-01-01

    Full Text Available This paper presents embedded processor based automatic temperature control of VLSI chips, using the temperature sensor LM35 and the ARM processor LPC2378. Due to their very high packing density, VLSI chips heat up quickly and, if not cooled properly, their performance is severely affected. In the present work, the sensor, which is kept in close proximity to the IC, senses the temperature, and the speed of the fan mounted near the IC is controlled based on the PWM signal generated by the ARM processor. A buzzer is also provided with the hardware to indicate either the failure of the fan or overheating of the IC. The entire process is achieved by developing a suitable embedded C program.
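    The control law described — a PWM duty cycle derived from the sensed temperature — can be sketched as a simple linear map. The thresholds and names below are illustrative; the paper's actual LPC2378 firmware is not given:

```python
def fan_duty(temp_c, t_min=40.0, t_max=70.0):
    """Map die temperature (Celsius) to a PWM duty cycle in [0, 1].

    Fan is off below t_min, at full speed above t_max, and ramps
    linearly in between (hypothetical thresholds for illustration).
    """
    if temp_c <= t_min:
        return 0.0
    if temp_c >= t_max:
        return 1.0
    return (temp_c - t_min) / (t_max - t_min)
```

In firmware, the returned fraction would be scaled to the timer's PWM match register; a buzzer condition could be raised when the temperature stays above `t_max` despite a 100% duty cycle.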

  18. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    Full Text Available The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO unit, A-, B-, Γ-, and LLR output calculation modules. Measurements of complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures and a proposed parameter set lead to defining equations and comparison between those architectures.

  19. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  20. A Knowledge Based Approach to VLSI CAD

    Science.gov (United States)

    1983-09-01

    A Knowledge Based Approach to VLSI CAD, Louis L. Steinberg and … One of the major issues lies in building up and managing the knowledge base of design expertise. We expect that, as with many recent expert systems, in order to…

  1. A multichip aVLSI system emulating orientation selectivity of primary visual cortical cells.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2005-07-01

    In this paper, we designed and fabricated a multichip neuromorphic analog very large scale integrated (aVLSI) system, which emulates the orientation selective response of the simple cell in the primary visual cortex. The system consists of a silicon retina and an orientation chip. An image, which is filtered by a concentric center-surround (CS) antagonistic receptive field of the silicon retina, is transferred to the orientation chip. The image transfer from the silicon retina to the orientation chip is carried out with analog signals. The orientation chip selectively aggregates multiple pixels of the silicon retina, mimicking the feedforward model proposed by Hubel and Wiesel. The chip provides the orientation-selective (OS) outputs which are tuned to 0 degrees, 60 degrees, and 120 degrees. The feed-forward aggregation reduces the fixed pattern noise that is due to the mismatch of the transistors in the orientation chip. The spatial properties of the orientation selective response were examined in terms of the adjustable parameters of the chip, i.e., the number of aggregated pixels and size of the receptive field of the silicon retina. The multichip aVLSI architecture used in the present study can be applied to implement higher order cells such as the complex cell of the primary visual cortex.

  2. Characterizations and computational complexity of systolic trellis automata

    Energy Technology Data Exchange (ETDEWEB)

    Ibarra, O H; Kim, S M

    1984-03-01

    Systolic trellis automata are simple models for VLSI. The authors characterize the computing power of these models in terms of Turing machines. The characterizations are useful in proving new results as well as giving simpler proofs of known results. They also derive lower and upper bounds on the computational complexity of the models. 18 references.

  3. Advanced symbolic analysis for VLSI systems methods and applications

    CERN Document Server

    Shi, Guoyong; Tlelo Cuautle, Esteban

    2014-01-01

    This book provides comprehensive coverage of the recent advances in symbolic analysis techniques for design automation of nanometer VLSI systems. The presentation is organized in parts of fundamentals, basic implementation methods and applications for VLSI design. Topics emphasized include  statistical timing and crosstalk analysis, statistical and parallel analysis, performance bound analysis and behavioral modeling for analog integrated circuits . Among the recent advances, the Binary Decision Diagram (BDD) based approaches are studied in depth. The BDD-based hierarchical symbolic analysis approaches, have essentially broken the analog circuit size barrier. In particular, this book   • Provides an overview of classical symbolic analysis methods and a comprehensive presentation on the modern  BDD-based symbolic analysis techniques; • Describes detailed implementation strategies for BDD-based algorithms, including the principles of zero-suppression, variable ordering and canonical reduction; • Int...

  4. UW VLSI chip tester

    Science.gov (United States)

    McKenzie, Neil

    1989-12-01

    We present a design for a low-cost, functional VLSI chip tester. It is based on the Apple Macintosh II personal computer. It tests chips that have up to 128 pins. All pin drivers of the tester are bidirectional; each pin is programmed independently as an input or an output. The tester can test both static and dynamic chips. Rudimentary speed testing is provided. Chips are tested by executing C programs written by the user. A software library is provided for program development. Tests run under both the Mac Operating System and A/UX. The design is implemented using Xilinx Logic Cell Arrays. Price/performance tradeoffs are discussed.

  5. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the constructed codes' performance is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  6. VLSI structures for track finding

    International Nuclear Information System (INIS)

    Dell'Orso, M.

    1989-01-01

    We discuss the architecture of a device based on the concept of associative memory designed to solve the track finding problem, typical of high energy physics experiments, in a time span of a few microseconds even for very high multiplicity events. This ''machine'' is implemented as a large array of custom VLSI chips. All the chips are equal and each of them stores a number of ''patterns''. All the patterns in all the chips are compared in parallel to the data coming from the detector while the detector is being read out. (orig.)

  7. Digital VLSI design with Verilog a textbook from Silicon Valley Technical Institute

    CERN Document Server

    Williams, John

    2008-01-01

    This unique textbook is structured as a step-by-step course of study along the lines of a VLSI IC design project. In a nominal schedule of 12 weeks, two days and about 10 hours per week, the entire verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer - deserializer, including synthesizable PLLs. Digital VLSI Design With Verilog is all an engineer needs for in-depth understanding of the verilog language: Syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided on the

  8. VLSI architecture and design for the Fermat Number Transform implementation

    Energy Technology Data Exchange (ETDEWEB)

    Pajayakrit, A.

    1987-01-01

    A new technique of sectioning a pipelined transformer, using the Fermat Number Transform (FNT), is introduced. Also, a novel VLSI design which overcomes the problems of implementing FNTs, for use in fast convolution/correlation, is described. The design comprises one complete section of a pipelined transformer and may be programmed to function at any point in a forward or inverse pipeline, so allowing the construction of a pipelined convolver or correlator using identical chips; thus the favorable properties of the transform can be exploited. This overcomes the difficulty of fitting a complete pipeline onto one chip without resorting to the use of several different designs. The implementation of a high-speed convolver/correlator using the VLSI chips has been successfully developed and tested. For impulse response lengths of up to 16 points, sampling rates of 0.5 MHz can be achieved. Finally, the filter speed performance using the FNT chips is compared to other designs and conclusions drawn on the merits of the FNT for this application. Also, the advantages and limitations of the FNT are analyzed, with respect to the more conventional FFT, and the results are provided.

  9. Numerical analysis of electromigration in thin film VLSI interconnections

    NARCIS (Netherlands)

    Petrescu, V.; Mouthaan, A.J.; Schoenmaker, W.; Angelescu, S.; Vissarion, R.; Dima, G.; Wallinga, Hans; Profirescu, M.D.

    1995-01-01

    Due to the continuing downscaling of the dimensions in VLSI circuits, electromigration is becoming a serious reliability hazard. A software tool based on finite element analysis has been developed to solve the two partial differential equations of the two particle vacancy/imperfection model.

  10. Physico-topological methods of increasing stability of the VLSI circuit components to irradiation. Fiziko-topologhicheskie sposoby uluchsheniya radiatsionnoj stojkosti komponentov BIS

    Energy Technology Data Exchange (ETDEWEB)

    Pereshenkov, V S [MIFI, Moscow, (Russian Federation); Shishianu, F S; Rusanovskij, V I [S. Lazo KPI, Chisinau, (Moldova, Republic of)

    1992-01-01

    The paper presents the method used and the experimental results obtained for an 8-bit microprocessor irradiated with γ-rays and neutrons. The correlation of the electrical and technological parameters with the irradiation parameters is revealed. The influence of leakage current between devices incorporated in VLSI circuits was studied. The obtained results make it possible to determine the technological parameters necessary for designing circuits able to work at predetermined doses. The substrate doping concentration for isolation that eliminates the leakage current between devices and prevents VLSI circuit breakdown was determined. (Author).

  11. Heavy ion tests on programmable VLSI

    International Nuclear Information System (INIS)

    Provost-Grellier, A.

    1989-11-01

    The radiation from the space environment induces operational damage in onboard computer systems. The definition of a strategy for the qualification and choice of Very Large Scale Integrated (VLSI) circuits is needed. The 'upset' phenomenon is known to be the most critical integrated circuit radiation effect. The strategies for testing integrated circuits are reviewed. A method and a test device were developed and applied to candidate circuits for space applications. Cyclotron, synchrotron and Californium source experiments were carried out [fr

  12. VLSI System Implementation of 200 MHz, 8-bit, 90nm CMOS Arithmetic and Logic Unit (ALU Processor Controller

    Directory of Open Access Journals (Sweden)

    Fazal NOORBASHA

    2012-08-01

    Full Text Available The present study describes the Very Large Scale Integration (VLSI) system implementation of a 200 MHz, 8-bit, 90 nm Complementary Metal Oxide Semiconductor (CMOS) Arithmetic and Logic Unit (ALU) processor controller with logic gate design style and 0.12 µm six-metal 90 nm CMOS fabrication technology. The system blocks and the behaviour are defined and the logical design is implemented at gate level in the design phase. Then, the logic circuits are simulated and the subunits are converted into a 90 nm CMOS layout. Finally, in order to construct the VLSI system these units are placed in the floor plan and simulated with analog and digital, logic and switch level simulators. The results of the simulations indicate that the VLSI system can control different instructions, which can be divided into subgroups: transfer instructions, arithmetic and logic instructions, rotate and shift instructions, branch instructions, input/output instructions, and control instructions. The data bus of the system is 16-bit. It runs at 200 MHz, and the operating voltage is 1.2 V. In this paper, the parametric analysis of the system, the design steps and the obtained results are explained.

  13. Communication Complexity A treasure house of lower bounds

    Indian Academy of Sciences (India)

    Prahladh Harsha TIFR

    Applications: data structures, VLSI design, time-space tradeoffs, circuit complexity, streaming, auctions, combinatorial optimization, and more. Randomized communication complexity of INTER: Ω(n). There is no parallelizable monotone circuit that computes a matching in a given graph …

  14. VLSI Technology for Cognitive Radio

    Science.gov (United States)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks in cognitive radio is making the spectrum sensing scheme efficient enough to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensor. The VLSI structure of the optimised flexible spectrum sensor is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
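    The energy detection scheme the abstract builds on reduces to comparing average signal energy against a threshold. A minimal software sketch of the principle (interface and threshold are illustrative, not the paper's hardware design):

```python
def energy_detect(samples, threshold):
    """Classic energy detector for spectrum sensing.

    Averages |x|^2 over the sensing window and declares the channel
    occupied when the average energy exceeds the threshold.
    """
    energy = sum(abs(s) ** 2 for s in samples) / len(samples)
    return energy > threshold
```

In practice the threshold is set from the noise floor to meet a target false-alarm probability; the VLSI version replaces the squaring and averaging with fixed-point filter/accumulator hardware.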

  15. Applications of VLSI circuits to medical imaging

    International Nuclear Information System (INIS)

    O'Donnell, M.

    1988-01-01

    In this paper the application of advanced VLSI circuits to medical imaging is explored. The relationship of both general purpose signal processing chips and custom devices to medical imaging is discussed using examples of fabricated chips. In addition, advanced CAD tools for silicon compilation are presented. Devices built with these tools represent a possible alternative to custom devices and general purpose signal processors for the next generation of medical imaging systems

  16. VLSI Design of SVM-Based Seizure Detection System With On-Chip Learning Capability.

    Science.gov (United States)

    Feng, Lichen; Li, Zunchao; Wang, Yuanfa

    2018-02-01

    A portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time-frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with the table-driven-based Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using the two publicly available EEG datasets. Experimental results show that the designed VLSI system improves the detection accuracy and training efficiency.
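    The SVM module's core inference computation — a Gaussian-kernel decision sum over support vectors — can be sketched as follows. This is a generic RBF-SVM decision function, not the paper's table-driven hardware; all names and parameters are illustrative:

```python
import math

def svm_decision(x, support_vectors, alphas, labels, bias, gamma):
    """RBF-kernel SVM decision value.

    Computes sum_i alpha_i * y_i * exp(-gamma * ||sv_i - x||^2) + bias.
    A positive value classifies the feature vector as one class
    (e.g., seizure), a negative value as the other.
    """
    s = bias
    for sv, a, y in zip(support_vectors, alphas, labels):
        d2 = sum((xi - si) ** 2 for xi, si in zip(x, sv))
        s += a * y * math.exp(-gamma * d2)
    return s
```

A hardware version typically replaces `exp` with a lookup table (the "table-driven" kernel the abstract mentions) so that on-chip learning and inference avoid transcendental function units.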

  17. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    Science.gov (United States)

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  18. VLSI top-down design based on the separation of hierarchies

    NARCIS (Netherlands)

    Spaanenburg, L.; Broekema, A.; Leenstra, J.; Huys, C.

    1986-01-01

    Despite the presence of structure, interactions between the three views on VLSI design still lead to lengthy iterations. By separating the hierarchies for the respective views, the interactions are reduced. This separated hierarchy allows top-down design with functional abstractions as exemplified

  19. Development of Radhard VLSI electronics for SSC calorimeters

    International Nuclear Information System (INIS)

    Dawson, J.W.; Nodulman, L.J.

    1989-01-01

    A new program of development of integrated electronics for liquid argon calorimeters in the SSC detector environment is being started at Argonne National Laboratory. Scientists from Brookhaven National Laboratory and Vanderbilt University together with an industrial participant are expected to collaborate in this work. Interaction rates, segmentation, and the radiation environment dictate that front-end electronics of SSC calorimeters must be implemented in the form of highly integrated, radhard, analog, low noise, VLSI custom monolithic devices. Important considerations are power dissipation, choice of functions integrated on the front-end chips, and cabling requirements. An extensive level of expertise in radhard electronics exists within the industrial community, and a primary objective of this work is to bring that expertise to bear on the problems of SSC detector design. Radiation hardness measurements and requirements as well as calorimeter design will be primarily the responsibility of Argonne scientists and our Brookhaven and Vanderbilt colleagues. Radhard VLSI design and fabrication will be primarily the industrial participant's responsibility. The rapid-cycling synchrotron at Argonne will be used for radiation damage studies involving response to neutrons and charged particles, while damage from gammas will be investigated at Brookhaven. 10 refs., 6 figs., 2 tabs

  20. The test of VLSI circuits

    Science.gov (United States)

    Baviere, Ph.

    Tests which have proven effective for evaluating VLSI circuits for space applications are described. It is recommended that circuits be examined after each manufacturing step to gain fast feedback on inadequacies in the production system. Data from failure modes which occur during the operational lifetimes of circuits also permit redefinition of the manufacturing and quality control process to eliminate the defects identified. Other tests include determination of the operational envelope of the circuits, examination of the circuit response to controlled inputs, and the performance and functional speeds of ROM and RAM memories. Finally, it is desirable that all new circuits be designed with testing in mind.

  1. An analog VLSI real time optical character recognition system based on a neural architecture

    International Nuclear Information System (INIS)

    Bo, G.; Caviglia, D.; Valle, M.

    1999-01-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system

  2. An analog VLSI real time optical character recognition system based on a neural architecture

    Energy Technology Data Exchange (ETDEWEB)

    Bo, G.; Caviglia, D.; Valle, M. [Genoa Univ. (Italy). Dip. of Biophysical and Electronic Engineering

    1999-03-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  3. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  4. A novel configurable VLSI architecture design of window-based image processing method

    Science.gov (United States)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image processing architectures can only realize one specific kind of algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume extra logic resources. To address these problems, this paper proposes a new VLSI architecture for window-based image processing operations that is configurable and takes the image boundary into account. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no new delay and reduces overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. Different scalar function and reduction function operations can be performed in the pipeline, supporting a variety of window-based image processing applications. Compared with other reported structures, the performance of the new structure is similar to some and superior to others. In particular, compared with the systolic array processor CWP, this structure achieves approximately a 12.9% speed increase at the same frequency. The proposed parallel VLSI architecture was implemented with SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively. Furthermore, the processing time is independent of the different window-based algorithms mapped to the structure.
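    A generic window-based operation with explicit boundary handling can be sketched in software as follows. Border replication is used here as one common boundary policy; the paper's overlap-and-flush scheme is a hardware-specific alternative, and all names below are illustrative:

```python
def window_op(img, k, op):
    """Apply a k x k window operation `op` at every pixel of a 2D image.

    `op` takes the flattened window (a list of k*k values) and returns a
    scalar -- e.g. sum for box filtering, min/max for morphology.
    Out-of-bounds pixels are handled by border replication (clamping).
    """
    h, w, r = len(img), len(img[0]), k // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            win = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                   for dy in range(-r, r + 1)
                   for dx in range(-r, r + 1)]
            row.append(op(win))
        out.append(row)
    return out
```

Passing `op=sum` gives a 3x3 box sum; passing `max` gives grayscale dilation — the same structure serves multiple algorithms, which is the configurability the architecture targets.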

  5. Multi-net optimization of VLSI interconnect

    CERN Document Server

    Moiseev, Konstantin; Wimer, Shmuel

    2015-01-01

    This book covers layout design and layout migration methodologies for optimizing multi-net wire structures in advanced VLSI interconnects. Scaling-dependent models for interconnect power, interconnect delay and crosstalk noise are covered in depth, and several design optimization problems are addressed, such as minimization of interconnect power under delay constraints, or design for minimal delay in wire bundles within a given routing area. A handy reference or a guide for design methodologies and layout automation techniques, this book provides a foundation for physical design challenges of interconnect in advanced integrated circuits.  • Describes the evolution of interconnect scaling and provides new techniques for layout migration and optimization, focusing on multi-net optimization; • Presents research results that provide a level of design optimization which does not exist in commercially-available design automation software tools; • Includes mathematical properties and conditions for optimal...

  6. High performance VLSI telemetry data systems

    Science.gov (United States)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground-based telemetry acquisition systems, well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end-user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data system needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project-specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board-level functional component, to integrated telemetry data system.

  7. Design of 10Gbps optical encoder/decoder structure for FE-OCDMA system using SOA and opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Hwang, Seow; Alameh, Kamal

    2008-01-21

    In this paper we propose and experimentally demonstrate a reconfigurable 10 Gbps frequency-encoded (1D) encoder/decoder structure for optical code division multiple access (OCDMA). The encoder is constructed using a single semiconductor optical amplifier (SOA) and a 1D reflective Opto-VLSI processor. The SOA generates broadband amplified spontaneous emission that is dynamically sliced using digital phase holograms loaded onto the Opto-VLSI processor to generate 1D codewords. The selected wavelengths are injected back into the same SOA for amplification. The decoder is constructed using a single Opto-VLSI processor only. The encoded signal can successfully be retrieved at the decoder side only when the digital phase holograms of the encoder and the decoder are matched. The system performance is measured in terms of the auto-correlation and cross-correlation functions as well as the eye diagram.
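The matched-hologram condition can be illustrated with a toy spectral model. In the sketch below the codewords and slot count are invented for illustration, not taken from the paper; decoding is modeled as the overlap between the encoder's and decoder's wavelength masks:

```python
def correlate(code_a, code_b):
    """Dot product of two binary spectral masks: models the decoded
    intensity when hologram code_b is applied to light encoded with code_a."""
    return sum(a * b for a, b in zip(code_a, code_b))

# Hypothetical 8-slot wavelength codewords for two users.
user1 = [1, 0, 1, 0, 0, 1, 0, 1]
user2 = [0, 1, 0, 1, 1, 0, 1, 0]

auto = correlate(user1, user1)   # matched holograms -> strong peak
cross = correlate(user1, user2)  # mismatched holograms -> rejected
```

A matched decoder yields a strong auto-correlation peak while an orthogonal codeword is rejected, which is the behavior the paper quantifies with auto- and cross-correlation measurements.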

  8. Analog VLSI Models of Range-Tuned Neurons in the Bat Echolocation System

    Directory of Open Access Journals (Sweden)

    Horiuchi Timothy

    2003-01-01

    Full Text Available Bat echolocation is a fascinating topic of research for both neuroscientists and engineers, due to the complex and extremely time-constrained nature of the problem and its potential for application to engineered systems. In the bat's brainstem and midbrain exist neural circuits that are sensitive to the specific difference in time between the outgoing sonar vocalization and the returning echo. While some of the details of the neural mechanisms are known to be species-specific, a basic model of reafference-triggered, postinhibitory rebound timing is reasonably well supported by available data. We have designed low-power, analog VLSI circuits to mimic this mechanism and have demonstrated range-dependent outputs for use in a real-time sonar system. These circuits are being used to implement range-dependent vocalization amplitude, vocalization rate, and closest target isolation.

  9. An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits

    Science.gov (United States)

    Corliss, Walter F., II

    1989-03-01

    The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, that was previously designed was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented and the chip was fabricated by MOSIS. The fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full-custom VLSI chip designed at NPS to be tested with the NPS digital analysis system, the Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipment, is included as an appendix.

  10. Assimilation of Biophysical Neuronal Dynamics in Neuromorphic VLSI.

    Science.gov (United States)

    Wang, Jun; Breen, Daniel; Akinin, Abraham; Broccard, Frederic; Abarbanel, Henry D I; Cauwenberghs, Gert

    2017-12-01

    Representing the biophysics of neuronal dynamics and behavior offers a principled analysis-by-synthesis approach toward understanding mechanisms of nervous system functions. We report on a set of procedures assimilating and emulating neurobiological data on a neuromorphic very large scale integrated (VLSI) circuit. The analog VLSI chip, NeuroDyn, features 384 digitally programmable parameters specifying 4 generalized Hodgkin-Huxley neurons coupled through 12 conductance-based chemical synapses. The parameters describe reversal potentials, maximal conductances, and spline-regressed kinetic functions for the ion channel gating variables. In one set of experiments, we assimilated membrane potential recorded from one of the neurons on the chip to the model structure upon which NeuroDyn was designed, using the known current input sequence. We arrived at the programmed parameters except for model errors due to analog imperfections in the chip fabrication. In a related set of experiments, we replicated songbird individual neuron dynamics on NeuroDyn by estimating and configuring parameters extracted using data assimilation from intracellular neural recordings. Faithful emulation of detailed biophysical neural dynamics will enable the use of NeuroDyn as a tool to probe electrical and molecular properties of functional neural circuits. Neuroscience applications include studying the relationship between molecular properties of neurons and the emergence of different spike patterns or different brain behaviors. Clinical applications include studying and predicting effects of neuromodulators or neurodegenerative diseases on ion channel kinetics.
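For readers unfamiliar with the underlying model, a minimal Hodgkin-Huxley simulation is sketched below. The rate functions and constants are the classic squid-axon parameterization from the literature, not NeuroDyn's programmed values, and forward-Euler integration stands in for the chip's analog dynamics:

```python
import math

def hh_simulate(i_ext, t_ms=50.0, dt=0.01):
    """Single Hodgkin-Huxley neuron, forward Euler; returns the voltage trace (mV)."""
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4

    def rates(v):
        # Guards give the analytic limits at the removable singularities.
        am = 1.0 if v == -40.0 else 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
        bm = 4.0 * math.exp(-(v + 65) / 18)
        ah = 0.07 * math.exp(-(v + 65) / 20)
        bh = 1.0 / (1 + math.exp(-(v + 35) / 10))
        an = 0.1 if v == -55.0 else 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
        bn = 0.125 * math.exp(-(v + 65) / 80)
        return am, bm, ah, bh, an, bn

    v = -65.0
    am, bm, ah, bh, an, bn = rates(v)
    m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)  # steady-state gating
    trace = [v]
    for _ in range(int(t_ms / dt)):
        am, bm, ah, bh, an, bn = rates(v)
        i_ion = (g_na * m ** 3 * h * (v - e_na)
                 + g_k * n ** 4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        trace.append(v)
    return trace

trace = hh_simulate(10.0)   # a 10 uA/cm^2 step current drives repetitive spiking
peak = max(trace)
```

With this drive the membrane fires action potentials whose peaks overshoot 0 mV; data assimilation as in the paper amounts to recovering the conductances and rate-function shapes from such voltage traces.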

  11. Operation of a Fast-RICH Prototype with VLSI readout electronics

    Energy Technology Data Exchange (ETDEWEB)

    Guyonnet, J.L. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Arnold, R. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Jobez, J.P. (Coll. de France, 75 - Paris (France)); Seguinot, J. (Coll. de France, 75 - Paris (France)); Ypsilantis, T. (Coll. de France, 75 - Paris (France)); Chesi, E. (CERN / ECP Div., Geneve (Switzerland)); Racz, A. (CERN / ECP Div., Geneve (Switzerland)); Egger, J. (Paul Scherrer Inst., Villigen (Switzerland)); Gabathuler, K. (Paul Scherrer Inst., Villigen (Switzerland)); Joram, C. (Karlsruhe Univ. (Germany)); Adachi, I. (KEK, Tsukuba (Japan)); Enomoto, R. (KEK, Tsukuba (Japan)); Sumiyoshi, T. (KEK, Tsukuba (Japan))

    1994-04-01

    We discuss the first test results, obtained with cosmic rays, of a full-scale Fast-RICH Prototype with proximity-focused 10 mm thick LiF (CaF₂) solid radiators, TEA as photosensor in CH₄, and readout of 12 × 10³ cathode pads (5.334 × 6.604 mm²) using dedicated VLSI electronics we have developed. The number of detected photoelectrons is 7.7 (6.9) for the CaF₂ (LiF) radiator, very near to the expected values 6.4 (7.5) from Monte Carlo simulations. The single-photon Cherenkov angle resolution σ_θ

  12. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available The Advanced Encryption Standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of the AES algorithm using a key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementation. Experimental results reveal that the proposed AES architecture offers superior performance to existing VLSI architectures in terms of power, throughput and critical path delay.

  13. High-energy heavy ion testing of VLSI devices for single event ...

    Indian Academy of Sciences (India)

    Unknown

    per describes the high-energy heavy ion radiation testing of VLSI devices for single event upset (SEU) ... The experimental set up employed to produce low flux of heavy ions viz. silicon ... through which they pass, leaving behind a wake of elec- ... for use in Bus Management Unit (BMU) and bulk CMOS ... was scheduled.

  14. The AMchip: A VLSI associative memory for track finding

    International Nuclear Information System (INIS)

    Morsani, F.; Galeotti, S.; Passuello, D.; Amendolia, S.R.; Ristori, L.; Turini, N.

    1992-01-01

    An associative memory to be used for super-fast track finding in future high energy physics experiments has been implemented on silicon as a full-custom CMOS VLSI chip (the AMchip). The first prototype has been designed and successfully tested at INFN in Pisa. It is implemented in 1.6 μm, double metal, silicon gate CMOS technology and contains about 140 000 MOS transistors on a 1×1 cm² silicon chip. (orig.)

  15. Point DCT VLSI Architecture for Emerging HEVC Standard

    OpenAIRE

    Ahmed, Ashfaq; Shahid, Muhammad Usman; Rehman, Ata ur

    2012-01-01

    This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4 × 4 up to 32 × 32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to gain speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into ...

  16. Formal verification an essential toolkit for modern VLSI design

    CERN Document Server

    Seligman, Erik; Kumar, M V Achutha Kiran

    2015-01-01

    Formal Verification: An Essential Toolkit for Modern VLSI Design presents practical approaches for design and validation, with hands-on advice for working engineers integrating these techniques into their work. Building on a basic knowledge of System Verilog, this book demystifies FV and presents the practical applications that are bringing it into mainstream design and validation processes at Intel and other companies. The text prepares readers to effectively introduce FV in their organization and deploy FV techniques to increase design and validation productivity. Presents formal verific

  17. Synthesis algorithm of VLSI multipliers for ASIC

    Science.gov (United States)

    Chua, O. H.; Eldin, A. G.

    1993-01-01

    Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.
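As a behavioral reference for what such a synthesis tool emits, the sketch below models an unsigned shift-and-add array multiplier parameterized by word size (the function name and structure are illustrative, not the tool's actual output):

```python
def shift_add_multiply(a, b, width):
    """Unsigned shift-and-add multiplier: one partial-product row per
    multiplier bit, summed into an accumulator, as an array multiplier would."""
    assert 0 <= a < (1 << width) and 0 <= b < (1 << width)
    acc = 0
    for i in range(width):            # row i of the array
        if (b >> i) & 1:
            acc += a << i             # partial product shifted into position
    return acc

product = shift_add_multiply(13, 11, 8)   # 8-bit operands
```

A real generator would pick among architectures (array, tree, Booth-recoded) per sub-range of word sizes, trading speed against silicon area as the abstract describes.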

  18. An area-efficient path memory structure for VLSI Implementation of high speed Viterbi decoders

    DEFF Research Database (Denmark)

    Paaske, Erik; Pedersen, Steen; Sparsø, Jens

    1991-01-01

    Path storage and selection methods for Viterbi decoders are investigated with special emphasis on VLSI implementations. Two well-known algorithms, the register exchange algorithm (REA) and the trace-back algorithm (TBA), are considered. The REA requires the smallest number of storage elements...
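To make the trace-back path memory concrete, here is a minimal software model for a rate-1/2, constraint-length-3 convolutional code (the code polynomials and message are chosen for illustration; a hardware TBA would bound the trace-back depth rather than store the whole trellis):

```python
def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder (generators 7, 5 octal)."""
    d1 = d2 = 0
    out = []
    for u in bits:
        out.append((u ^ d1 ^ d2, u ^ d2))
        d1, d2 = u, d1
    return out

def viterbi_decode(symbols):
    """Hard-decision Viterbi decoder using a trace-back (TBA) path memory."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]        # 4 states, trellis starts in state 0
    history = []                        # per-stage (predecessor, input) table
    for r0, r1 in symbols:
        new, back = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            d1, d2 = s >> 1, s & 1
            for u in (0, 1):
                o0, o1 = u ^ d1 ^ d2, u ^ d2              # branch labels
                m = metrics[s] + (o0 != r0) + (o1 != r1)  # Hamming branch metric
                ns = (u << 1) | d1
                if m < new[ns]:
                    new[ns], back[ns] = m, (s, u)
        metrics = new
        history.append(back)
    state, bits = metrics.index(min(metrics)), []
    for back in reversed(history):      # trace back through the path memory
        state, u = back[state]
        bits.append(u)
    return bits[::-1]

message = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]      # two tail zeros terminate the trellis
coded = conv_encode(message)
coded[3] = (coded[3][0] ^ 1, coded[3][1])     # inject one channel bit error
decoded = viterbi_decode(coded)
```

The `history` list is the path memory: the REA would instead update a full decoded-bit register per state each cycle, trading storage and routing for the elimination of trace-back logic.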

  19. CAPCAL, 3-D Capacitance Calculator for VLSI Purposes

    International Nuclear Information System (INIS)

    Seidl, Albert; Klose, Helmut; Svoboda, Mildos

    2004-01-01

    1 - Description of program or function: CAPCAL is devoted to the calculation of capacitances of three-dimensional wiring configurations as typically used in VLSI circuits. Due to analogies in the mathematical description, conductance and heat transport problems can also be treated by CAPCAL. To handle a problem with CAPCAL, some approximations have to be applied to the structure under investigation: - the overall geometry has to be confined to a finite domain by using symmetry properties of the problem; - non-rectangular structures have to be simplified into an artwork of multiple boxes. 2 - Method of solution: The electric field is described by the Laplace equation. The differential equation is discretized using the finite difference method. NEA-1327/01: The linear equation system is solved using a combined ADI-multigrid method. NEA-1327/04: The linear equation system is solved using a conjugate gradient method for CAPCAL V1.3. NEA-1327/05: The linear equation system is solved using a conjugate gradient method for CAPCAL V1.3. 3 - Restrictions on the complexity of the problem: NEA-1327/01: Certain restrictions of use may arise from the dimensioning of arrays. Field lengths are defined via PARAMETER statements, which can easily be modified. If the geometry of the problem is defined such that Neumann boundaries dominate, the convergence of the iterative equation system solver is affected
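The finite-difference approach can be sketched in a few lines. The toy below (grid size and boundary layout are invented; CAPCAL's actual solvers are ADI-multigrid or conjugate gradient, not Jacobi) solves the Laplace equation between two parallel "plates" with reflecting top and bottom boundaries:

```python
def solve_laplace(n, iters=2000):
    """Jacobi iteration for the Laplace equation on an n x n grid.
    Left boundary held at 1 V, right at 0 V (the plates); top and bottom
    are reflecting (Neumann), exploiting symmetry as CAPCAL does."""
    v = [[0.0] * n for _ in range(n)]
    for row in v:
        row[0] = 1.0
    for _ in range(iters):
        nv = [row[:] for row in v]
        for i in range(n):
            for j in range(1, n - 1):
                up = v[i - 1][j] if i > 0 else v[i + 1][j]   # mirror at edges
                dn = v[i + 1][j] if i < n - 1 else v[i - 1][j]
                nv[i][j] = 0.25 * (up + dn + v[i][j - 1] + v[i][j + 1])
        v = nv
    return v

v = solve_laplace(9)
mid = v[4][4]                                # converges to the linear profile
charge = sum(row[0] - row[1] for row in v)   # field at the 1 V plate, units of eps0*h
```

For this geometry the exact potential is linear across the gap, so the midpoint sits at 0.5 V and the normalized plate charge approaches 9/8; a real extractor converts such boundary sums into capacitance values.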

  20. Fast-prototyping of VLSI

    International Nuclear Information System (INIS)

    Saucier, G.; Read, E.

    1987-01-01

    Fast-prototyping will be a reality in the very near future if both straightforward design methods and fast manufacturing facilities are available. This book focuses, first, on the motivation for fast-prototyping. Economic aspects and market considerations are analysed by European and Japanese companies. In the second chapter, new design methods are identified, mainly for full custom circuits. Of course, silicon compilers play a key role and the introduction of artificial intelligence techniques sheds a new light on the subject. At present, fast-prototyping on gate arrays or on standard cells is the most conventional technique and the third chapter updates the state-of-the art in this area. The fourth chapter concentrates specifically on the e-beam direct-writing for submicron IC technologies. In the fifth chapter, a strategic point in fast-prototyping, namely the test problem is addressed. The design for testability and the interface to the test equipment are mandatory to fulfill the test requirement for fast-prototyping. Finally, the last chapter deals with the subject of education when many people complain about the lack of use of fast-prototyping in higher education for VLSI

  1. An Asynchronous Low Power and High Performance VLSI Architecture for Viterbi Decoder Implemented with Quasi Delay Insensitive Templates

    Directory of Open Access Journals (Sweden)

    T. Kalavathi Devi

    2015-01-01

    Full Text Available Convolutional codes are widely used as Forward Error Correction (FEC) codes in digital communication systems. For decoding convolutional codes at the receiver end, the Viterbi decoder is often preferred, since it meets the demands of high speed and low power. At present, the design of a competent system in Very Large Scale Integration (VLSI) technology requires these parameters to be finely tuned. The proposed asynchronous method focuses on reducing the power consumption of the Viterbi decoder for various constraint lengths using asynchronous modules. The asynchronous designs are based on commonly used Quasi Delay Insensitive (QDI) templates, namely Precharge Half Buffer (PCHB) and Weak Conditioned Half Buffer (WCHB). The functionality of the proposed asynchronous design is simulated and verified using Tanner Spice (TSPICE) in 0.25 µm, 65 nm, and 180 nm technologies of the Taiwan Semiconductor Manufacturing Company (TSMC). The simulation results illustrate that the asynchronous design techniques achieve a 25.21% power reduction compared to the synchronous design and operate at a speed of 475 MHz.

  2. Implementation of a VLSI Level Zero Processing system utilizing the functional component approach

    Science.gov (United States)

    Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.

    1991-01-01

    A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.

  3. Use of complex electronic equipment within radiative areas of PWR power plants: feasibility study

    International Nuclear Information System (INIS)

    Fremont, P.; Carquet, M.

    1988-01-01

    EDF has undertaken a study to evaluate the technical and economic feasibility of using complex electronic equipment within radiative areas of PWR power plants. The study relies on tests of VLSI components (Random Access Memories) under gamma-ray irradiation, the aims of which are to evaluate the radiation dose the components can withstand and to develop a selection method. Results of tests at 125 rad/h and 16 rad/h are given [fr]

  4. International Conference on VLSI, Communication, Advanced Devices, Signals & Systems and Networking

    CERN Document Server

    Shirur, Yasha; Prasad, Rekha

    2013-01-01

    This book is a collection of papers presented by renowned researchers, keynote speakers and academicians in the International Conference on VLSI, Communication, Analog Designs, Signals and Systems, and Networking (VCASAN-2013), organized by B.N.M. Institute of Technology, Bangalore, India during July 17-19, 2013. The book provides global trends in cutting-edge technologies in electronics and communication engineering. The content of the book is useful to engineers, researchers and academicians as well as industry professionals.

  5. Built-in self-repair of VLSI memories employing neural nets

    Science.gov (United States)

    Mazumder, Pinaki

    1998-10-01

    The decades of the Eighties and the Nineties witnessed the spectacular growth of VLSI technology, as chip sizes increased from a few hundred devices to a staggering multi-million transistors. This trend is expected to continue as the CMOS feature size progresses toward nanometric dimensions of 100 nm and less. The SIA roadmap projects that, whereas DRAM chips will integrate over 20 billion devices in the next millennium, future microprocessors may incorporate over 100 million transistors on a single chip. As VLSI chip sizes increase, the limited accessibility of circuit components poses great difficulty for external diagnosis and replacement in the presence of faulty components. For this reason, extensive work has been done on built-in self-test techniques, but little research is known concerning built-in self-repair. Moreover, the extra hardware introduced by conventional fault-tolerance techniques is also likely to become faulty, rendering the circuit useless. This research demonstrates the feasibility of implementing electronic neural networks as intelligent hardware for memory array repair. Most importantly, we show that the neural network control possesses a robust and degradable computing capability under various fault conditions. Overall, a yield analysis performed on 64K DRAMs shows that the yield can be improved from as low as 20 percent to near 99 percent due to the self-repair design, with an overhead of no more than 7 percent.

  6. PERFORMANCE OF LEAKAGE POWER MINIMIZATION TECHNIQUE FOR CMOS VLSI TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    T. Tharaneeswaran

    2012-06-01

    Full Text Available Leakage power of CMOS VLSI technology is a great concern. To reduce leakage power in CMOS circuits, a Leakage Power Minimization Technique (LPMT) is implemented in this paper. Leakage currents are monitored and compared. The comparator kicks the charge pump to give the body voltage (Vbody). Simulations of these circuits are done using TSMC 0.35 µm technology at various operating temperatures. A current steering Digital-to-Analog Converter (CSDAC) is used as the test core to validate the idea. The test core (an 8-bit CSDAC) had a power consumption of 347.63 mW; the LPMT circuit alone consumes 6.3405 mW. This technique reduces the leakage power of the 8-bit CSDAC by 5.51 mW and increases the reliability of the test core. Mentor Graphics ELDO and EZwave are used for simulations.

  7. FILTRES: a 128 channels VLSI mixed front-end readout electronic development for microstrip detectors

    International Nuclear Information System (INIS)

    Anstotz, F.; Hu, Y.; Michel, J.; Sohler, J.L.; Lachartre, D.

    1998-01-01

    We present a VLSI digital-analog readout electronic chain for silicon microstrip detectors. The characteristics of this circuit have been optimized for the high resolution tracker of the CERN CMS experiment. The chip consists of 128 channels at 50 μm pitch. Each channel is composed of a charge amplifier, a CR-RC shaper, an analog memory, an analog processor, and an output FIFO read out serially by a multiplexer. The chip has been processed in the radiation-hard DMILL technology. This paper describes the architecture of the circuit and presents test results of the 128-channel full chain chip. (orig.)

  8. PLA realizations for VLSI state machines

    Science.gov (United States)

    Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.

    1990-01-01

    A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.

  9. VLSI design of an RSA encryption/decryption chip using systolic array based architecture

    Science.gov (United States)

    Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi

    2016-09-01

    This article presents the VLSI design of a configurable RSA public key cryptosystem supporting 512-bit, 1024-bit and 2048-bit keys, based on the Montgomery algorithm, achieving clock cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify computational complexity; together with the systolic array concept for electronic circuit design, this effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely input/output modules, a registers module, an arithmetic module and a control module. We applied the concept of systolic arrays to design the RSA encryption/decryption chip using the VHDL hardware language and verified it using the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm² (4.58 × 4.58 mm² with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
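The two building blocks named in the abstract, binary (square-and-multiply) exponentiation over Montgomery multiplication, can be sketched in software. The bit-serial reduction loop below is the part that maps naturally onto a systolic array; the toy RSA parameters are illustrative only:

```python
def mont_mul(a, b, n, r_bits):
    """Bit-serial Montgomery multiplication: returns a*b*R^-1 mod n, R = 2^r_bits.
    Requires n odd and a, b < n < R."""
    r = 0
    for i in range(r_bits):
        r += ((a >> i) & 1) * b        # conditionally add the multiplicand
        if r & 1:
            r += n                     # make the low bit divisible by 2
        r >>= 1                        # exact division by 2 (the reduction step)
    return r if r < n else r - n

def mont_exp(base, exp, n, r_bits):
    """Binary (square-and-multiply) modular exponentiation in the Montgomery domain."""
    r2 = (1 << (2 * r_bits)) % n
    x = mont_mul(base, r2, n, r_bits)      # map base into the Montgomery domain
    acc = (1 << r_bits) % n                # Montgomery form of 1
    for bit in bin(exp)[2:]:
        acc = mont_mul(acc, acc, n, r_bits)
        if bit == "1":
            acc = mont_mul(acc, x, n, r_bits)
    return mont_mul(acc, 1, n, r_bits)     # map back out of the domain

# Toy RSA keypair (p=61, q=53): n=3233, e=17, d=2753 -- illustration only.
cipher = mont_exp(65, 17, 3233, 12)
plain = mont_exp(cipher, 2753, 3233, 12)
```

In the 2048-bit chip the same loop runs with r_bits = 2048, and the per-bit add/shift cells are replicated as a systolic pipeline instead of iterated in time.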

  10. Point DCT VLSI Architecture for Emerging HEVC Standard

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmed

    2012-01-01

    Full Text Available This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4×4 up to 32×32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to gain speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into sparse submatrices in order to reduce the number of multiplications. Finally, multiplications are completely eliminated using the lifting scheme. The proposed architecture sustains real-time processing of a 1080p HD video codec running at 150 MHz.
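The sparse-decomposition idea can be seen already in the 4-point case. The sketch below uses the HEVC 4-point core transform matrix and its standard even/odd (partial butterfly) split, which cuts 16 multiplications down to 6; the lifting step that removes the remaining ones is omitted here:

```python
# HEVC 4-point core transform matrix (integer approximation of the DCT).
H4 = [[64, 64, 64, 64],
      [83, 36, -36, -83],
      [64, -64, -64, 64],
      [36, -83, 83, -36]]

def dct4_direct(x):
    """Straight matrix-vector product: 16 multiplications."""
    return [sum(H4[k][i] * x[i] for i in range(4)) for k in range(4)]

def dct4_butterfly(x):
    """Even/odd (partial butterfly) decomposition: 6 multiplications,
    the kind of sparsity a folded VLSI datapath exploits."""
    e0, e1 = x[0] + x[3], x[1] + x[2]   # even part
    o0, o1 = x[0] - x[3], x[1] - x[2]   # odd part
    return [64 * (e0 + e1), 83 * o0 + 36 * o1,
            64 * (e0 - e1), 36 * o0 - 83 * o1]

sample = [1, 2, 3, 4]
direct = dct4_direct(sample)
fast = dct4_butterfly(sample)
```

Both paths produce identical integer outputs; the larger HEVC sizes factor recursively in the same even/odd fashion, which is what makes a single folded datapath for 4×4 through 32×32 practical.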

  11. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of a code-based cryptography (cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An error control algorithm is employed both for the generation of round keys and for the diffusion of non-linearity among them. Two new functions, for bit inversion and its reversal, are developed for cryptocoding. The probability of retrieving the original key from any other round key is reduced by diffusing non-linear selective bit inversions on the round keys. Randomized selective bit inversions are performed on equal lengths of key bits by a Round Constant Feedback Shift Register, within the error correction limits of the chosen code. The complexity of retrieving the original key from any other round key is increased by optimal hardware usage. The proposed design is simulated and synthesized using VHDL coding for a Spartan-3E FPGA and results are shown. A comparative analysis between 128-bit Advanced Encryption Standard round keys and the proposed round keys shows the security strength of the proposed algorithm. This paper concludes that the chip-based multi-layer key distribution of the proposed algorithm is an enhanced solution to existing threats on cryptography algorithms.

  12. A novel VLSI processor for high-rate, high resolution spectroscopy

    CERN Document Server

    Pullia, Antonio; Gatti, E; Longoni, A; Buttler, W

    2000-01-01

    A novel time-variant VLSI shaper amplifier, suitable for multi-anode silicon drift detectors or other multi-element solid-state X-ray detection systems, is proposed. The new read-out scheme has been conceived for demanding applications with synchrotron light sources, such as X-ray holography or EXAFS, where both high count rates and high energy resolution are required. The circuit is of the linear time-variant class, accepts randomly distributed events and features a finite-width (1-10 μs) quasi-optimal weight function, ultra-low-level energy discrimination (approximately 150 eV), and full compatibility with monolithic integration in CMOS technology. Its impulse response has a staircase-like shape, but the weight function (which in time-variant systems is in general different from the impulse response) is quasi-trapezoidal. The operating principles of the new scheme as well as the first experimental results obtained with a prototype of the circuit are presented and discussed.

  13. Carbon nanotube based VLSI interconnects analysis and design

    CERN Document Server

    Kaushik, Brajesh Kumar

    2015-01-01

    The brief primarily focuses on the performance analysis of CNT-based interconnects in the current research scenario. Different CNT structures are modeled on the basis of transmission line theory. Performance comparison for different CNT structures illustrates that CNTs are more promising than Cu or other materials used in global VLSI interconnects. The brief is organized into five chapters which mainly discuss: (1) an overview of the current research scenario and basics of interconnects; (2) unique crystal structures and the basics of the physical properties of CNTs, and the production, purification and applications of CNTs; (3) a brief technical review, the geometry and equivalent RLC parameters for different single and bundled CNT structures; (4) a comparative analysis of crosstalk and delay for different single and bundled CNT structures; and (5) various unique mixed CNT bundle structures and their equivalent electrical models.
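The delay comparisons in chapter (4) typically start from lumped RC ladder approximations of the equivalent transmission line. A first-order Elmore delay estimate is sketched below; the resistance and capacitance values are invented placeholders, not CNT or Cu data from the brief:

```python
def elmore_delay(r_per_seg, c_per_seg, n_segs, c_load=0.0):
    """Elmore delay of a uniform RC ladder: each segment's resistance
    drives all downstream capacitance (first-order interconnect delay)."""
    delay = 0.0
    for i in range(1, n_segs + 1):
        downstream_c = (n_segs - i + 1) * c_per_seg + c_load
        delay += r_per_seg * downstream_c
    return delay

# Hypothetical wire: 1 kOhm total resistance, 200 fF total capacitance,
# modeled as 10 identical segments.
wire_delay = elmore_delay(100.0, 20e-15, 10)   # seconds
```

For a fixed per-unit-length wire the Elmore delay grows roughly quadratically with length, which is why a lower per-unit resistance (the promise of CNT bundles) matters most for long global interconnects.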

  14. VLSI implementation of flexible architecture for decision tree classification in data mining

    Science.gov (United States)

    Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak

    2017-07-01

    Data mining algorithms have become vital to researchers in science, engineering, medicine, business, search and security domains. In recent years, there has been a tremendous rise in the size of the data being collected and analyzed. Classification is a main difficulty faced in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for decision tree classification in data mining using the C4.5 algorithm.
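The inner loop of C4.5-style tree induction, choosing a split threshold by information gain, can be sketched as follows (a minimal illustration with made-up data; C4.5 proper uses the gain ratio and handles many further cases):

```python
import math

def entropy(labels):
    """Shannon entropy of a class-label multiset, in bits."""
    total = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def best_split(values, labels):
    """Exhaustive threshold search on one numeric attribute, keeping the
    split with maximum information gain (the core DTC inner loop)."""
    base = entropy(labels)
    best = (None, 0.0)
    for t in sorted(set(values))[:-1]:
        left = [lab for v, lab in zip(values, labels) if v <= t]
        right = [lab for v, lab in zip(values, labels) if v > t]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(labels)
        if gain > best[1]:
            best = (t, gain)
    return best

# Toy attribute: two well-separated clusters of classes 'a' and 'b'.
vals = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
labs = ['a', 'a', 'a', 'b', 'b', 'b']
threshold, gain = best_split(vals, labs)
```

A hardware DTC engine pipelines exactly this kind of compare-and-accumulate work across attributes, which is why it maps well onto a flexible VLSI datapath.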

  15. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Full Text Available Data compression plays a vital role in multimedia devices to present information in a succinct frame. The DCT structure initially used for image compression has low complexity and is area efficient. The 2D DCT also provides reasonable data compression, but from an implementation standpoint it requires more multipliers and adders, leading to larger area and higher power consumption. Taking all of this into account, this paper deals with a VLSI architecture for image compression using a ROM-free DA-based DCT (Discrete Cosine Transform) structure. This technique provides high throughput and is well suited for real-time implementation. To achieve this, the image matrix is subdivided into odd and even terms, and the multiplication functions are replaced by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-accurate image quality, establishing new trade-off levels compared to previous techniques. Overall, the proposed architecture yields reduced memory, low power consumption and high throughput. MATLAB is used as a tool for reading the input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II to synthesize and obtain details about power and area.
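The shift-and-add replacement of multipliers can be illustrated by expressing a DCT coefficient such as 1/√2 as a sum of powers of two, so that each partial product is just a shift. The shift amounts below are one possible 5-term approximation, chosen here for illustration, not taken from the paper:

```python
# 2^-1 + 2^-3 + 2^-4 + 2^-6 + 2^-8 = 0.70703125 ≈ 1/√2
SQRT1_2_SHIFTS = [1, 3, 4, 6, 8]

def shift_add_mul(x, shifts):
    # multiplierless constant multiplication: each term is a right shift
    # of a fixed-point value, modeled here on floats for clarity
    return sum(x * 2.0 ** -s for s in shifts)

approx = shift_add_mul(100.0, SQRT1_2_SHIFTS)
exact = 100.0 * 0.7071067811865476
print(abs(approx - exact) < 0.01)  # True
```

Adding further shift terms trades adder count for coefficient accuracy, which is exactly the area/quality trade-off the architecture tunes.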

  16. Computer-aided design of microfluidic very large scale integration (mVLSI) biochips design automation, testing, and design-for-testability

    CERN Document Server

    Hu, Kai; Ho, Tsung-Yi

    2017-01-01

    This book provides a comprehensive overview of flow-based, microfluidic VLSI. The authors describe and solve in a comprehensive and holistic manner practical challenges such as control synthesis, wash optimization, design for testability, and diagnosis of modern flow-based microfluidic biochips. They introduce practical solutions, based on rigorous optimization and formal models. The technical contributions presented in this book will not only shorten the product development cycle, but also accelerate the adoption and further development of modern flow-based microfluidic biochips, by facilitating the full exploitation of design complexities that are possible with current fabrication techniques. Offers the first practical problem formulation for automated control-layer design in flow-based microfluidic biochips and provides a systematic approach for solving this problem; Introduces a wash-optimization method for cross-contamination removal; Presents a design-for-testability (DfT) technique that can achieve 100...

  17. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    Science.gov (United States)

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.

  18. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  19. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    Science.gov (United States)

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single-neuron dynamics, single-synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3 mm x 3 mm microchip consumes 1.29 mW of power, making it promising for applications including neuromorphic modeling and neural prostheses.

  20. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  1. An electron undulating ring for VLSI lithography

    International Nuclear Information System (INIS)

    Tomimasu, T.; Mikado, T.; Noguchi, T.; Sugiyama, S.; Yamazaki, T.

    1985-01-01

    The development of the ETL storage ring ''TERAS'' as an undulating ring has been continued to achieve wide-area exposure of synchrotron radiation (SR) in VLSI lithography. Stable vertical and horizontal undulating motions of stored beams are demonstrated around the horizontal design orbit of TERAS, using two small steering magnets, one for vertical undulation and the other for horizontal undulation. Each steering magnet is inserted into the periodic configuration of guide field elements. As one useful application of undulating electron beams, a vertically wide exposure of SR has been demonstrated in SR lithography. The maximum vertical deviation from the design orbit occurs near the steering magnet. The maximum vertical tilt angle of the undulating beam near the nodes is about ±2 mrad for a steering magnetic field of 50 gauss. Another proposal is for high-intensity, uniform and wide exposure of SR from a wiggler installed in TERAS, using vertical and horizontal undulating motions of stored beams. A 1.4 m long permanent magnet wiggler has been installed for this purpose this April.

  2. Convolving optically addressed VLSI liquid crystal SLM

    Science.gov (United States)

    Jared, David A.; Stirk, Charles W.

    1994-03-01

    We designed, fabricated, and tested an optically addressed spatial light modulator (SLM) that performs a 3 x 3 kernel image convolution using ferroelectric liquid crystal on VLSI technology. The chip contains a 16 x 16 array of current-mirror-based convolvers with a fixed kernel for finding edges. The pixels are located on 75 micron centers, and the modulators are 20 microns on a side. The array successfully enhanced edges in illumination patterns. We developed a high-level simulation tool (CON) for analyzing the performance of convolving SLM designs. CON has a graphical interface and simulates SLM functions using SPICE-like device models. The user specifies the pixel function along with the device parameters and nonuniformities. We discovered through analysis, simulation and experiment that the operation of current-mirror-based convolver pixels is degraded at low light levels by the variation of transistor threshold voltages inherent to CMOS chips. To function acceptably, the test SLM required the input image to have a minimum irradiance of 10 μW/cm². The minimum required irradiance can be further reduced by adding a photodarlington near the photodetector or by increasing the size of the transistors used to calculate the convolution.
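The per-pixel operation each convolver performs is a fixed 3 x 3 weighted sum of its neighborhood. A software sketch of that operation (the kernel values here are a generic edge-enhancement kernel, assumed for illustration; the abstract does not give the chip's fixed coefficients):

```python
# hypothetical edge kernel: strong center, negative surround
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def convolve3x3(img, k):
    # apply a 3x3 kernel to every interior pixel; borders are left at 0
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# a flat region gives zero response; an isolated bright pixel stands out
img = [[0] * 5 for _ in range(5)]
img[2][2] = 9
out = convolve3x3(img, KERNEL)
print(out[2][2])  # 72
```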

  3. A novel low-voltage low-power analogue VLSI implementation of neural networks with on-chip back-propagation learning

    Science.gov (United States)

    Carrasco, Manuel; Garde, Andres; Murillo, Pilar; Serrano, Luis

    2005-06-01

    In this paper a novel design and implementation of a VLSI analogue neural net, based on a Multi-Layer Perceptron (MLP) with an on-chip Back-Propagation (BP) learning algorithm suitable for classification problems, is described. In order to implement a general and programmable analogue architecture, the design has been carried out hierarchically: the net is divided into synapse blocks and neuron blocks, providing an easy method for analysis. These blocks basically consist of simple cells, mainly the activation functions (NAF), their derivatives (DNAF), multipliers and weight-update circuits. The analogue design is based on current-mode translinear techniques using MOS transistors working in the weak-inversion region in order to reduce both the supply voltage and the power consumption. Moreover, to minimize noise, offset and even-order distortion, the topologies are fully differential and balanced. The circuit, named ANNE (Analogue Neural NEt), has been prototyped and characterized as a proof of concept in CMOS AMI-0.5A technology, occupying a total area of 2.7 mm². The chip includes two versions of neural nets with the on-chip BP learning algorithm, namely 2-1 and 2-2-1 implementations. The proposed nets have been experimentally tested using supply voltages from 2.5 V to 1.8 V, which is suitable for single-cell lithium-ion battery supply applications. Experimental results of both implementations included in ANNE exhibit good performance in solving classification problems. These results have been compared with other analogue VLSI implementations of neural nets published in the literature, demonstrating that our proposal is very efficient in terms of occupied area and power consumption.

  4. New domain for image analysis: VLSI circuits testing, with Romuald, specialized in parallel image processing

    Energy Technology Data Exchange (ETDEWEB)

    Rubat Du Merac, C; Jutier, P; Laurent, J; Courtois, B

    1983-07-01

    This paper describes some aspects of specifying, designing and evaluating a specialized machine, Romuald, for the capture, coding, and processing of video and scanning electron microscope (SEM) pictures. First the authors present the functional organization of the processing unit of Romuald and its hardware, giving details of its behaviour. Then they study the capture and display unit which, thanks to its flexibility, enables SEM image coding. Finally, they describe an application now being developed in their laboratory: testing VLSI circuits with new methods combining SEM voltage contrast and image processing. 15 references.

  5. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
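The feature extraction the architecture implements is simple enough to state in a few lines: locate the peak, split the waveform there, and use the area (sum of samples) of each portion as a feature. A software sketch of that algorithm (a simplified floating-point model, not the paper's fixed-point hardware):

```python
def spike_features(spike):
    # split the waveform at its peak sample; the area of each portion
    # forms a two-dimensional feature vector for clustering/sorting
    p = max(range(len(spike)), key=lambda i: spike[i])
    return (sum(spike[:p + 1]), sum(spike[p + 1:]))

# a spike that rises fast and decays slowly has most of its area
# after the peak
print(spike_features([0, 1, 5, 4, 3, 2, 1, 0]))  # (6, 10)
```

Because each portion's area is a running sum, the hardware can accumulate both features while the peak search is still in progress, which is what allows the concurrent peak-identification/area-computation datapath described above.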

  6. An SEU analysis approach for error propagation in digital VLSI CMOS ASICs

    International Nuclear Information System (INIS)

    Baze, M.P.; Bartholet, W.G.; Dao, T.A.; Buchner, S.

    1995-01-01

    A critical issue in the development of ASIC designs is the ability to achieve first-pass fabrication success. Unsuccessful fabrication runs have a serious impact on ASIC costs and schedules. The ability to predict an ASIC's radiation response prior to fabrication is therefore a key issue when designing ASICs for military and aerospace systems. This paper describes an analysis approach for calculating static bit error propagation in synchronous VLSI CMOS circuits, developed as an aid for predicting the SEU response of ASICs. The technique is intended for eventual application as an ASIC development simulation tool which can be used by circuit design engineers for performance evaluation during the pre-fabrication design process, in much the same way that logic and timing simulators are used

  7. Monolithic active pixel sensors (MAPS) in a VLSI CMOS technology

    CERN Document Server

    Turchetta, R; Manolopoulos, S; Tyndel, M; Allport, P P; Bates, R; O'Shea, V; Hall, G; Raymond, M

    2003-01-01

    Monolithic Active Pixel Sensors (MAPS) designed in a standard VLSI CMOS technology have recently been proposed as a compact pixel detector for the detection of high-energy charged particles in vertex/tracking applications. MAPS, also named CMOS sensors, are already extensively used in visible-light applications. With respect to other competing imaging technologies, CMOS sensors have several potential advantages in terms of low cost, low power, lower noise at higher speed, random access of pixels (which allows windowing of regions of interest), and the ability to integrate several functions on the same chip. This leads altogether to the concept of 'camera-on-a-chip'. In this paper, we review the use of CMOS sensors for particle physics and we analyse their performance in terms of efficiency (fill factor), signal generation, noise, readout speed and sensor area. In most high-energy physics applications, data reduction is needed in the sensor at an early stage of the data processing before transfer of the data to ta...

  8. Positron emission tomographic images and expectation maximization: A VLSI architecture for multiple iterations per second

    International Nuclear Information System (INIS)

    Jones, W.F.; Byars, L.G.; Casey, M.E.

    1988-01-01

    A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for positron emission tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high-resolution (256 x 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform forward- and back-projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. The EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. The projected cost of the system is estimated to be small in comparison with the cost of current PET scanners
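The forward- and back-projections the VLSI chips accelerate are the two matrix products inside the standard ML-EM update, λ ← (λ / Aᵀ1) ⊙ Aᵀ(y / Aλ). A toy sketch of that iteration (the 2 x 2 system matrix and counts below are invented for illustration, far smaller than a real 256 x 256 reconstruction):

```python
def mlem(A, y, n_iter=2000):
    # ML-EM: lam_j <- (lam_j / sum_i A_ij) * sum_i A_ij * y_i / (A lam)_i
    m, n = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # A^T 1
    lam = [1.0] * n
    for _ in range(n_iter):
        proj = [sum(A[i][j] * lam[j] for j in range(n)) for i in range(m)]   # forward
        ratio = [y[i] / proj[i] for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]  # back
        lam = [lam[j] * back[j] / sens[j] for j in range(n)]
    return lam

A = [[0.8, 0.2], [0.3, 0.7]]   # hypothetical system matrix
true_lam = [2.0, 5.0]
y = [sum(A[i][j] * true_lam[j] for j in range(2)) for i in range(2)]
lam = mlem(A, y)               # converges toward [2.0, 5.0]
```

Each iteration is dominated by one forward and one back projection, which is why dedicating parallel hardware to those two products yields multiple iterations per second.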

  9. Digital VLSI systems design a design manual for implementation of projects on FPGAs and ASICs using Verilog

    CERN Document Server

    Ramachandran, S

    2007-01-01

    Digital VLSI Systems Design is written for an advanced level course using Verilog and is meant for undergraduates, graduates and research scholars of Electrical, Electronics, Embedded Systems, Computer Engineering and interdisciplinary departments such as Bio Medical, Mechanical, Information Technology, Physics, etc. It serves as a reference design manual for practicing engineers and researchers as well. Diligent freelance readers and consultants may also start using this book with ease. The book presents new material and theory as well as synthesis of recent work with complete Project Designs

  10. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  11. Driving a car with custom-designed fuzzy inferencing VLSI chips and boards

    Science.gov (United States)

    Pin, Francois G.; Watanabe, Yutaka

    1993-01-01

    Vehicle control in a priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which not all the uncertainties can be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds and therefore making control of 'reflex-type' motions envisionable. The use of these boards, and the approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation in a priori unknown environments, are first discussed. Then it is described how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a priori unknown environments on the basis of sparse and imprecise sensor data. In the first mode, the car navigates fully autonomously, while in the second mode the system acts as a driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoor and outdoor experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or a safety-enhancing driver's aid using the new fuzzy inferencing hardware system and some human
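The rule evaluation the boards perform in parallel can be sketched in software as membership grading followed by weighted-average defuzzification. The membership functions, rule outputs, and distance thresholds below are invented for illustration; the actual rule base of the system is not given in the abstract:

```python
def ramp_down(x, lo, hi):
    # membership grade: 1 below lo, 0 above hi, linear in between
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def speed_command(dist):
    # two hypothetical rules over a sonar distance reading (metres):
    #   if obstacle NEAR then SLOW (0.2); if obstacle FAR then FAST (1.0)
    near = ramp_down(dist, 0.5, 3.0)
    far = 1.0 - near
    # weighted-average (centroid-style) defuzzification of rule outputs
    return (near * 0.2 + far * 1.0) / (near + far)

print(speed_command(0.5))  # 0.2
print(speed_command(3.0))  # 1.0
```

A hardware rule base evaluates every such rule simultaneously, so the total inference time is set by one grade-and-combine pass rather than by the number of rules.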

  12. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    Science.gov (United States)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

    This report presents the results of a DARPA/MTO Composite CAD project aimed at developing a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio- and microfluidic devices and complete microsystems. The project began in July 1998 and was a three-year team effort between CFD Research Corporation, the California Institute of Technology (CalTech), the University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for the design of biomicrofluidic devices and integrated systems. The developed tool would provide high-fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system-level simulations, and mixed-dimensionality design. It would combine tools for layout and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  13. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture consisting of multiple processors is presented in this paper to satisfy modern computer graphics demands, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high-density interconnection technology, it is feasible to implement a 64-processor system achieving 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  14. Compact Interconnection Networks Based on Quantum Dots

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarress, Katayoon; Spotnitz, Matthew

    2003-01-01

    Architectures that would exploit the distinct characteristics of quantum-dot cellular automata (QCA) have been proposed for digital communication networks that connect advanced digital computing circuits. In comparison with networks of wires in conventional very-large-scale integrated (VLSI) circuitry, the networks according to the proposed architectures would be more compact. The proposed architectures would make it possible to implement complex interconnection schemes that are required for some advanced parallel-computing algorithms and that are difficult (and in many cases impractical) to implement in VLSI circuitry. The difficulty of implementation in VLSI and the major potential advantage afforded by QCA were described previously in Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 42. To recapitulate: Wherever two wires in a conventional VLSI circuit cross each other and are required not to be in electrical contact with each other, there must be a layer of electrical insulation between them. This, in turn, makes it necessary to resort to a noncoplanar and possibly a multilayer design, which can be complex, expensive, and even impractical. As a result, much of the cost of designing VLSI circuits is associated with minimization of data routing and assignment of layers to minimize crossing of wires. Heretofore, these considerations have impeded the development of VLSI circuitry to implement complex, advanced interconnection schemes. On the other hand, with suitable design and under suitable operating conditions, QCA-based signal paths can be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes. 
The proposed architectures require two advances in QCA-based circuitry beyond basic QCA-based binary

  15. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits.

    Directory of Open Access Journals (Sweden)

    Danielle S Bassett

    2010-04-01

    Full Text Available Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
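Rent's rule relates the number of external connections T of a module to its number of processing elements G as a power law, T = t·Gᵖ, so the Rent exponent p can be estimated as the slope of a log-log fit. A sketch of that estimation (the data points are synthetic, generated with a known exponent, not the paper's MRI-derived networks):

```python
from math import log

def rent_exponent(gates, terminals):
    # least-squares slope of log(T) vs log(G): T = t * G^p (Rent's rule)
    xs = [log(g) for g in gates]
    ys = [log(t) for t in terminals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# synthetic partitions generated with p = 0.75, t = 4
gates = [10, 100, 1000, 10000]
terminals = [4 * g ** 0.75 for g in gates]
print(round(rent_exponent(gates, terminals), 3))  # 0.75
```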

  16. Real time track finding in a drift chamber with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-01-01

    In a test setup, a hardware neural network determined track parameters of charged particles traversing a drift chamber. Voltages proportional to the drift times in 6 cells of the 3-layer chamber were inputs to the Intel ETANN neural network chip which had been trained to give the slope and intercept of tracks. We compare network track parameters to those obtained from off-line track fits. To our knowledge this is the first on-line application of a VLSI neural network to a high energy physics detector. This test explored the potential of the chip and the practical problems of using it in a real world setting. We compare the chip performance to a neural network simulation on a conventional computer. We discuss possible applications of the chip in high energy physics detector triggers. (orig.)
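The slope and intercept the network was trained to output correspond to a straight-line least-squares fit of hit positions versus layer positions, which is also what the off-line track fits compute. A minimal reference implementation (the layer coordinates and hit values are illustrative, not the chamber's geometry):

```python
def fit_track(layers_z, hits_x):
    # straight-line least-squares fit x = a*z + b through the hits
    n = len(layers_z)
    mz = sum(layers_z) / n
    mx = sum(hits_x) / n
    a = (sum((z - mz) * (x - mx) for z, x in zip(layers_z, hits_x))
         / sum((z - mz) ** 2 for z in layers_z))
    return a, mx - a * mz

# three layers, hits lying exactly on x = 0.5*z + 1
print(fit_track([0.0, 1.0, 2.0], [1.0, 1.5, 2.0]))  # (0.5, 1.0)
```

In the test described above, the drift-time voltages play the role of the hit coordinates, and the trained chip approximates this fit in a single parallel pass instead of an iterative computation.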

  17. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.

    Science.gov (United States)

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta

    2012-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented, makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  18. Emergent auditory feature tuning in a real-time neuromorphic VLSI system

    Directory of Open Access Journals (Sweden)

    Sadique eSheik

    2012-02-01

    Full Text Available Many sounds of ecological importance, such as communication calls, are characterised by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamocortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real time and shows preferential responses, after exposure, to stimuli exhibiting particular spectrotemporal patterns. The availability of hardware on which the model can be implemented makes this a significant step towards the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  19. An area-efficient topology for VLSI implementation of Viterbi decoders and other shuffle-exchange type structures

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1991-01-01

    A topology for single-chip implementation of computing structures based on shuffle-exchange (SE)-type interconnection networks is presented. The topology is suited for structures with a small number of processing elements (i.e., 32-128) whose area cannot be neglected compared to the area required … The topology has been used in a VLSI implementation of the add-compare-select (ACS) module of a fully parallel K=7, R=1/2 Viterbi decoder. Both the floor-planning issues and some of the important algorithm and circuit-level aspects of this design are discussed. The chip has been designed and fabricated in a 2 … The interconnection network occupies 32% of the area.

  20. On the impact of communication complexity in the design of parallel numerical algorithms

    Science.gov (United States)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
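
    Hockney's model characterizes data movement by a fixed startup latency plus transfer time at an asymptotic bandwidth; the half-performance message size n_1/2 then falls out of the two parameters. A minimal Python sketch (parameter names are ours, chosen for illustration):

```python
def transfer_time(n_bytes, t0, r_inf):
    """Hockney-style cost of moving an n-byte message:
    startup latency t0 plus n_bytes / r_inf at asymptotic bandwidth r_inf."""
    return t0 + n_bytes / r_inf

def effective_bandwidth(n_bytes, t0, r_inf):
    # achieved bandwidth, which approaches r_inf only for large messages
    return n_bytes / transfer_time(n_bytes, t0, r_inf)

def n_half(t0, r_inf):
    # message size at which exactly half the asymptotic bandwidth is reached
    return t0 * r_inf
```

    For example, with a 1 µs startup and 1 GB/s asymptotic bandwidth, messages must exceed 1000 bytes before the link delivers more than half its peak rate.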

  1. Digital VLSI design with Verilog a textbook from Silicon Valley Polytechnic Institute

    CERN Document Server

    Williams, John Michael

    2014-01-01

    This book is structured as a step-by-step course of study along the lines of a VLSI integrated circuit design project.  The entire Verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer-deserializer, including synthesizable PLLs.  The author includes everything an engineer needs for in-depth understanding of the Verilog language:  Syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided in the downloadable files that accompany the book.  For readers with access to appropriate electronic design tools, all solutions can be developed, simulated, and synthesized as described in the book.   A partial list of design topics includes design partitioning, hierarchy decomposition, safe coding styles, back annotation, wrapper modules, concurrency, race conditions, assertion-based verification, clock synchronization, and design for test.   A concluding presentation of special topics inclu...

  2. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Science.gov (United States)

    McEwan, Alistair; van Schaik, André

    2003-12-01

    The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model, which can be tuned to behave similarly to the biological inner hair cell. This has important implications for our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  3. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI.

    Science.gov (United States)

    Yu, T; Sejnowski, T J; Cauwenberghs, G

    2011-10-01

    We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.

  4. A neuromorphic VLSI device for implementing 2-D selective attention systems.

    Science.gov (United States)

    Indiveri, G

    2001-01-01

    Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding limited-capacity processing systems with sensory inputs. It is found in many biological systems and can be a useful engineering tool for developing artificial systems that need to process sensory data in real time. In this paper we present a neuromorphic hardware model of a selective attention mechanism implemented on a very large scale integration (VLSI) chip, using analog circuits. The chip makes use of a spike-based representation for receiving input signals, transmitting output signals, and for shifting the selection of the attended input stimulus over time. It can be interfaced to neuromorphic sensors and actuators for implementing multichip selective attention systems. We describe the characteristics of the circuits used in the architecture and present experimental data measured from the system.
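
    The select-and-suppress cycle described above can be sketched in software as a winner-take-all with inhibition of return: attend to the most salient location, suppress it, and let attention shift to the next-strongest input. This is a conceptual toy, not the chip's analog circuitry; the suppression factor `ior` is an assumption:

```python
def attend_sequence(saliency, n_shifts, ior=0.5):
    """Sequentially select the most salient input location, then suppress
    it (inhibition of return) so attention shifts to the next-strongest."""
    s = list(saliency)
    visited = []
    for _ in range(n_shifts):
        winner = max(range(len(s)), key=s.__getitem__)  # winner-take-all
        visited.append(winner)
        s[winner] *= ior  # suppress the attended location
    return visited
```

    With saliencies [0.2, 0.9, 0.5], attention visits location 1 first, shifts to location 2, and then returns to 1 once its suppressed saliency again dominates.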

  5. Macrocell Builder: IP-Block-Based Design Environment for High-Throughput VLSI Dedicated Digital Signal Processing Systems

    Directory of Open Access Journals (Sweden)

    Urard Pascal

    2006-01-01

    Full Text Available We propose an efficient IP-block-based design environment for high-throughput VLSI systems. The flow generates a SystemC register-transfer-level (RTL) architecture, starting from a Matlab functional model described as a netlist of functional IP. The refinement model automatically inserts control structures to manage delays induced by the use of RTL IPs. It also inserts a control structure to coordinate the execution of parallel clocked IP. The delays may be managed by registers or by counters included in the control structure. The flow has been used successfully in three real-world DSP systems. The experiments show that the approach can produce efficient RTL architectures and saves a large amount of design time.

  6. State-of-the-art assessment of testing and testability of custom LSI/VLSI circuits. Volume 8: Fault simulation

    Science.gov (United States)

    Breuer, M. A.; Carlan, A. J.

    1982-10-01

    Fault simulation is widely used by industry in such applications as scoring the fault coverage of test sequences and constructing fault dictionaries. For use in testing VLSI circuits, a simulator is evaluated by its accuracy, i.e., its modelling capability. To be accurate, a simulator must employ multi-valued logic to represent unknown signal values, impedance states, and signal transitions; model circuit delays such as transport, rise/fall, and inertial delays; and support the fault modes of interest. Of the three basic fault simulators now in use (parallel, deductive, and concurrent), concurrent fault simulation appears most promising.
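
    Parallel fault simulation, one of the three techniques named above, packs the fault-free machine and several faulty machines into the bits of one machine word, so a single bitwise logic evaluation simulates all of them at once. A toy sketch for a three-fault list on f = (a AND b) OR (NOT c); the circuit and fault list are invented for illustration:

```python
WIDTH = 4                  # slot 0: fault-free; slots 1..3: one injected fault each
ALL = (1 << WIDTH) - 1

def spread(v):
    # replicate a scalar 0/1 input value across all simulation slots
    return ALL if v else 0

def inject(sig, s0_mask=0, s1_mask=0):
    # force stuck-at-0 / stuck-at-1 in the slots named by the masks
    return (sig & ~s0_mask & ALL) | s1_mask

def simulate(a, b, c):
    # fault list: slot 1 = a stuck-at-0, slot 2 = b stuck-at-1, slot 3 = f stuck-at-0
    A = inject(spread(a), s0_mask=0b0010)
    B = inject(spread(b), s1_mask=0b0100)
    C = spread(c)
    F = (A & B) | (~C & ALL)         # one evaluation covers all machines
    return inject(F, s0_mask=0b1000)

def detected(f_bits):
    # a fault is detected when its slot's output differs from slot 0
    good = f_bits & 1
    return [(slot, ((f_bits >> slot) & 1) != good) for slot in range(1, WIDTH)]
```

    For the input pattern (a, b, c) = (1, 1, 1) the fault-free output is 1, so a stuck-at-0 and f stuck-at-0 are detected, while b stuck-at-1 is not (b is already 1).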

  7. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    Science.gov (United States)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45-fold speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the best conventional result, it still provides competitive accuracy acceptable for practical use. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.
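
    The abstract does not disclose the fusion algorithm used. As a hedged illustration of accelerometer/gyroscope fusion in general, and not necessarily this paper's method, a first-order complementary filter is a common baseline: the gyroscope rate is trusted at short time scales and the accelerometer tilt angle at long time scales. All names and the blending constant `alpha` are our assumptions:

```python
def complementary_filter(accel_angles, gyro_rates, dt=0.01, alpha=0.98):
    """Fuse a gyro rate (accurate short-term, drifts long-term) with an
    accelerometer tilt angle (noisy short-term, drift-free long-term)
    into one orientation estimate per sample."""
    angle = accel_angles[0]          # initialize from the accelerometer
    out = []
    for a, g in zip(accel_angles, gyro_rates):
        # integrate the gyro, then pull gently toward the accel angle
        angle = alpha * (angle + g * dt) + (1 - alpha) * a
        out.append(angle)
    return out
```

    With a stationary device (constant accelerometer angle, zero gyro rate) the estimate holds steady at the accelerometer angle, while short gyro transients pass through almost unattenuated.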

  8. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    Science.gov (United States)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.
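
    The arithmetic idea behind the FNT — a transform over a Fermat prime in which powers of 2 serve as roots of unity, so every twiddle-factor multiplication reduces to a bit rotation in hardware — can be illustrated at small scale. The sketch below uses a 16-point transform modulo F_3 = 257 rather than the paper's 128-point FNT, and a naive O(N²) transform rather than a pipeline:

```python
M = 257      # Fermat prime F_3 = 2^8 + 1
ROOT = 2     # 2 has multiplicative order 16 modulo 257
N = 16
INV_N = 241  # 16 * 241 = 3856 ≡ 1 (mod 257)

def fnt(x, root):
    # naive O(N^2) number-theoretic transform; in hardware the
    # multiplications by powers of 2 become bit rotations
    return [sum(x[n] * pow(root, k * n, M) for n in range(N)) % M
            for k in range(N)]

def cyclic_convolution(a, b):
    # convolution theorem over Z_257: transform, multiply pointwise,
    # inverse-transform, and scale by 1/N
    A, B = fnt(a, ROOT), fnt(b, ROOT)
    inv_root = pow(ROOT, N - 1, M)   # 2^15 is the inverse of 2 mod 257
    C = fnt([(x * y) % M for x, y in zip(A, B)], inv_root)
    return [(v * INV_N) % M for v in C]
```

    Zero-padding both inputs (the overlap-save idea in miniature) makes the cyclic result equal to the linear convolution, provided the output values stay below the modulus, which is exactly the dynamic-range limitation the paper's generalized overlap-save method alleviates.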

  9. Low-Complexity Hierarchical Mode Decision Algorithms Targeting VLSI Architecture Design for the H.264/AVC Video Encoder

    Directory of Open Access Journals (Sweden)

    Guilherme Corrêa

    2012-01-01

    Full Text Available In H.264/AVC, the encoding process can occur according to one of the 13 intraframe coding modes or according to one of the 8 available interframe block sizes, besides the SKIP mode. In the Joint Model reference software, the choice of the best mode is performed through exhaustive executions of the entire encoding process, which significantly increases the encoder's computational complexity and sometimes even forbids its use in real-time applications. Considering this context, this work proposes a set of heuristic algorithms targeting hardware architectures that lead to earlier selection of one encoding mode. The number of repetitions of the encoding process is reduced by 47 times, at the cost of a relatively small loss in compression performance. When compared to other works, the fast hierarchical mode decision results are markedly more satisfactory in terms of computational complexity reduction, quality, and bit rate. The low-complexity mode decision architecture proposed is thus a very good option for real-time coding of high-resolution videos. The solution is especially interesting for embedded and mobile applications with support for multimedia systems, since it yields good compression rates and image quality with a very large reduction in encoder complexity.

  10. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Directory of Open Access Journals (Sweden)

    Alistair McEwan

    2003-06-01

    Full Text Available The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model, which can be tuned to behave similarly to the biological inner hair cell. This has important implications for our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  11. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In VLSI physical design optimization, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for physical design components such as partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits across these physical design components using a hierarchical approach of evolutionary algorithms. The goals of minimizing the delay in partitioning, the silicon area in floorplanning, the layout area in placement, and the wirelength in routing have an indirect influence on other criteria such as power, clock, speed, and cost. A hybrid evolutionary algorithm, which includes one or more local search steps within each evolutionary cycle, is applied in each of these phases to minimize area and interconnect length. This approach combines a genetic algorithm and simulated annealing in a hierarchical design to attain the objective. The hybrid approach can quickly produce optimal solutions for the popular benchmarks.
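
    As a hedged sketch of the hybrid idea — a genetic outer loop with a local search (annealing-style descent) step applied to each offspring — the following toy memetic optimizer places six modules on a line to minimize total wirelength. The netlist, population size, and all parameters are invented for illustration and are far smaller than any real benchmark:

```python
import random

NETS = [(0, 1), (1, 2), (2, 3), (0, 3), (3, 4), (4, 5)]  # toy 2-pin netlist
M = 6  # modules placed into M slots on a line

def wirelength(perm):
    # total Manhattan wirelength of a linear placement
    pos = {mod: slot for slot, mod in enumerate(perm)}
    return sum(abs(pos[a] - pos[b]) for a, b in NETS)

def local_search(perm):
    # greedy pairwise-swap descent, standing in for the SA refinement step
    best = perm[:]
    improved = True
    while improved:
        improved = False
        for i in range(M):
            for j in range(i + 1, M):
                cand = best[:]
                cand[i], cand[j] = cand[j], cand[i]
                if wirelength(cand) < wirelength(best):
                    best, improved = cand, True
    return best

def hybrid_ga(pop_size=8, generations=20, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(range(M), M) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=wirelength)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        for p in parents:
            child = p[:]
            i, j = rng.sample(range(M), 2)
            child[i], child[j] = child[j], child[i]  # mutation = swap
            children.append(local_search(child))     # Lamarckian local step
        pop = parents + children
    return min(pop, key=wirelength)
```

    The Lamarckian structure (offspring are locally optimized before re-entering the population) is what distinguishes a memetic/hybrid algorithm from a plain GA.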

  12. 10 K gate I(2)L and 1 K component analog compatible bipolar VLSI technology - HIT-2

    Science.gov (United States)

    Washio, K.; Watanabe, T.; Okabe, T.; Horie, N.

    1985-02-01

    An advanced analog/digital bipolar VLSI technology that combines on the same chip 2-ns 10 K I(2)L gates with 1 K analog devices is proposed. The new technology, called high-density integration technology-2, is based on a new structural concept that consists of three major techniques: shallow grooved isolation, I(2)L active layer etching, and I(2)L current gain increase. I(2)L circuits with an 80-MHz maximum toggle frequency have been developed compatibly with n-p-n transistors having a BV(CE0) of more than 10 V and an f(T) of 5 GHz, and lateral p-n-p transistors having an f(T) of 150 MHz.

  13. Radiation hardness tests with a demonstrator preamplifier circuit manufactured in silicon on sapphire (SOS) VLSI technology

    International Nuclear Information System (INIS)

    Bingefors, N.; Ekeloef, T.; Eriksson, C.; Paulsson, M.; Moerk, G.; Sjoelund, A.

    1992-01-01

    Samples of the preamplifier circuit, as well as of separate n- and p-channel transistors of the type contained in the circuit, were irradiated with gammas from a 60Co source up to an integrated dose of 3 Mrad (30 kGy). The VLSI manufacturing technology used is the SOS4 process of ABB Hafo. A first analysis of the tests shows that the performance of the amplifier remains practically unaffected by the radiation for total doses up to 1 Mrad. At higher doses up to 3 Mrad the circuit amplification factor decreases by a factor between 4 and 5, whereas the output noise level remains unchanged. It is argued that it may be possible to reduce this decrease in amplification factor in the future by further optimizing the amplifier circuit design. (orig.)

  14. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Directory of Open Access Journals (Sweden)

    Ying-Lun Chen

    2015-08-01

    Full Text Available A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on the nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.

  15. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm.

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-08-13

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.

  16. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-01-01

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193
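
    The two kernels named in the abstract have compact textbook forms: the nonlinear energy operator ψ[n] = x[n]² − x[n−1]·x[n+1] with a detection threshold proportional to its mean, and the generalized Hebbian (Sanger) update, shown here for a single component, where it reduces to Oja's rule. A minimal Python sketch; the threshold multiplier `k` is a common convention, not the paper's value:

```python
def neo(x):
    # nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
    inner = [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
    return [0.0] + inner + [0.0]

def detect_spikes(x, k=8.0):
    # threshold the NEO output at a multiple of its mean energy
    psi = neo(x)
    thr = k * sum(psi) / len(psi)
    return [n for n, p in enumerate(psi) if p > thr]

def gha_step(w, x, eta=0.01):
    # one-component generalized Hebbian (Sanger) update, i.e. Oja's rule:
    # dw = eta * y * (x - y * w), with y = w . x
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
```

    Note that a unit weight vector aligned with the data is a fixed point of the GHA update, which is why the extracted weights converge to principal components.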

  17. VLSI Architecture and Design

    OpenAIRE

    Johnsson, Lennart

    1980-01-01

    Integrated circuit technology is rapidly approaching a state where feature sizes of one micron or less are tractable. Chip sizes are increasing slowly. These two developments result in considerably increased complexity in chip design. The physical characteristics of integrated circuit technology are also changing. The cost of communication will become dominant, making new architectures and algorithms both feasible and desirable. A large number of processors on a single chip will be possible …

  18. Complexity Results in Epistemic Planning

    DEFF Research Database (Denmark)

    Bolander, Thomas; Jensen, Martin Holm; Schwarzentruber, Francois

    2015-01-01

    Epistemic planning is a very expressive framework that extends automated planning by the incorporation of dynamic epistemic logic (DEL). We provide complexity results on the plan existence problem for multi-agent planning tasks, focusing on purely epistemic actions with propositional preconditions...

  19. Prototype architecture for a VLSI level zero processing system. [Space Station Freedom

    Science.gov (United States)

    Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.

    1989-01-01

    The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Due to the new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, easy maintainability, and increased reliability. Though extensive control functions have been implemented in hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with a combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream will be steered into different processing channels in accordance with their virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rates exceeding 150 Mbps will be available from commercial vendors due to the advance in disk drive technology.

  20. How complex can integrated optical circuits become?

    NARCIS (Netherlands)

    Smit, M.K.; Hill, M.T.; Baets, R.G.F.; Bente, E.A.J.M.; Dorren, H.J.S.; Karouta, F.; Koenraad, P.M.; Koonen, A.M.J.; Leijtens, X.J.M.; Nötzel, R.; Oei, Y.S.; Waardt, de H.; Tol, van der J.J.G.M.; Khoe, G.D.

    2007-01-01

    The integration scale in Photonic Integrated Circuits will be pushed to VLSI-level in the coming decade. This will bring major changes in both application and manufacturing. In this paper developments in Photonic Integration are reviewed and the limits for reduction of device dimensions are …

  1. A Single Chip VLSI Implementation of a QPSK/SQPSK Demodulator for a VSAT Receiver Station

    Science.gov (United States)

    Kwatra, S. C.; King, Brent

    1995-01-01

    This thesis presents a VLSI implementation of a QPSK/SQPSK demodulator. It is designed to be employed in a VSAT earth station that utilizes the FDMA/TDM link. A single-chip architecture is used to enable this chip to be easily employed in the VSAT system. This demodulator contains lowpass filters, integrate-and-dump units, unique word detectors, a timing recovery unit, a phase recovery unit, and a down-conversion unit. The design stages start with a functional representation of the system in the C programming language, then progress to a register-based representation in VHDL. The layout components are designed based on these VHDL models and simulated. Component generators are developed for the adder, multiplier, read-only memory, and serial access memory in order to shorten the design time. These sub-components are then block-routed to form the main components of the system. The main components are block-routed to form the final demodulator.

  2. CASTOR a VLSI CMOS mixed analog-digital circuit for low noise multichannel counting applications

    International Nuclear Information System (INIS)

    Comes, G.; Loddo, F.; Hu, Y.; Kaplon, J.; Ly, F.; Turchetta, R.; Bonvicini, V.; Vacchi, A.

    1996-01-01

    In this paper we present the design and first experimental results of a VLSI mixed analog-digital 1.2 μm CMOS circuit (CASTOR) for multichannel radiation detector applications demanding low-noise amplification and counting of radiation pulses. This circuit is meant to be connected to pixel-like detectors. Imaging can be obtained by counting the number of hits in each pixel during a user-controlled exposure time. Each channel of the circuit features an analog and a digital part. In the former, a charge preamplifier is followed by a CR-RC shaper with an output buffer and a threshold discriminator. In the digital part, a 16-bit counter is present together with some control logic. The readout of the counters is done serially on a common tri-state output. Daisy-chaining is possible. A 4-channel prototype has been built. This prototype has been optimised for use in the digital radiography Syrmep experiment at the Elettra synchrotron machine in Trieste (Italy); its main design parameters are: shaping time of about 850 ns, gain of 190 mV/fC, and ENC (e- rms) = 60 + 17·C (pF). The counting rate per channel, limited by the analog part, can be as high as about 200 kHz. Characterisation of the circuit and first tests with silicon microstrip detectors are presented. They show that the circuit works according to design specifications and can be used for imaging applications. (orig.)
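
    The reported noise figure is a linear function of the detector capacitance: a 60-electron floor plus a 17 e-/pF slope. Trivially, in code (function name is ours):

```python
def enc_electrons_rms(c_pf):
    """Equivalent noise charge of the front end as reported for CASTOR:
    ENC (e- rms) = 60 + 17 * C, with the capacitance C in pF."""
    return 60 + 17 * c_pf
```

    So a 5 pF pixel-like detector would contribute an expected ENC of 145 electrons rms.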

  3. VLSI Design of a Variable-Length FFT/IFFT Processor for OFDM-Based Communication Systems

    Directory of Open Access Journals (Sweden)

    Jen-Chih Kuo

    2003-12-01

    Full Text Available The technique of orthogonal frequency division multiplexing (OFDM) is famous for its robustness against frequency-selective fading channels. This technique has been widely used in many wired and wireless communication systems. In general, the fast Fourier transform (FFT) and inverse FFT (IFFT) operations are used as the modulation/demodulation kernel in OFDM systems, and the sizes of the FFT/IFFT operations vary in different applications of OFDM systems. In this paper, we design and implement a variable-length prototype FFT/IFFT processor to cover different specifications of OFDM applications. The cached-memory FFT architecture is our suggested VLSI system architecture for designing the prototype FFT/IFFT processor with low power consumption in mind. We also implement the twiddle factor butterfly processing element (PE) based on the coordinate rotation digital computer (CORDIC) algorithm, which avoids the use of a conventional multiplication-and-accumulation unit and instead evaluates the trigonometric functions using only add-and-shift operations. Finally, we implement a variable-length prototype FFT/IFFT processor with TSMC 0.35 μm 1P4M CMOS technology. The simulation results show that the chip can perform 64- to 2048-point FFT/IFFT operations up to an 80 MHz operating frequency, which meets the speed requirement of most OFDM standards such as WLAN, ADSL, VDSL (256∼2K), DAB, and 2K-mode DVB.
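
    The CORDIC principle the PE relies on — evaluating sin/cos by micro-rotations whose tangents are powers of two, so that fixed-point hardware needs only adders and shifters — can be sketched as follows. This is a floating-point model for clarity, not the chip's fixed-point datapath; the iteration count is an assumption:

```python
import math

ITERS = 24
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]  # micro-rotation angles
K = 1.0
for i in range(ITERS):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # constant gain of the rotations

def cordic_cos_sin(theta):
    """Rotation-mode CORDIC: drive the residual angle z to zero with
    micro-rotations by ±atan(2^-i); each step uses only add/shift-style
    operations (the 2^-i factors are shifts in hardware)."""
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(ANGLES):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x * K, y * K  # compensate the accumulated gain
```

    The convergence range is |theta| ≲ 1.74 rad (the sum of the micro-rotation angles); a full twiddle-factor generator folds other quadrants into this range by symmetry.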

  4. Implementation of neuromorphic systems: from discrete components to analog VLSI chips (testing and communication issues).

    Science.gov (United States)

    Dante, V; Del Giudice, P; Mattia, M

    2001-01-01

    We review a series of implementations of electronic devices aiming at imitating to some extent structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a special purpose, asynchronous bus for communication among neuromorphic chips; a short and preliminary account of an application-oriented device, taking advantage of the above communication infrastructure.

  5. Robust working memory in an asynchronously spiking neural network realized in neuromorphic VLSI

    Directory of Open Access Journals (Sweden)

    Massimiliano eGiulioni

    2012-02-01

    We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of ‘high’ and ‘low’ firing activity. Depending on the overall excitability, transitions to the ‘high’ state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the ‘high’ state retains a working memory of a stimulus until well after its release. In the latter case, ‘high’ states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated corrupted ‘high’ states comprising neurons of both excitatory populations. Within a basin of attraction, the network dynamics corrects such states and re-establishes the prototypical ‘high’ state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  6. Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI.

    Science.gov (United States)

    Giulioni, Massimiliano; Camilleri, Patrick; Mattia, Maurizio; Dante, Vittorio; Braun, Jochen; Del Giudice, Paolo

    2011-01-01

    We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of "high" and "low"-firing activity. Depending on the overall excitability, transitions to the "high" state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the "high" state retains a "working memory" of a stimulus until well after its release. In the latter case, "high" states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated "corrupted" "high" states comprising neurons of both excitatory populations. Within a "basin of attraction," the network dynamics "corrects" such states and re-establishes the prototypical "high" state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  7. Development of a VLSI integrated circuit providing time measurement and selective readout in the front-end electronics of the DIRC counter for the BaBar experiment at SLAC

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1999-07-01

    This thesis deals with the design, development, and testing of a VLSI integrated circuit providing selective readout and time measurement for 16 channels. The circuit was developed for a particle physics experiment, BaBar, taking place at SLAC (Stanford Linear Accelerator Center). The first part describes the physics goals of the experiment, the electronics architecture, and the place of the developed circuit in the research program. The second part presents the design of the circuit, the prototypes leading to the final design, and the validation tests. (A.L.B.)

  8. Controlling Underwater Robots with Electronic Nervous Systems

    Directory of Open Access Journals (Sweden)

    Joseph Ayers

    2010-01-01

    We are developing robot controllers based on biomimetic design principles. The goal is to realise the adaptive capabilities of the animal models in natural environments. We report feasibility studies of a hybrid architecture that instantiates a command and coordinating level with computed discrete-time map-based (DTM) neuronal networks and the central pattern generators with analogue VLSI (Very Large Scale Integration) electronic neuron (aVLSI) networks. DTM networks are realised using neurons based on a 1-D or 2-D map with two additional parameters that define silent, spiking and bursting regimes. Electronic neurons (ENs) based on Hindmarsh–Rose (HR) dynamics can be instantiated in analogue VLSI and exhibit similar behaviour to those based on discrete components. We have constructed locomotor central pattern generators (CPGs) with aVLSI networks that can be modulated to select different behaviours on the basis of selective command input. The two technologies can be fused by interfacing the signals from the DTM circuits directly to the aVLSI CPGs. Using DTMs, we have been able to simulate complex sensory fusion for rheotaxic behaviour based on both hydrodynamic and optical flow senses. We will illustrate aspects of controllers for ambulatory biomimetic robots. These studies indicate that it is feasible to fabricate an electronic nervous system controller integrating both aVLSI CPGs and layered DTM exteroceptive reflexes.

  9. CMOS VLSI Active-Pixel Sensor for Tracking

    Science.gov (United States)

    Pain, Bedabrata; Sun, Chao; Yang, Guang; Heynssens, Julie

    2004-01-01

    An architecture for a proposed active-pixel sensor (APS) and a design to implement the architecture in a complementary metal oxide semiconductor (CMOS) very-large-scale integrated (VLSI) circuit provide for some advanced features that are expected to be especially desirable for tracking pointlike features of stars. The architecture would also make this APS suitable for robotic-vision and general pointing and tracking applications. CMOS imagers in general are well suited for pointing and tracking because they can be configured for random access to selected pixels and to provide readout from windows of interest within their fields of view. However, until now, the architectures of CMOS imagers have not supported multiwindow operation or low-noise data collection. Moreover, smearing and motion artifacts in collected images have made prior CMOS imagers unsuitable for tracking applications. The proposed CMOS imager (see figure) would include an array of 1,024 by 1,024 pixels containing high-performance photodiode-based APS circuitry. The pixel pitch would be 9 μm. The operations of the pixel circuits would be sequenced and otherwise controlled by an on-chip timing and control block, which would enable the collection of image data, during a single frame period, from either the full frame (that is, all 1,024 × 1,024 pixels) or from within as many as 8 different arbitrarily placed windows as large as 8 by 8 pixels each. A typical prior CMOS APS operates in a row-at-a-time (“rolling-shutter”) readout mode, which gives rise to exposure skew. In contrast, the proposed APS would operate in a sample-first/read-later mode, suppressing rolling-shutter effects. In this mode, the analog readout signals from the pixels corresponding to the windows of interest (which windows, in the star-tracking application, would presumably contain guide stars) would be sampled rapidly by routing them through a programmable diagonal switch array to an on-chip parallel analog memory array.

  10. Preliminary results for complexation of Pu with humic acid

    Energy Technology Data Exchange (ETDEWEB)

    Guczi, J.; Szabo, G. [National Research Inst. for Radiobiology and Radiohygiene, Budapest, H-1775 (Hungary)]. e-mail: guczi@hp.osski.hu; Reiller, P. [CEA, CE Saclay, Nuclear Energy Division/DPC/SERC, Laboratoire de Speciation des Radionucleides et des Molecules, F-91191 Gif-sur-Yvette (France); Bulman, R.A. [Radiation Protection Division, Health Protection Agency, Chilton, Didcot (United Kingdom); Geckeis, H. [FZK - Inst. fuer Nukleare Entsorgung, Karlsruhe (Germany)

    2007-06-15

    Interaction of plutonium with humic substances has been investigated by a batch method using surface-bound humic acid in perchlorate solutions at pH 4-6. By using these novel solid phases, complexing capacities and interaction constants are obtained, and the complexing behavior of plutonium is analyzed. Pu(IV)-humate conditional stability constants have been evaluated from the data obtained in these experiments by non-linear regression of the binding isotherms. The results have been interpreted in terms of complexes of 1:1 stoichiometry.

  11. Design and implementation of efficient low complexity biomedical artifact canceller for nano devices

    Directory of Open Access Journals (Sweden)

    Md Zia Ur RAHMAN

    2016-07-01

    In the current scenario, with the rapid development of communication technology, remote health-care monitoring has become an intense research area. In remote health-care monitoring, the primary aim is to provide the doctor with high-resolution biomedical data. In order to cancel various artifacts in the clinical environment, we propose in this paper some efficient adaptive noise cancellation techniques. To obtain low computational complexity, we combine clipping of the data or the error with the least mean square (LMS) algorithm. This results in the sign-regressor LMS (SRLMS), sign LMS (SLMS) and sign-sign LMS (SSLMS) algorithms. Using these algorithms, we design very-large-scale integration (VLSI) architectures of various biomedical noise cancellers (BNCs). In addition, the filtering capabilities of the proposed implementations are measured using real biomedical signals. Among the various BNCs tested, the SRLMS-based BNC is found to be better with reference to convergence speed, filtering capability and computational complexity. The main advantage of this technique is that it needs only one multiplication to compute the next weight; in this manner the SRLMS-based BNC is independent of filter length with reference to its computations. The average signal-to-noise ratios achieved in the noise cancellation experiments are 7.1059 dB, 7.1776 dB, 6.2795 dB and 5.8847 dB for the BNCs based on the LMS, SRLMS, SLMS and SSLMS algorithms, respectively. Based on the filtering characteristics, convergence and computational complexity, the proposed SRLMS-based BNC architecture is well suited for nanotechnology applications.
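
The sign-regressor update this record describes clips the input vector to its sign, so the per-tap update reduces to adding or subtracting the shared product mu*e. A hypothetical NumPy model of the filter (my sketch, not the paper's VLSI architecture):

```python
import numpy as np

def srlms_filter(x, d, num_taps=8, mu=0.01):
    """Sign-regressor LMS adaptive filter: w += mu * e * sign(x_vec).
    Because sign(x) is in {-1, 0, +1}, only mu*e needs a real multiply,
    independent of the filter length."""
    w = np.zeros(num_taps)
    y = np.zeros(len(x))          # filter output
    e = np.zeros(len(x))          # error against the desired signal d
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]   # most recent sample first
        y[n] = w @ x_vec
        e[n] = d[n] - y[n]
        w += mu * e[n] * np.sign(x_vec)           # sign-regressor update
    return y, e, w
```

Identifying a short FIR system with this filter shows the taps converging to the true coefficients while the update hardware stays multiplier-free per tap.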

  12. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems

    Science.gov (United States)

    Broccard, Frédéric D.; Joshi, Siddharth; Wang, Jun; Cauwenberghs, Gert

    2017-08-01

    Objective. Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation and is thus subjected to different constraints from its digital counterpart regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. Approach. This article highlights the current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. Main results. Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. Significance. Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. Primarily intended as a

  13. Complex VLSI Feature Comparison for Commercial Microelectronics Verification

    Science.gov (United States)

    2014-03-27

    corruption, tampering and counterfeiting due to these technologies’ extremely sensitive purposes. Adversarial intervention in the IC design and...counterfeiting in its motive: whereas counterfeiting is usually motivated by greed, tampering is an act of espionage or sabotage [26]. Finally, poor

  14. Test beam results from the prototype L3 silicon microvertex detector

    International Nuclear Information System (INIS)

    Adam, A.; Adriani, O.; Ahlen, S.

    1993-11-01

    We report test beam results on the overall system performance of two modules of the L3 Silicon Microvertex Detector exposed to a 50 GeV pion beam. Each module consists of two AC coupled double-sided silicon strip detectors equipped with VLSI readout electronics. The associated data acquisition system comprises an 8 bit FADC, an optical data transmission circuit, a specialized data reduction processor and a synchronization module. A spatial resolution of 7.5 μm and 14 μm for the two coordinates and a detection efficiency in excess of 99% are measured. (orig.)

  15. Recent Results with a segmented Hybrid Photon Detector for a novel parallax-free PET Scanner for Brain Imaging

    CERN Document Server

    Braem, André; Joram, Christian; Mathot, Serge; Séguinot, Jacques; Weilhammer, Peter; Ciocia, F; De Leo, R; Nappi, E; Vilardi, I; Argentieri, A; Corsi, F; Dragone, A; Pasqua, D

    2007-01-01

    We describe the design, fabrication and test results of a segmented Hybrid Photon Detector with integrated auto-triggering front-end electronics. Both the photodetector and its VLSI readout electronics are custom designed and have been tailored to the requirements of a recently proposed novel geometrical concept of a Positron Emission Tomograph. Emphasis is laid on the PET specific features of the device. The detector has been fabricated in the photocathode facility at CERN.

  16. Evaluation framework for K-best sphere decoders

    KAUST Repository

    Shen, Chungan; Eltawil, Ahmed M.; Salama, Khaled N.

    2010-01-01

    or receive antennas. Tree-searching type decoder structures such as Sphere decoder and K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in literature

  17. Petri Nets

    Indian Academy of Sciences (India)

    In Part 1 of this two-part article, we have seen im- ..... mable logic controller and VLSI arrays, office automation systems, workflow management systems, ... complex discrete event and real-time systems; and Petri nets.

  18. Best Proximity Point Results in Complex Valued Metric Spaces

    Directory of Open Access Journals (Sweden)

    Binayak S. Choudhury

    2014-01-01

    complex valued metric spaces. We treat the problem as that of finding the global optimal solution of a fixed point equation although the exact solution does not in general exist. We also define and use the concept of P-property in such spaces. Our results are illustrated with examples.

  19. A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory.

    Science.gov (United States)

    Chicca, E; Badoni, D; Dante, V; D'Andreagiovanni, M; Salina, G; Carota, L; Fusi, S; Del Giudice, P

    2003-01-01

    Electronic neuromorphic devices with on-chip, on-line learning should be able to modify quickly the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full custom analog very large-scale integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses which can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During the stimulation the synapse undergoes quick temporary changes through the activities of the pre- and postsynaptic neurons; those changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate a randomness suited to drive the stochastic selection mechanism. We check by a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.
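
The stochastic-learning idea with bistable synapses described above can be illustrated with a toy model (my construction, not the chip's circuit): each stimulation proposes potentiation where pre- and postsynaptic activities agree and depression where they disagree, but the binary synapse actually switches state only with small probability, so learning is slow and stored memories are preserved.

```python
import numpy as np

def present_pattern(J, xi, p_transition, rng):
    """One stimulation step for a matrix of bistable (0/1) synapses J.
    Where pre- and post-units agree (xi_i * xi_j > 0) potentiation to 1 is
    proposed; where they disagree, depression to 0. Each proposal succeeds
    only with probability p_transition (slow stochastic learning).
    Toy model inspired by the abstract, not the actual VLSI design."""
    agree = np.outer(xi, xi) > 0
    flip = rng.random(J.shape) < p_transition
    J = J.copy()
    J[agree & flip] = 1.0          # stochastic potentiation
    J[~agree & flip] = 0.0         # stochastic depression
    return J

rng = np.random.default_rng(1)
n = 50
xi = rng.choice([-1, 1], size=n)                    # pre/post activity pattern
J = rng.integers(0, 2, size=(n, n)).astype(float)   # random initial synapses
for _ in range(200):                                # repeated slow presentations
    J = present_pattern(J, xi, p_transition=0.05, rng=rng)
```

A single presentation barely changes the matrix, yet after many repetitions the binary synapses encode the pattern's correlation structure; that is the stability-plasticity compromise the record refers to.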

  20. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    Science.gov (United States)

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects in the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods.

  1. How to build VLSI-efficient neural chips

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-02-01

    This paper presents several upper and lower bounds on the number of bits required for solving a classification problem, as well as ways in which these bounds can be used to efficiently build neural network chips. The focus is on complexity aspects pertaining to neural networks: (1) size complexity and depth (size) tradeoffs, and (2) precision of weights and thresholds as well as limited interconnectivity. These show difficult problems, with exponential growth in either space (precision and size) and/or time (learning and depth), when using neural networks for solving general classes of problems (particular cases may enjoy better performance). The bounds on the number of bits required for solving a classification problem represent the first step of a general class of constructive algorithms, showing how the quantization of the input space can be done in O(m²n) steps, where m is the number of examples and n is the number of dimensions. The second step of the algorithm finds its roots in the implementation of a class of Boolean functions using threshold gates. It is substantiated by mathematical proofs for the size O(mn/Δ) and the depth O[log(mn)/log Δ] of the resulting network (here Δ is the maximum fan-in). Using the fan-in as a parameter, a full class of solutions can be designed. The third step of the algorithm represents a reduction of the size and an increase of the network's generalization capabilities. Extensions using analogue comparisons allow for real inputs and increase the generalization capabilities at the expense of longer training times. Finally, several solutions which can lower the size of the resulting neural network are detailed. The interesting aspect is that they are obtained for limited, or even constant, fan-ins. In support of these claims many simulations have been performed and are called upon.

  2. Complex decision-making: initial results of an empirical study

    OpenAIRE

    Pier Luigi Baldi

    2011-01-01

    A brief survey of key literature on emotions and decision-making introduces an empirical study of a group of university students exploring the effects of decision-making complexity on error risk. The results clearly show that decision-making under stress in the experimental group produces significantly more errors than in the stress-free control group.

  3. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the high-efficiency video coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meet the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
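
As background for the SAO mode decision discussed in this record: the edge-offset part of HEVC's SAO classifies each sample by comparing it with its two neighbors along a chosen direction and adds a signalled offset per category. A simplified 1-D sketch of that classification (my illustration, not the paper's architecture):

```python
import numpy as np

def sao_edge_offset_row(samples, offsets):
    """HEVC-style SAO edge-offset filtering along one direction (1-D model).
    EdgeIdx categories: 1 = local minimum, 2 = concave edge, 3 = convex edge,
    4 = local maximum; monotonic/flat samples (EdgeIdx 0) are left unchanged.
    offsets is the list of four signalled offsets, one per category."""
    out = samples.astype(int).copy()
    for i in range(1, len(samples) - 1):
        a = int(np.sign(int(samples[i]) - int(samples[i - 1])))
        b = int(np.sign(int(samples[i]) - int(samples[i + 1])))
        s = a + b
        if s == -2:   cat = 1    # below both neighbors: local minimum
        elif s == -1: cat = 2    # below one, equal to the other
        elif s == 1:  cat = 3    # above one, equal to the other
        elif s == 2:  cat = 4    # above both neighbors: local maximum
        else:
            continue             # EdgeIdx 0: no offset applied
        out[i] += offsets[cat - 1]
    return out
```

The encoder-side mode decision the paper accelerates amounts to evaluating the rate-distortion cost of such offset choices for every classification mode.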

  4. Complex decision-making: initial results of an empirical study

    Directory of Open Access Journals (Sweden)

    Pier Luigi Baldi

    2011-09-01

    A brief survey of key literature on emotions and decision-making introduces an empirical study of a group of university students exploring the effects of decision-making complexity on error risk. The results clearly show that decision-making under stress in the experimental group produces significantly more errors than in the stress-free control group.

  5. Results of complex treatment of Hodgkin's disease

    International Nuclear Information System (INIS)

    Kolygin, B.A.; Lebedev, S.V.; Borodina, A.F.; Kochurova, N.V.; Malinin, A.P.; Safonova, S.A.; Punanov, Yu.A.

    2000-01-01

    Long-term results of complex treatment (polychemotherapy plus radiotherapy) were evaluated to identify prognostic factors that may be applied for stratification into risk groups. A group of 334 children up to 15 years of age with lymphogranulomatosis (Hodgkin's disease), subjected to not less than 2 cycles of induction polychemotherapy and consolidating radiotherapy, is analyzed. The irradiation was conducted with the radiotherapeutic devices ROCUS, LUE-25 and LUEV-15 M1. Complete remission after the treatment program was achieved in 95.1% of the patients and partial remission in 6.3%; no effect was noted in 0.6% of the patients. Actuarial 10-year survival was 85.9%, and the frequency of relapse-free course was 74.3%.

  6. Microfluidic very large-scale integration for biochips: Technology, testing and fault-tolerant design

    DEFF Research Database (Denmark)

    Araci, Ismail Emre; Pop, Paul; Chakrabarty, Krishnendu

    2015-01-01

    Microfluidic biochips are replacing the conventional biochemical analyzers by integrating all the necessary functions for biochemical analysis using microfluidics. Biochips are used in many application areas, such as in vitro diagnostics, drug discovery, biotech and ecology. The focus of this paper is on continuous-flow biochips, where the basic building block is a microvalve. By combining these microvalves, more complex units such as mixers, switches, and multiplexers can be built, hence the name of the technology, “microfluidic Very Large-Scale Integration” (mVLSI). The paper presents the state-of-the-art in mVLSI platforms and emerging research challenges in the area of continuous-flow microfluidics, focusing on testing techniques and fault-tolerant design.

  7. Design and Implementation of a Sort-Free K-Best Sphere Decoder

    KAUST Repository

    Mondal, Sudip

    2012-10-18

    This paper describes the design and VLSI architecture for a 4x4 breadth first K-Best MIMO decoder using a 64 QAM scheme. A novel sort free approach to path extension, as well as quantized metrics result in a high throughput VLSI architecture with lower power and area consumption compared to state of the art published systems. Functionality is confirmed via an FPGA implementation on a Xilinx Virtex II Pro FPGA. Comparison of simulation and measurements are given and FPGA utilization figures are provided. Finally, VLSI architectural tradeoffs are explored for a synthesized ASIC implementation in a 65nm CMOS technology.
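
A breadth-first K-best search of the kind this record describes keeps, at each tree level, only the K partial symbol vectors with the smallest accumulated partial Euclidean distance. A small sketch over an upper-triangular system (my illustration; for clarity it uses an explicit sort, which is exactly the step the paper's sort-free approach eliminates):

```python
import numpy as np

def k_best_detect(R, z, constellation, K=4):
    """Breadth-first K-best tree search for R @ s ≈ z, with R upper
    triangular (e.g. from a QR decomposition of the channel matrix).
    Detection proceeds from the last antenna (bottom row of R) upward."""
    n = R.shape[0]
    candidates = [(0.0, [])]                    # (accumulated distance, symbols)
    for level in range(n - 1, -1, -1):
        expanded = []
        for ped, syms in candidates:
            for s in constellation:
                trial = [s] + syms              # symbols for rows level..n-1
                interference = sum(R[level, level + j] * trial[j]
                                   for j in range(len(trial)))
                d = ped + abs(z[level] - interference) ** 2
                expanded.append((d, trial))
        expanded.sort(key=lambda c: c[0])       # replaced by a selection
        candidates = expanded[:K]               # network in sort-free designs
    return np.array(candidates[0][1])           # best full-length candidate
```

For a small system where K covers the whole first level, the search coincides with exhaustive maximum-likelihood detection; the hardware interest lies in how cheaply the K survivors are selected.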

  8. BioCMOS Interfaces and Co-Design

    CERN Document Server

    Carrara, Sandro

    2013-01-01

    The application of CMOS circuits and ASIC VLSI systems to problems in medicine and system biology has led to the emergence of Bio/CMOS Interfaces and Co-Design as an exciting and rapidly growing area of research. The mutual inter-relationships between VLSI-CMOS design and the biophysics of molecules interfacing with silicon and/or onto metals has led to the emergence of the interdisciplinary engineering approach to Bio/CMOS interfaces. This new approach, facilitated by 3D circuit design and nanotechnology, has resulted in new concepts and applications for VLSI systems in the bio-world. This book offers an invaluable reference to the state-of-the-art in Bio/CMOS interfaces. It describes leading-edge research in the field of CMOS design and VLSI development for applications requiring integration of biological molecules onto the chip. It provides multidisciplinary content ranging from biochemistry to CMOS design in order to address Bio/CMOS interface co-design in bio-sensing applications.

  9. Lukasiewicz-Moisil Many-Valued Logic Algebra of Highly-Complex Systems

    Directory of Open Access Journals (Sweden)

    James F. Glazebrook

    2010-06-01

    The fundamentals of Lukasiewicz-Moisil logic algebras and their applications to complex genetic network dynamics and highly complex systems are presented in the context of a categorical ontology theory of levels, Medical Bioinformatics and self-organizing, highly complex systems. Quantum automata were defined in refs. [2] and [3] as generalized, probabilistic automata with quantum state spaces [1]. Their next-state functions operate through transitions between quantum states defined by the quantum equations of motion in the Schrödinger representation, with both initial and boundary conditions in space-time. A new theorem is proven which states that the category of quantum automata and automata-homomorphisms has both limits and colimits. Therefore, both categories of quantum automata and classical automata (sequential machines) are bicomplete. A second new theorem establishes that the standard automata category is a subcategory of the quantum automata category. The quantum automata category has a faithful representation in the category of Generalized (M,R)-Systems which are open, dynamic biosystem networks [4] with defined biological relations that represent physiological functions of primordial(s), single cells and the simpler organisms. A new category of quantum computers is also defined in terms of reversible quantum automata with quantum state spaces represented by topological groupoids that admit a local characterization through unique, quantum Lie algebroids. On the other hand, the category of n-Lukasiewicz algebras has a subcategory of centered n-Lukasiewicz algebras (as proven in ref. [2]) which can be employed to design and construct subcategories of quantum automata based on n-Lukasiewicz diagrams of existing VLSI. Furthermore, as shown in ref. [2], the category of centered n-Lukasiewicz algebras and the category of Boolean algebras are naturally equivalent. A 'no-go' conjecture is also proposed which states that Generalized (M

  10. The effect of query complexity on Web searching results

    Directory of Open Access Journals (Sweden)

    B.J. Jansen

    2000-01-01

    This paper presents findings from a study of the effects of query structure on retrieval by Web search services. Fifteen queries were selected from the transaction log of a major Web search service in simple query form with no advanced operators (e.g., Boolean operators, phrase operators, etc.) and submitted to 5 major search engines - Alta Vista, Excite, FAST Search, Infoseek, and Northern Light. The results from these queries became the baseline data. The original 15 queries were then modified using the various search operators supported by each of the 5 search engines for a total of 210 queries. Each of these 210 queries was also submitted to the applicable search service. The results obtained were then compared to the baseline results. A total of 2,768 search results were returned by the set of all queries. In general, increasing the complexity of the queries had little effect on the results, with a greater than 70% overlap in results, on average. Implications for the design of Web search services and directions for future research are discussed.

  11. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    Science.gov (United States)

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm which, on its own, is unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction.
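The data-reduction argument in the last sentence can be illustrated with a back-of-the-envelope calculation. All numbers below (electrode count, sampling rate, firing rates, time-stamp width) are assumptions chosen for illustration, not figures from the paper:

```python
def raw_bandwidth(n_electrodes, fs_hz, bits_per_sample):
    """Bandwidth (bits/s) needed to stream every raw ADC sample."""
    return n_electrodes * fs_hz * bits_per_sample

def spike_bandwidth(n_neurons, mean_rate_hz, bits_per_stamp):
    """Bandwidth (bits/s) when only spike time stamps are transmitted."""
    return n_neurons * mean_rate_hz * bits_per_stamp

raw = raw_bandwidth(1024, 20_000, 10)   # 1024 electrodes, 20 kS/s, 10-bit ADC
spk = spike_bandwidth(300, 10, 32)      # 300 sorted neurons firing at ~10 Hz
print(raw / spk)                        # roughly three orders of magnitude saved
```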

  12. Laboratory and test beam results from a large-area silicon drift detector

    CERN Document Server

    Bonvicini, V; Giubellino, P; Gregorio, A; Idzik, M; Kolojvari, A A; Montaño-Zetina, L M; Nouais, D; Petta, C; Rashevsky, A; Randazzo, N; Reito, S; Tosello, F; Vacchi, A; Vinogradov, L I; Zampa, N

    2000-01-01

    A very large-area (6.75×8 cm²) silicon drift detector with an integrated high-voltage divider has been designed, produced and fully characterised in the laboratory by means of ad hoc designed MOS injection electrodes. The detector is of the "butterfly" type, the sensitive area being subdivided into two regions with a maximum drift length of 3.3 cm. The device was also tested in a pion beam (at the CERN PS) tagged by means of a microstrip detector telescope. Bipolar VLSI front-end cells featuring a noise of 250 e⁻ RMS at 0 pF with a slope of 40 e⁻/pF have been used to read out the signals. The detector showed excellent stability and featured the expected characteristics. Some preliminary results will be presented. (12 refs).

  13. FPGA Implementation of one-dimensional and two-dimensional cellular automata

    International Nuclear Information System (INIS)

    D'Antone, I.

    1999-01-01

    This report describes the hardware implementation of one-dimensional and two-dimensional cellular automata (CAs). After a general introduction to cellular automata, we consider a one-dimensional CA used to implement pseudo-random techniques in built-in self-test for VLSI. Due to the increase in digital ASIC complexity, testing is becoming one of the major costs in VLSI production. The highly complex electronics used in particle physics experiments demand higher reliability than in the past. General criteria are given for evaluating the feasibility of the circuit used for testing, and some quantitative parameters are highlighted to optimize the architecture of the cellular automaton. Furthermore, we propose a two-dimensional CA that performs a peak-finding algorithm in a matrix of cells mapping a sub-region of a calorimeter. As in a two-dimensional filtering process, the peaks of the energy clusters are found in one evolution step. This CA belongs to Wolfram class II cellular automata. Some quantitative parameters are given to optimize the architecture of the cellular automaton implemented in a commercial field-programmable gate array (FPGA)
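A one-dimensional CA of the kind used for pseudo-random test-pattern generation can be sketched in a few lines. This is a generic Wolfram-rule simulator (rule 90 shown), not the specific automaton of the report:

```python
def ca_step(state, rule=90):
    """One synchronous update of a 1-D binary CA with cyclic boundaries.
    `state` is a list of 0/1 cells; `rule` is a Wolfram rule number."""
    n = len(state)
    table = [(rule >> k) & 1 for k in range(8)]  # rule bits index the 8 neighborhoods
    return [table[(state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]]
            for i in range(n)]

# Seed with a single 1; successive states serve as pseudo-random test vectors.
state = [0] * 7 + [1] + [0] * 8          # 16-cell register
patterns = []
for _ in range(4):
    state = ca_step(state)
    patterns.append(state)
```

Rule 90 reduces to `next[i] = left XOR right`, which is why it maps so cheaply onto an FPGA or VLSI test structure.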

  14. Hand-assisted Approach as a Model to Teach Complex Laparoscopic Hepatectomies: Preliminary Results.

    Science.gov (United States)

    Makdissi, Fabio F; Jeismann, Vagner B; Kruger, Jaime A P; Coelho, Fabricio F; Ribeiro-Junior, Ulysses; Cecconello, Ivan; Herman, Paulo

    2017-08-01

    Currently, there are few models for teaching complex liver resections by laparoscopy. The aim of this study is to present a hand-assisted technique for teaching complex laparoscopic hepatectomies to fellows in liver surgery. A laparoscopic hand-assisted approach for resections of liver lesions located in posterosuperior segments (7, 6/7, 7/8, 8) was performed by the trainees with guidance and intermittent intervention of a senior surgeon. Data such as: (1) percentage of surgical time during which the senior surgeon took over as main surgeon, (2) need for the senior surgeon to finish the procedure, (3) necessity of conversion, (4) bleeding with hemodynamic instability, (5) need for transfusion, and (6) oncological surgical margins were evaluated. In total, 12 complex laparoscopic liver resections were performed by the trainees. All cases included deep lesions situated in liver segments 7 or 8. Senior surgeon intervention occurred for a mean of 20% of the total surgical time (range, 0% to 50%). Senior intervention >20% was necessary in 2 cases. There was no need for conversion or reoperation. Neither major bleeding nor complications resulted from the teaching program. All surgical margins were clear. This preliminary report shows that hand-assistance is a safe way to teach complex liver resections without compromising patient safety or oncological results. More cases are still necessary to draw definitive conclusions about this teaching method.

  15. Test methods of total dose effects in very large scale integrated circuits

    International Nuclear Information System (INIS)

    He Chaohui; Geng Bin; He Baoping; Yao Yujuan; Li Yonghong; Peng Honglun; Lin Dongsheng; Zhou Hui; Chen Yusheng

    2004-01-01

    A test method for total dose effects (TDEs) in very large scale integrated circuits (VLSI) is presented. The consumption current of the devices is measured while function parameters of the devices (or circuits) are measured. The relation between data errors and consumption current can then be analyzed, and the mechanism of TDEs in VLSI proposed. Experimental ⁶⁰Co γ TDE results are given for SRAMs, EEPROMs, FLASH ROMs and a kind of CPU

  16. VLSI Implementation of Hybrid Wave-Pipelined 2D DWT Using Lifting Scheme

    Directory of Open Access Journals (Sweden)

    G. Seetharaman

    2008-01-01

    Full Text Available A novel approach is proposed in this paper for the implementation of 2D DWT using hybrid wave-pipelining (WP). A digital circuit may be operated at a higher frequency by using either pipelining or WP. Pipelining requires additional registers and results in more area, power dissipation and clock routing complexity. Wave-pipelining has none of these disadvantages but requires a complex trial-and-error procedure for tuning the clock period and the clock skew between input and output registers. In this paper, a hybrid scheme is proposed to get the benefits of both pipelining and WP techniques, and two automation schemes are proposed for the implementation of 2D DWT using hybrid WP on both Xilinx and Altera FPGAs. In the first scheme, a built-in self-test (BIST) approach is used to choose the clock skew and clock period for the I/O registers between the wave-pipelined blocks. In the second, an on-chip soft-core processor is used to choose the clock skew and clock period. The results for the hybrid WP are compared with nonpipelined and pipelined approaches. From the implementation results, the hybrid WP scheme requires the same area as the nonpipelined scheme but is faster by a factor of 1.25–1.39. The pipelined scheme is faster than the hybrid scheme by a factor of 1.15–1.39, at the cost of an increase in the number of registers by a factor of 1.78–2.73, an increase in the number of LEs by a factor of 1.11–1.32, and increased clock routing complexity.
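The clock-period tuning problem that wave-pipelining introduces follows from a first-order timing bound: the clock period is limited by the spread between the longest and shortest combinational path delays rather than by the longest delay alone. The sketch below uses textbook constraints and illustrative numbers, not the paper's measured values:

```python
def min_wp_period(d_max, d_min, t_setup, t_hold, t_jitter):
    """First-order wave-pipelining bound: the period is limited by the
    spread of path delays (d_max - d_min), plus register margins."""
    return (d_max - d_min) + t_setup + t_hold + 2 * t_jitter

def min_conventional_period(d_max, t_setup, t_jitter):
    """Conventional (non-pipelined) bound: limited by the maximum delay."""
    return d_max + t_setup + t_jitter

# Illustrative delays in ns (not from the paper):
wp = min_wp_period(12.0, 9.0, 0.5, 0.3, 0.1)
conv = min_conventional_period(12.0, 0.5, 0.1)
print(conv / wp)   # wave-pipelining clocks ~3x faster in this example
```

Because `d_max - d_min` is hard to predict pre-silicon, the bound motivates the on-chip (BIST or soft-core) tuning of clock period and skew described above.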

  17. Efficient Algorithm and Architecture of Critical-Band Transform for Low-Power Speech Applications

    Directory of Open Access Journals (Sweden)

    Gan Woon-Seng

    2007-01-01

    Full Text Available An efficient algorithm and its corresponding VLSI architecture for the critical-band transform (CBT) are developed to approximate the critical-band filtering of the human ear. The CBT consists of a constant-bandwidth transform in the lower frequency range and a Brown constant-Q transform (CQT) in the higher frequency range. The corresponding VLSI architecture achieves significant power efficiency by reducing the computational complexity, using pipeline and parallel processing, and applying supply voltage scaling. A 21-band Bark scale CBT processor with a sampling rate of 16 kHz is designed and simulated. Simulation results verify its suitability for performing short-time spectral analysis on speech. It gives a better fit to the human ear's critical-band analysis with significantly fewer computations, and is therefore more energy-efficient than other methods. With a 0.35 μm CMOS technology, it processes 160 speech samples in 4.99 milliseconds at 234 kHz. The power dissipation is 15.6 μW at 1.1 V, an 82.1% power reduction compared to a benchmark 256-point FFT processor.
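The 21-band figure is consistent with Zwicker's classical approximation of the Bark critical-band scale, sketched below. The formula is a standard psychoacoustic approximation, not the paper's implementation:

```python
import math

def hz_to_bark(f_hz):
    """Zwicker's approximation of the Bark critical-band rate."""
    return (13.0 * math.atan(0.00076 * f_hz)
            + 3.5 * math.atan((f_hz / 7500.0) ** 2))

# A 16 kHz sampling rate gives an 8 kHz analysis range, which spans
# roughly 21 critical bands -- consistent with the 21-band processor.
print(hz_to_bark(8000.0))   # ~21.3
```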

  18. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the faultless function of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article contains a comparative analysis of the factors which determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and program for analyzing the fault rate in LSI and VLSI circuits.
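Although the article's specific model is not reproduced here, faultless-function evaluation of this kind typically builds on the constant-failure-rate (exponential) reliability model. The per-block failure rates below are hypothetical:

```python
import math

def series_failure_rate(rates_per_hour):
    """A circuit fails when any independent block fails, so rates add."""
    return sum(rates_per_hour)

def reliability(rate_per_hour, hours):
    """Constant-failure-rate model: R(t) = exp(-lambda * t)."""
    return math.exp(-rate_per_hour * hours)

# Hypothetical per-block failure rates (failures/hour), not from the article:
lam = series_failure_rate([2e-7, 5e-8, 5e-8])
print(reliability(lam, 10_000))   # ~0.997 over 10,000 hours
```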

  19. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    Science.gov (United States)

    Mokrova, Nataliya V.

    2018-03-01

    The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product was confirmed. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal quenching mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
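As a minimal sketch of the numerical-integration step, a hypothetical quenching (cooling) equation can be integrated with a fixed-step method. The model and constants below are illustrative assumptions, not the plasma-chemical system of the paper:

```python
def euler(f, y0, t0, t1, n_steps):
    """Fixed-step Euler integration of dy/dt = f(t, y)."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# Hypothetical quench model: the plasma jet cools toward ambient temperature.
k, T_amb = 2.0, 300.0                        # 1/s and K, illustrative only
T_end = euler(lambda t, T: -k * (T - T_amb), 3000.0, 0.0, 1.0, 10_000)
print(T_end)   # ~665 K, matching the analytic 300 + 2700*exp(-2)
```

In practice a stiff solver and a coupled system (concentrations, jet velocity, temperature) would replace this single scalar equation.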

  20. The Study for Results of Complex Cystic Breast Masses by Biopsy on Ultrasound

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hye Kyoung [Dept. of Radiology, Yangji General Hospital, Kwangju (Korea, Republic of); Dong, Kyung Rae [Dept. of Radiological Technology, Gwangju Health College, Kwangju (Korea, Republic of)

    2008-06-15

    We examined the role of the ultrasonographer by analyzing the results of tissue biopsies of complex cystic masses performed under breast ultrasound (US) guidance. This study covered 178 patients whose breast US indicated complex cystic masses, among 342 patients definitively diagnosed by tissue biopsy and surgery in our hospital from June 30th, 2003 to June 30th, 2007. The evaluation of surrounding tissue, calcification and blood flow distribution was excluded from the analysis; a LOGIQ 200 scanner (GE) and a core biopsy gun (Kimal, K7/MBD23) were used in this study. The biopsy results of the 178 subjects showed FCC (fibrocystic change) (n=56 : 31.4%), Fibrosis (n=41 : 23.0%), Fibroadenoma (n=20 : 11.2%), Epithelial hyperplasia (n=17 : 9.6%), Carcinoma (n=15 : 8.4%), Fibroadipose (n=8 : 4.5%), Sclerosing adenosis (n=7 : 3.9%), Duct ectasia (n=5 : 2.8%), Papilloma (n=5 : 2.8%), Fat necrosis (n=1 : 0.6%), Hemangioma (n=1 : 0.6%), Abscess (n=1 : 0.6%), and Dystrophic calcification (n=1 : 0.6%). Carcinoma accounted for 8.4% of the biopsy results of complex cystic masses. Most masses were benign, and only the 9.6% with epithelial hyperplasia, which has a high rate of progression into malignant tumors, showed malignancy; most fell within the spectrum of fibrocystic nodules. Even though these results are confirmed, further studies are required. In conclusion, a nodule that cannot be characterized definitively by US should undergo tissue biopsy; if this is difficult because of the patient or other reasons, re-examination in three months is required. Systematic ultrasonographic evaluation should be well understood in order to conduct more careful and specific tests.

  1. The Study for Results of Complex Cystic Breast Masses by Biopsy on Ultrasound

    International Nuclear Information System (INIS)

    Kang, Hye Kyoung; Dong, Kyung Rae

    2008-01-01

    We examined the role of the ultrasonographer by analyzing the results of tissue biopsies of complex cystic masses performed under breast ultrasound (US) guidance. This study covered 178 patients whose breast US indicated complex cystic masses, among 342 patients definitively diagnosed by tissue biopsy and surgery in our hospital from June 30th, 2003 to June 30th, 2007. The evaluation of surrounding tissue, calcification and blood flow distribution was excluded from the analysis; a LOGIQ 200 scanner (GE) and a core biopsy gun (Kimal, K7/MBD23) were used in this study. The biopsy results of the 178 subjects showed FCC (fibrocystic change) (n=56 : 31.4%), Fibrosis (n=41 : 23.0%), Fibroadenoma (n=20 : 11.2%), Epithelial hyperplasia (n=17 : 9.6%), Carcinoma (n=15 : 8.4%), Fibroadipose (n=8 : 4.5%), Sclerosing adenosis (n=7 : 3.9%), Duct ectasia (n=5 : 2.8%), Papilloma (n=5 : 2.8%), Fat necrosis (n=1 : 0.6%), Hemangioma (n=1 : 0.6%), Abscess (n=1 : 0.6%), and Dystrophic calcification (n=1 : 0.6%). Carcinoma accounted for 8.4% of the biopsy results of complex cystic masses. Most masses were benign, and only the 9.6% with epithelial hyperplasia, which has a high rate of progression into malignant tumors, showed malignancy; most fell within the spectrum of fibrocystic nodules. Even though these results are confirmed, further studies are required. In conclusion, a nodule that cannot be characterized definitively by US should undergo tissue biopsy; if this is difficult because of the patient or other reasons, re-examination in three months is required. Systematic ultrasonographic evaluation should be well understood in order to conduct more careful and specific tests.

  2. Communication complexity and information complexity

    Science.gov (United States)

    Pankratov, Denis

    Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity. In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result does not only strengthen the lower bound on communication complexity of disjointness by making it more exact, but it also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems. In the second contribution, we use self-reduction methods to prove strong lower bounds on the information

  3. Latest results of SEE measurements obtained by the STRURED demonstrator ASIC

    Energy Technology Data Exchange (ETDEWEB)

    Candelori, A. [INFN, Section of Padova, Via Marzolo 8, c.a.p. 35131, Padova (Italy); De Robertis, G. [INFN Section of Bari, Via Orabona 4, c.a.p. 70126, Bari (Italy); Gabrielli, A. [Physics Department, University of Bologna, Viale Berti Pichat 6/2, c.a.p. 40127, Bologna (Italy); Mattiazzo, S.; Pantano, D. [INFN, Section of Padova, Via Marzolo 8, c.a.p. 35131, Padova (Italy); Ranieri, A., E-mail: antonio.ranieri@ba.infn.i [INFN Section of Bari, Via Orabona 4, c.a.p. 70126, Bari (Italy); Tessaro, M. [INFN, Section of Padova, Via Marzolo 8, c.a.p. 35131, Padova (Italy)

    2011-01-21

    With the goal of developing a radiation-tolerant circuit for High Energy Physics (HEP) applications, a test digital ASIC VLSI chip, called STRURED, has been designed and fabricated using a standard-cell library of a commercial 130 nm CMOS technology, implementing three different radiation-tolerant architectures (Hamming, Triple Modular Redundancy and Triple Time Redundancy) in order to correct circuit malfunctions induced by the occurrence of Soft Errors (SEs). SEs are one of the main causes of failure affecting electronic digital circuits operating in harsh radiation environments, such as in experiments performed at HEP colliders or in apparatus to be operated in space. In this paper we present and discuss the latest results of SE cross-section measurements performed using the STRURED digital device, exposed to high-energy heavy ions at the SIRAD irradiation facility of the INFN National Laboratories of Legnaro (Padova, Italy). In particular, the different behaviors of the input part and the core of the three radiation-tolerant architectures are analyzed in detail.

  4. Treatment of complex PTSD: results of the ISTSS expert clinician survey on best practices.

    Science.gov (United States)

    Cloitre, Marylene; Courtois, Christine A; Charuvastra, Anthony; Carapezza, Richard; Stolbach, Bradley C; Green, Bonnie L

    2011-12-01

    This study provides a summary of the results of an expert opinion survey initiated by the International Society for Traumatic Stress Studies Complex Trauma Task Force regarding best practices for the treatment of complex posttraumatic stress disorder (PTSD). Ratings from a mail-in survey from 25 complex PTSD experts and 25 classic PTSD experts regarding the most appropriate treatment approaches and interventions for complex PTSD were examined for areas of consensus and disagreement. Experts agreed on several aspects of treatment, with 84% endorsing a phase-based or sequenced therapy as the most appropriate treatment approach with interventions tailored to specific symptom sets. First-line interventions matched to specific symptoms included emotion regulation strategies, narration of trauma memory, cognitive restructuring, anxiety and stress management, and interpersonal skills. Meditation and mindfulness interventions were frequently identified as an effective second-line approach for emotional, attentional, and behavioral (e.g., aggression) disturbances. Agreement was not obtained on either the expected course of improvement or on duration of treatment. The survey results provide a strong rationale for conducting research focusing on the relative merits of traditional trauma-focused therapies and sequenced multicomponent approaches applied to different patient populations with a range of symptom profiles. Sustained symptom monitoring during the course of treatment and during extended follow-up would advance knowledge about both the speed and durability of treatment effects. Copyright © 2011 International Society for Traumatic Stress Studies.

  5. 3D-FBK Pixel sensors: recent beam tests results with irradiated devices

    CERN Document Server

    Micelli, A; Sandaker, H; Stugu, B; Barbero, M; Hugging, F; Karagounis, M; Kostyukhin, V; Kruger, H; Tsung, J W; Wermes, N; Capua, M; Fazio, S; Mastroberardino, A; Susinno, G; Gallrapp, C; Di Girolamo, B; Dobos, D; La Rosa, A; Pernegger, H; Roe, S; Slavicek, T; Pospisil, S; Jakobs, K; Kohler, M; Parzefall, U; Darbo, G; Gariano, G; Gemme, C; Rovani, A; Ruscino, E; Butter, C; Bates, R; Oshea, V; Parker, S; Cavalli-Sforza, M; Grinstein, S; Korokolov, I; Pradilla, C; Einsweiler, K; Garcia-Sciveres, M; Borri, M; Da Via, C; Freestone, J; Kolya, S; Lai, C H; Nellist, C; Pater, J; Thompson, R; Watts, S J; Hoeferkamp, M; Seidel, S; Bolle, E; Gjersdal, H; Sjobaek, K N; Stapnes, S; Rohne, O; Su, D; Young, C; Hansson, P; Grenier, P; Hasi, J; Kenney, C; Kocian, M; Jackson, P; Silverstein, D; Davetak, H; DeWilde, B; Tsybychev, D; Dalla Betta, G F; Gabos, P; Povoli, M; Cobal, M; Giordani, M P; Selmi, L; Cristofoli, A; Esseni, D; Palestri, P; Fleta, C; Lozano, M; Pellegrini, G; Boscardin, M; Bagolini, A; Piemonte, C; Ronchin, S; Zorzi, N; Hansen, T E; Hansen, T; Kok, A; Lietaer, N; Kalliopuska, J; Oja, A

    2011-01-01

    The Pixel detector is the innermost part of the ATLAS experiment tracking device at the Large Hadron Collider (LHC), and plays a key role in the reconstruction of the primary and secondary vertices of short-lived particles. To cope with the high level of radiation produced during the collider operation, it is planned to add to the present three layers of silicon pixel sensors which constitute the Pixel Detector, an additional layer (Insertable B-Layer, or IBL) of sensors. 3D silicon sensors are one of the technologies which are under study for the IBL. 3D silicon technology is an innovative combination of very-large-scale integration (VLSI) and Micro-Electro-Mechanical-Systems (MEMS) where electrodes are fabricated inside the silicon bulk instead of being implanted on the wafer surfaces. 3D sensors, with electrodes fully or partially penetrating the silicon substrate, are currently fabricated at different processing facilities in Europe and USA. This paper reports on the 2010 June beam test results for irradi...

  6. Simulation of worst-case operating conditions for integrated circuits operating in a total dose environment

    International Nuclear Information System (INIS)

    Bhuva, B.L.

    1987-01-01

    Degradations in circuit performance created by the radiation exposure of integrated circuits are so unique and abnormal that thorough simulation and testing of VLSI circuits is almost impossible, and new ways to estimate operating performance in a radiation environment must be developed. The principal goal of this work was the development of simulation techniques for radiation effects on semiconductor devices. The mixed-mode simulation approach proved to be the most promising. The switch-level approach is used to identify the failure mechanisms and critical subcircuits responsible for operational failure, along with worst-case operating conditions during and after irradiation. For precise simulation of critical subcircuits, SPICE is used. The identification of failure mechanisms enables the circuit designer to improve the circuit's performance and failure-exposure level. Identification of worst-case operating conditions during and after irradiation reduces the complexity of testing VLSI circuits for radiation environments. Failure simulations of test circuits with both a conventional simulator and the new simulator showed significant time savings for the new simulator; the savings proved to depend on circuit topology, but for large circuits the simulation time was orders of magnitude smaller than with conventional simulators

  7. Care complexity in the general hospital - Results from a European study

    NARCIS (Netherlands)

    de Jonge, P; Huyse, FJ; Slaets, JPJ; Herzog, T; Lobo, A; Lyons, JS; Opmeer, BC; Stein, B; Arolt, [No Value; Balogh, N; Cardoso, G; Fink, P; Rigatelli, M; van Dijck, R; Mellenbergh, GJ

    2001-01-01

    There is increasing pressure to effectively treat patients with complex care needs from the moment of admission to the general hospital. In this study, the authors developed a measurement strategy for hospital-based care complexity. The authors' four-factor model describes the interrelations between

  8. Frequency-dependent complex modulus of the uterus: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Kiss, Miklos Z [Department of Medical Physics, University of Wisconsin, Madison, WI 53706 (United States); Hobson, Maritza A [Department of Medical Physics, University of Wisconsin, Madison, WI 53706 (United States); Varghese, Tomy [Department of Medical Physics, University of Wisconsin, Madison, WI 53706 (United States); Harter, Josephine [Department of Surgical Pathology, University of Wisconsin, Madison, WI 53706 (United States); Kliewer, Mark A [Department of Radiology, University of Wisconsin, Madison, WI 53706 (United States); Hartenbach, Ellen M [Department of Obstetrics and Gynecology, University of Wisconsin, Madison, WI 53706 (United States); Zagzebski, James A [Department of Medical Physics, University of Wisconsin, Madison, WI 53706 (United States)

    2006-08-07

    The frequency-dependent complex moduli of human uterine tissue have been characterized. Quantification of the modulus is required for developing uterine ultrasound elastography as a viable imaging modality for diagnosing and monitoring causes of abnormal uterine bleeding and enlargement, as well as assessing the integrity of uterine and cervical tissue. The complex modulus was measured in samples from hysterectomies of 24 patients ranging in age from 31 to 79 years. Measurements were done under small compressions of either 1 or 2%, at low pre-compression values (either 1 or 2%), and over a frequency range of 0.1-100 Hz. Modulus values of cervical tissue increased monotonically from approximately 30 to 90 kPa over the frequency range. Normal uterine tissue possessed modulus values over the same range, while leiomyomas, or uterine fibroids, exhibited values ranging from approximately 60 to 220 kPa.
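For reference, the complex modulus reported in such measurements decomposes into storage (real) and loss (imaginary) components obtained from the stress-strain amplitude ratio and phase lag. The numbers below are illustrative, chosen only to land in the reported kPa range:

```python
import math

def complex_modulus(stress_amp, strain_amp, delta_rad):
    """Complex modulus E* = E' + iE'' from a sinusoidal test:
    E' (storage) and E'' (loss) follow from the phase lag delta."""
    ratio = stress_amp / strain_amp
    return complex(ratio * math.cos(delta_rad), ratio * math.sin(delta_rad))

# Illustrative stress amplitude (Pa) and dimensionless strain, not measured data:
E = complex_modulus(stress_amp=1.2e3, strain_amp=0.02, delta_rad=0.15)
print(abs(E) / 1e3)   # ~60 kPa magnitude, inside the reported tissue range
```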

  9. Frequency-dependent complex modulus of the uterus: preliminary results

    International Nuclear Information System (INIS)

    Kiss, Miklos Z; Hobson, Maritza A; Varghese, Tomy; Harter, Josephine; Kliewer, Mark A; Hartenbach, Ellen M; Zagzebski, James A

    2006-01-01

    The frequency-dependent complex moduli of human uterine tissue have been characterized. Quantification of the modulus is required for developing uterine ultrasound elastography as a viable imaging modality for diagnosing and monitoring causes of abnormal uterine bleeding and enlargement, as well as assessing the integrity of uterine and cervical tissue. The complex modulus was measured in samples from hysterectomies of 24 patients ranging in age from 31 to 79 years. Measurements were done under small compressions of either 1 or 2%, at low pre-compression values (either 1 or 2%), and over a frequency range of 0.1-100 Hz. Modulus values of cervical tissue increased monotonically from approximately 30 to 90 kPa over the frequency range. Normal uterine tissue possessed modulus values over the same range, while leiomyomas, or uterine fibroids, exhibited values ranging from approximately 60 to 220 kPa

  10. Systematic configuration and automatic tuning of neuromorphic systems

    OpenAIRE

    Sheik Sadique; Stefanini Fabio; Neftci Emre; Chicca Elisabetta; Indiveri Giacomo

    2011-01-01

    In recent years several research groups have proposed neuromorphic Very Large Scale Integration (VLSI) devices that implement event-based sensors or biophysically realistic networks of spiking neurons. It has been argued that these devices can be used to build event-based systems, for solving real-world applications in real-time, with efficiencies and robustness that cannot be achieved with conventional computing technologies. In order to implement complex event-based neuromorphic sy...

  11. RESULTS OF APPLYING POLYVITAMIN COMPLEX FOR CHILDREN WITH ATOPIC DERMATITIS

    Directory of Open Access Journals (Sweden)

    N.A. Ivanova

    2007-01-01

    Full Text Available The article presents findings on the use of a vitamin-and-mineral complex (VMC) in frequently ill children and children with atopic dermatitis. It shows that use of the VMC within a complex therapy promotes regression of the symptoms of subnormal vitamin provision, as well as of the symptoms of the underlying disease. This occurs alongside an increased vitamin content in the child's organism, as confirmed by measuring vitamin A and E levels in blood. The study demonstrated good tolerance of the VMC by children suffering from atopic dermatitis. Key words: frequently ill children, atopic dermatitis, vitamins, treatment.

  12. VLSI Design Tools, Reference Manual, Release 2.0.

    Science.gov (United States)

    1984-08-01

    A "patterns" package was added so that complex and repetitive digital waveforms could be generated far more easily. The recently written program MTP (Multiple...circuit model to estimate timing delays through digital circuits. It also has a mode that allows it to be used as a switch (gate) level simulator

  13. Purification of Ovine Respiratory Complex I Results in a Highly Active and Stable Preparation*

    Science.gov (United States)

    Letts, James A.; Degliesposti, Gianluca; Fiedorczuk, Karol; Skehel, Mark; Sazanov, Leonid A.

    2016-01-01

    NADH-ubiquinone oxidoreductase (complex I) is the largest (∼1 MDa) and the least characterized complex of the mitochondrial electron transport chain. Because of the ease of sample availability, previous work has focused almost exclusively on bovine complex I. However, only medium resolution structural analyses of this complex have been reported. Working with other mammalian complex I homologues is a potential approach for overcoming these limitations. Due to the inherent difficulty of expressing large membrane protein complexes, screening of complex I homologues is limited to large mammals reared for human consumption. The high sequence identity among these available sources may preclude the benefits of screening. Here, we report the characterization of complex I purified from Ovis aries (ovine) heart mitochondria. All 44 unique subunits of the intact complex were identified by mass spectrometry. We identified differences in the subunit composition of subcomplexes of ovine complex I as compared with bovine, suggesting differential stability of inter-subunit interactions within the complex. Furthermore, the 42-kDa subunit, which is easily lost from the bovine enzyme, remains tightly bound to ovine complex I. Additionally, we developed a novel purification protocol for highly active and stable mitochondrial complex I using the branched-chain detergent lauryl maltose neopentyl glycol. Our data demonstrate that, although closely related, significant differences exist between the biochemical properties of complex I prepared from ovine and bovine mitochondria and that ovine complex I represents a suitable alternative target for further structural studies. PMID:27672209

  14. The effects of advanced digital signal processing concepts on VLSIC/VHSIC design

    Science.gov (United States)

    Jankowski, C.

    Implementations of sophisticated mathematical techniques in advanced digital signal processors can significantly improve performance. Future VLSI and VHSI circuit designs must include the practical realization of these algorithms. A structured design approach is described and illustrated with examples from an RNS (residue number system) FIR filter processor development project. The CAE hardware and software required to support tasks of this complexity are also discussed. An engineering workstation (EWS) is recommended for controlling essential functions such as logic optimization, simulation and verification. The total IC design system is illustrated with the implementation of a new high-performance algorithm for computing complex magnitude.
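
    The "high performance algorithm for computing complex magnitude" is not specified in this abstract. As a hedged illustration only, the sketch below implements the classic alpha-max-plus-beta-min approximation, a multiplier-light magnitude estimator often used in VLSI signal processors; the coefficients and any connection to the paper's actual algorithm are assumptions.

```python
import math

def complex_magnitude_approx(re, im, alpha=0.960433870103, beta=0.397824734759):
    """Approximate sqrt(re^2 + im^2) without a square root.

    Classic alpha-max-plus-beta-min estimator; the default coefficients
    minimize the maximum relative error (about 3.96%)."""
    a, b = abs(re), abs(im)
    return alpha * max(a, b) + beta * min(a, b)

# Compare against the exact magnitude on a few sample points.
for re, im in [(3.0, 4.0), (1.0, 1.0), (-5.0, 0.5)]:
    exact = math.hypot(re, im)
    approx = complex_magnitude_approx(re, im)
    print(f"exact={exact:.4f} approx={approx:.4f} rel_err={(approx - exact) / exact:+.2%}")
```

In hardware the two coefficients are themselves realized with a few shifts and adds, which is what makes the estimator attractive.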

  15. Purification of Ovine Respiratory Complex I Results in a Highly Active and Stable Preparation.

    Science.gov (United States)

    Letts, James A; Degliesposti, Gianluca; Fiedorczuk, Karol; Skehel, Mark; Sazanov, Leonid A

    2016-11-18

    NADH-ubiquinone oxidoreductase (complex I) is the largest (∼1 MDa) and the least characterized complex of the mitochondrial electron transport chain. Because of the ease of sample availability, previous work has focused almost exclusively on bovine complex I. However, only medium resolution structural analyses of this complex have been reported. Working with other mammalian complex I homologues is a potential approach for overcoming these limitations. Due to the inherent difficulty of expressing large membrane protein complexes, screening of complex I homologues is limited to large mammals reared for human consumption. The high sequence identity among these available sources may preclude the benefits of screening. Here, we report the characterization of complex I purified from Ovis aries (ovine) heart mitochondria. All 44 unique subunits of the intact complex were identified by mass spectrometry. We identified differences in the subunit composition of subcomplexes of ovine complex I as compared with bovine, suggesting differential stability of inter-subunit interactions within the complex. Furthermore, the 42-kDa subunit, which is easily lost from the bovine enzyme, remains tightly bound to ovine complex I. Additionally, we developed a novel purification protocol for highly active and stable mitochondrial complex I using the branched-chain detergent lauryl maltose neopentyl glycol. Our data demonstrate that, although closely related, significant differences exist between the biochemical properties of complex I prepared from ovine and bovine mitochondria and that ovine complex I represents a suitable alternative target for further structural studies. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  16. Long-Term Results After Simple Versus Complex Stenting of Coronary Artery Bifurcation Lesions Nordic Bifurcation Study 5-Year Follow-Up Results

    DEFF Research Database (Denmark)

    Maeng, M.; Holm, N. R.; Erglis, A.

    2013-01-01

    Objectives This study sought to report the 5-year follow-up results of the Nordic Bifurcation Study. Background Randomized clinical trials with short-term follow-up have indicated that coronary bifurcation lesions may be optimally treated using the optional side branch stenting strategy. Methods...... complex strategy of planned stenting of both the main vessel and the side branch. (C) 2013 by the American College of Cardiology Foundation...

  17. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method, called complex-plane strategy, is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model, in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are slightly distorted. For an accurate simulation of differential rotation, a versatile method, called multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  18. Architectural optimizations for low-power K-best MIMO decoders

    KAUST Repository

    Mondal, Sudip

    2009-09-01

    Maximum-likelihood (ML) detection for higher order multiple-input-multiple-output (MIMO) systems faces a major challenge in computational complexity. This limits the practicality of these systems from an implementation point of view, particularly for mobile battery-operated devices. In this paper, we propose a modified approach for MIMO detection, which takes advantage of the quadratic-amplitude modulation (QAM) constellation structure to accelerate the detection procedure. This approach achieves low-power operation by extending the minimum number of paths and reducing the number of required computations for each path extension, which results in an order-of-magnitude reduction in computations in comparison with existing algorithms. This paper also describes the very-large-scale integration (VLSI) design of the low-power path metric computation unit. The approach is applied to a 4 × 4, 64-QAM MIMO detector system. Results show negligible performance degradation compared with conventional algorithms while reducing the complexity by more than 50%. © 2009 IEEE.
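
    For readers unfamiliar with K-best detection, a minimal sketch of the breadth-first tree search it is built on may help. This is a real-valued toy model with hypothetical parameters; the paper's constellation-aware path-extension optimizations are not reproduced here.

```python
def k_best_detect(R, y, constellation, K):
    """Breadth-first K-best detection for y = R @ s + n, R upper-triangular.

    At each antenna layer (processed bottom-up), every surviving path is
    extended by all constellation points and only the K smallest
    accumulated metrics are kept."""
    n = len(y)
    paths = [([], 0.0)]  # (symbols decided from layer n-1 downward, metric)
    for layer in range(n - 1, -1, -1):
        candidates = []
        for syms, metric in paths:
            for s in constellation:
                # interference from already-decided (lower) layers
                interf = sum(R[layer][n - 1 - j] * syms[j] for j in range(len(syms)))
                e = y[layer] - R[layer][layer] * s - interf
                candidates.append((syms + [s], metric + e * e))
        candidates.sort(key=lambda c: c[1])
        paths = candidates[:K]     # prune to the K best partial paths
    best, _ = paths[0]
    return best[::-1]              # reorder to s[0..n-1]
```

The sort-and-prune step is exactly where the paper's approach saves work, by limiting how many extensions must be metric-evaluated per surviving path.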

  19. Architectural optimizations for low-power K-best MIMO decoders

    KAUST Repository

    Mondal, Sudip; Eltawil, Ahmed M.; Salama, Khaled N.

    2009-01-01

    Maximum-likelihood (ML) detection for higher order multiple-input-multiple-output (MIMO) systems faces a major challenge in computational complexity. This limits the practicality of these systems from an implementation point of view, particularly for mobile battery-operated devices. In this paper, we propose a modified approach for MIMO detection, which takes advantage of the quadratic-amplitude modulation (QAM) constellation structure to accelerate the detection procedure. This approach achieves low-power operation by extending the minimum number of paths and reducing the number of required computations for each path extension, which results in an order-of-magnitude reduction in computations in comparison with existing algorithms. This paper also describes the very-large-scale integration (VLSI) design of the low-power path metric computation unit. The approach is applied to a 4 × 4, 64-QAM MIMO detector system. Results show negligible performance degradation compared with conventional algorithms while reducing the complexity by more than 50%. © 2009 IEEE.

  20. A neural network device for on-line particle identification in cosmic ray experiments

    International Nuclear Information System (INIS)

    Scrimaglio, R.; Finetti, N.; D'Altorio, L.; Rantucci, E.; Raso, M.; Segreto, E.; Tassoni, A.; Cardarilli, G.C.

    2004-01-01

    On-line particle identification is one of the main goals of many experiments in space both for rare event studies and for optimizing measurements along the orbital trajectory. Neural networks can be a useful tool for signal processing and real time data analysis in such experiments. In this document we report on the performances of a programmable neural device which was developed in VLSI analog/digital technology. Neurons and synapses were accomplished by making use of Operational Transconductance Amplifier (OTA) structures. In this paper we report on the results of measurements performed in order to verify the agreement of the characteristic curves of each elementary cell with simulations and on the device performances obtained by implementing simple neural structures on the VLSI chip. A feed-forward neural network (Multi-Layer Perceptron, MLP) was implemented on the VLSI chip and trained to identify particles by processing the signals of two-dimensional position-sensitive Si detectors. The radiation monitoring device consisted of three double-sided silicon strip detectors. From the analysis of a set of simulated data it was found that the MLP implemented on the neural device gave results comparable with those obtained with the standard method of analysis confirming that the implemented neural network could be employed for real time particle identification

  1. Daily radiotoxicological supervision of personnel at the Pierrelatte industrial complex. Methods and results

    International Nuclear Information System (INIS)

    Chalabreysse, Jacques.

    1978-05-01

    A 13-year experience gained from the daily radiotoxicological supervision of personnel at the PIERRELATTE industrial complex is presented. The study is divided into two parts. Part one is theoretical: a bibliographical synthesis of all scattered documents and publications, providing a homogeneous survey of the literature on the subject. Part two reviews the experience gained in a professional setting: laboratory measurements and analyses (development of methods and daily applications); mathematical formulae to answer the first questions which arise for an individual liable to be contaminated; and results obtained at PIERRELATTE.

  2. A novel sorting algorithm and its application to a gamma-ray telescope asynchronous data acquisition system

    International Nuclear Information System (INIS)

    Colavita, A.; Capello, G.

    1997-01-01

    In this paper we present a novel parallel sorting algorithm, which works through a cascade of elementary sorting units and leads to a scalable architecture. The algorithm's complexity is analyzed and compared with a classical parallel algorithm. It turns out that, although it may be less efficient than classical approaches, the proposed algorithm is well suited to VLSI implementation owing to its simplicity and scalability. The paper describes the application of such a device to asynchronous data acquisition for a gamma-ray telescope. (orig.)
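
    The elementary sorting units are not specified in this excerpt. One common VLSI-friendly scheme along these lines is a linear cascade of compare-and-hold cells, sketched below as an illustrative stand-in, not necessarily the authors' algorithm.

```python
def systolic_insertion_sort(stream):
    """Sort a data stream with a linear cascade of compare-and-hold cells.

    Each cell stores one value; on every input it keeps the smaller of
    (stored, incoming) and forwards the larger to the next cell. After all
    items have been pushed, the cells hold the sorted sequence."""
    cells = []                      # cell i holds the i-th smallest so far
    for x in stream:
        for i in range(len(cells)):
            if x < cells[i]:
                cells[i], x = x, cells[i]   # keep the smaller, pass the larger on
        cells.append(x)
    return cells
```

Because every cell performs the same compare-and-swap operation on local data, the structure tiles naturally in silicon and scales by simply adding cells.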

  3. K-best decoders for 5G+ wireless communication

    CERN Document Server

    Rahman, Mehnaz

    2017-01-01

    This book discusses new, efficient and hardware realizable algorithms that can attain the performance of beyond 5G wireless communication. The authors explain topics gradually, stepping from basic MIMO detection to optimized schemes for both hard and soft domain MIMO detection and also to the feasible VLSI implementation, scalable to any MIMO configuration (including massive MIMO, used in satellite/space communication). The techniques described in this book enable readers to implement real designs, with reduced computational complexity and improved performance.

  4. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physical Based Models

    Science.gov (United States)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

    The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black-box models, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable for studying complex underlying processes and predicting future environmental changes. The increase in climate change adaptation efforts following the release of the latest IPCC report indicates that communicating facts about what has already changed is an appropriate tool to trigger climate change adaptation. We therefore suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier for non-scientific stakeholders to comprehend. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders of delayed environmental responses.
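
    As a toy example of the kind of time-series analysis advocated here, a least-squares linear trend on a hypothetical groundwater-level series can be computed as follows; the data values are invented purely for illustration.

```python
def linear_trend(t, y):
    """Least-squares slope and intercept of y against t (closed form)."""
    n = len(t)
    mt = sum(t) / n
    my = sum(y) / n
    slope = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
             / sum((ti - mt) ** 2 for ti in t))
    return slope, my - slope * mt

# Hypothetical annual groundwater levels (m) showing a slow decline.
years = [2000, 2001, 2002, 2003, 2004, 2005]
levels = [12.0, 11.8, 11.9, 11.6, 11.5, 11.3]
slope, intercept = linear_trend(years, levels)
print(f"trend: {slope * 100:.1f} cm/year")
```

A single number like "the water table fell by about 13 cm per year" is far easier to communicate to stakeholders than a full model run.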

  5. New developments in double sided silicon strip detectors

    International Nuclear Information System (INIS)

    Becker, H.; Boulos, T.; Cattaneo, P.; Dietl, H.; Hauff, D.; Holl, P.; Lange, E.; Lutz, G.; Moser, H.G.; Schwarz, A.S.; Settles, R.; Struder, L.; Kemmer, J.; Buttler, W.

    1990-01-01

    A new type of double sided silicon strip detector has been built and tested using high-density VLSI readout electronics connected to both sides. Capacitive coupling of the strips to the readout electronics has been achieved by integrating the capacitors into the detector design, which was made possible by introducing a new detector biasing concept. Schemes to simplify the detector fabrication technology are discussed. The static performance properties of the devices as well as implications of the use of VLSI electronics in their readout are described. Prototype detectors of the described design equipped with high-density readout electronics have been installed in the ALEPH detector at LEP. Test results on the performance are given.

  6. Complex conductivity response to silver nanoparticles in partially saturated laboratory columns

    Data.gov (United States)

    U.S. Environmental Protection Agency — Laboratory complex conductivity data from partially saturated sand columns with silver nanoparticles. This dataset is not publicly accessible because: It involves...

  7. VLSI Research

    Science.gov (United States)

    1984-04-01

    [Garbled excerpt of RISC instruction-format tables: interpretation of the IMMEDIATE fields of instructions (sign-extended imm19 and imm13 formats, except ldhi), the destination register of a LDHI instruction, and opcode listings including calli, sll, getpsw, sra, getlpc, srl, putpsw, ldhi, and, or, ldxw, stxw, and xor.]

  8. Universal programmable logic gate and routing method

    Science.gov (United States)

    Fijany, Amir (Inventor); Vatan, Farrokh (Inventor); Akarvardar, Kerem (Inventor); Blalock, Benjamin (Inventor); Chen, Suheng (Inventor); Cristoloveanu, Sorin (Inventor); Kolawa, Elzbieta (Inventor); Mojarradi, Mohammad M. (Inventor); Toomarian, Nikzad (Inventor)

    2009-01-01

    A universal and programmable logic gate based on G⁴-FET technology is disclosed, leading to the design of more efficient logic circuits. A new full adder design based on the G⁴-FET is also presented. The G⁴-FET can also function as a unique router device offering coplanar crossing of signal paths that are isolated from and perpendicular to one another. This has the potential of overcoming major limitations in VLSI design, where complex interconnection schemes have become increasingly problematic.

  9. Design automation, languages, and simulations

    CERN Document Server

    Chen, Wai-Kai

    2003-01-01

    As the complexity of electronic systems continues to increase, the micro-electronic industry depends upon automation and simulations to adapt quickly to market changes and new technologies. Compiled from chapters contributed to CRC's best-selling VLSI Handbook, this volume covers a broad range of topics relevant to design automation, languages, and simulations. These include a collaborative framework that coordinates distributed design activities through the Internet, an overview of the Verilog hardware description language and its use in a design environment, hardware/software co-design, syst

  10. Imbalance aware lithography hotspot detection: a deep learning approach

    Science.gov (United States)

    Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei

    2017-07-01

    With the advancement of very large scale integrated circuit (VLSI) technology nodes, lithographic hotspots become a serious problem that affects manufacturing yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with the extreme scaling of transistor feature size and layout patterns growing in complexity, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. We present a deep convolutional neural network (CNN) that targets representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyperparameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always in the minority in VLSI mask design, the training dataset is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high number of false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply hotspot upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
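
    The upsampling-plus-mirroring step described above can be sketched as follows. This is a minimal stand-in using nested lists as tiny layout clips; the paper's actual data pipeline and clip format are not given in the abstract.

```python
import random

def balance_with_upsampling(samples, labels, flip_axes=True, seed=0):
    """Upsample the minority class to match the majority, applying random
    mirror flips to the duplicated layout clips so copies are not identical.

    `samples` are small 2-D lists (stand-ins for layout clips); labels are
    0 = non-hotspot, 1 = hotspot."""
    rng = random.Random(seed)
    pos = [s for s, l in zip(samples, labels) if l == 1]
    neg = [s for s, l in zip(samples, labels) if l == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    min_label = 1 if minority is pos else 0
    augmented = list(minority)
    while len(augmented) < len(majority):
        clip = [row[:] for row in rng.choice(minority)]
        if flip_axes and rng.random() < 0.5:
            clip = clip[::-1]                      # vertical mirror
        if flip_axes and rng.random() < 0.5:
            clip = [row[::-1] for row in clip]     # horizontal mirror
        augmented.append(clip)
    out_samples = majority + augmented
    out_labels = [1 - min_label] * len(majority) + [min_label] * len(augmented)
    return out_samples, out_labels
```

Mirror flips are a safe augmentation for layout clips because a mirrored geometry has the same lithographic printability characteristics as the original.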

  11. Real-time FPGA architectures for computer vision

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar

    2000-03-01

    This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on a dedicated VLSI chip to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
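
    A software model of the memory-access-minimizing idea, each image row fetched from memory once while window registers supply the rest, might look like the sketch below. This is illustrative only; the paper's FPGA implementation details are not given in the abstract, and the correlation form (no kernel flip) is used, as is common in hardware.

```python
def convolve3x3_row_buffered(image, kernel):
    """3x3 'valid' correlation that reads each image pixel exactly once.

    A sliding window of three rows mimics FPGA line buffers: only the
    newest row is fetched from memory; the buffered rows supply the rest."""
    h, w = len(image), len(image[0])
    out = []
    rows = [image[0], image[1]]          # line buffers
    for r in range(2, h):
        rows.append(image[r])            # single memory fetch per row
        out_row = []
        for c in range(w - 2):
            acc = 0
            for kr in range(3):
                for kc in range(3):
                    acc += kernel[kr][kc] * rows[kr][c + kc]
            out_row.append(acc)
        out.append(out_row)
        rows.pop(0)                      # retire the oldest line buffer
    return out
```

For an H x W image this performs H row fetches instead of the 9 x (H-2) x (W-2) pixel reads of a naive implementation, which is the point of the register-based FPGA design.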

  12. Possible applications of the sigma delta digitizer in particle physics

    International Nuclear Information System (INIS)

    Hallgren, B.

    1991-01-01

    The sigma delta (ΣΔ) principle is an analog-to-digital conversion technique based on high-frequency sampling and low-pass filtering of the quantization noise. Resolution in time is exchanged for that in amplitude so as to avoid the difficulty of implementing complex precision analog circuits, in favour of digital circuits. The approach is attractive because it will make it possible to integrate complete channels of high resolution analog-to-digital converters and time digitizers in submicron digital VLSI technologies. Advantage is taken of the fact that the state-of-the-art VLSI is better suited for providing fast digital circuits than for providing precise analog circuits. This article describes the principle and the performance of the ideal ΣΔ digitizer. The design and measurements of a new 10 MHz prototype circuit of a second-order ΣΔ is presented to show the high speed operation of such a circuit. The expected performance of a CMOS test design using the same principles is discussed. Digital filters, useful for particle physics, are introduced. A comparison to other digitizing techniques is made and the potential applications of the ΣΔ digitizer in particle physics are outlined. (orig.)
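
    The ΣΔ principle described above can be illustrated with a behavioral model: a first-order modulator followed by a crude averaging decimator. The article's prototype is second-order; this deliberately simplified sketch only shows how resolution in time is exchanged for resolution in amplitude.

```python
def sigma_delta_first_order(signal):
    """First-order sigma-delta modulator: integrate the error between the
    input and the fed-back 1-bit output, then quantize to +/-1."""
    integrator, bits = 0.0, []
    for x in signal:               # x assumed in [-1, 1]
        feedback = 1.0 if bits and bits[-1] > 0 else -1.0 if bits else 0.0
        integrator += x - feedback
        bits.append(1 if integrator >= 0 else -1)
    return bits

def decimate_mean(bits, osr):
    """Crude decimation filter: average each block of `osr` bits."""
    return [sum(bits[i:i + osr]) / osr for i in range(0, len(bits), osr)]

# A DC input of 0.25, oversampled 64x, averages back to ~0.25
# from a stream of nothing but +/-1 decisions.
bits = sigma_delta_first_order([0.25] * 64)
print(decimate_mean(bits, 64))
```

The one-bit quantizer is trivial analog hardware; all the precision comes from the digital low-pass/decimation filter, which is exactly why the technique suits submicron digital VLSI.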

  13. Smart Sensors: Why and when the origin was and why and where the future will be

    Science.gov (United States)

    Corsi, C.

    2013-12-01

    Smart Sensors is a technique developed in the 1970s, when processing capabilities based on readout integrated with signal processing were still far from the complexity needed in advanced IR surveillance and warning systems, because of the enormous amount of noise/unwanted signals emitted by the operating scenario, especially in military applications. The Smart Sensors technology was kept restricted within a closed military environment, exploding in applications and performance in the 1990s thanks to the impressive improvements in integrated signal readout and processing achieved by CCD-CMOS technologies in FPAs. In fact, the rapid advances of "very large scale integration" (VLSI) processor technology and mosaic EO detector array technology allowed the development of new generations of Smart Sensors with much improved signal processing, integrating microcomputers and other VLSI signal processors inside the sensor structure and achieving some basic functions of living eyes (dynamic stare, non-uniformity compensation, spatial and temporal filtering). New and future technologies (nanotechnology, bio-organic electronics, bio-computing) are lighting the way to a new generation of Smart Sensors, extending the smartness from the space-time domain to spectroscopic functional multi-domain signal processing. The history and future prospects of Smart Sensors are reported.

  14. Adaptive Backoff Synchronization Techniques

    Science.gov (United States)

    1989-07-01

    [Garbled report excerpt; recoverable fragments:] Percentage of synchronization and non-synchronization references that cause invalidations in directory schemes with 2, 3, 4, 5, and 64 pointers... processors to arrive. The slight relative increase of synchronization overhead in all cases when going from two to five pointers is because synchronization... Massachusetts Institute of Technology, VLSI Memo No. 89-547, July 1989: Adaptive Backoff Synchronization Techniques, Anant

  15. Latest Results on Complex Plasmas with the PK-3 Plus Laboratory on Board the International Space Station

    Science.gov (United States)

    Schwabe, M.; Du, C.-R.; Huber, P.; Lipaev, A. M.; Molotkov, V. I.; Naumkin, V. N.; Zhdanov, S. K.; Zhukhovitskii, D. I.; Fortov, V. E.; Thomas, H. M.

    2018-03-01

    Complex plasmas are low temperature plasmas that contain microparticles in addition to ions, electrons, and neutral particles. The microparticles acquire high charges, interact with each other and can be considered as model particles for effects in classical condensed matter systems, such as crystallization and fluid dynamics. In contrast to atoms in ordinary systems, their movement can be traced on the most basic level, that of individual particles. In order to avoid disturbances caused by gravity, experiments on complex plasmas are often performed under microgravity conditions. The PK-3 Plus Laboratory was operated on board the International Space Station from 2006 - 2013. Its heart consisted of a capacitively coupled radio-frequency plasma chamber. Microparticles were inserted into the low-temperature plasma, forming large, homogeneous complex plasma clouds. Here, we review the results obtained with recent analyzes of PK-3 Plus data: We study the formation of crystallization fronts, as well as the microparticle motion in, and structure of crystalline complex plasmas. We investigate fluid effects such as wave transmission across an interface, and the development of the energy spectra during the onset of turbulent microparticle movement. We explore how abnormal particles move through, and how macroscopic spheres interact with the microparticle cloud. These examples demonstrate the versatility of the PK-3 Plus Laboratory.

  16. Application of Semiempirical Methods to Transition Metal Complexes: Fast Results but Hard-to-Predict Accuracy.

    KAUST Repository

    Minenkov, Yury

    2018-05-22

    A series of semiempirical PM6* and PM7 methods has been tested for its ability to reproduce the relative conformational energies of 27 realistic-size complexes of 16 different transition metals (TMs). An analysis of relative energies derived from single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between the semiempirical and DFT methods, indicating a fundamental difference in the potential energy surfaces (PES). To identify the origin of the deviation, we compared fully optimized PM7 and respective DFT conformers. For many complexes, the differences in PM7 and DFT conformational energies were confirmed, often manifesting themselves in false coordination of some atoms (H, O) to TMs and in chemical transformations/distortion of the coordination center geometry in the PM7 structures. Although geometry optimization with a fixed coordination center geometry leads to some improvement in conformational energies, the resulting accuracy is still too low to recommend the explored semiempirical methods for out-of-the-box conformational search/sampling: careful testing is always needed.

  17. Fast Discrete Fourier Transform Computations Using the Reduced Adder Graph Technique

    Directory of Open Access Journals (Sweden)

    Andrew G. Dempster

    2007-01-01

    Full Text Available It has recently been shown that the n-dimensional reduced adder graph (RAG-n) technique is beneficial for many DSP applications such as FIR and IIR filters, where multipliers can be grouped in multiplier blocks. This paper highlights the importance of the DFT and FFT as DSP objects and also explores how the RAG-n technique can be applied to these algorithms. This RAG-n DFT will be shown to be of low complexity and to possess an attractively regular VLSI data flow when implemented with the Rader DFT algorithm or the Bluestein chirp-z algorithm. ASIC synthesis data are provided and demonstrate the low complexity and high speed of the design when compared to other alternatives.
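
    The shift-and-add primitives that RAG-n multiplier blocks are built from can be illustrated with canonical signed-digit (CSD) recoding, sketched below. This shows only the building blocks (constant multiplication by shifts and adds/subtracts), not the RAG-n term-sharing optimization itself.

```python
def csd_digits(k):
    """Canonical signed-digit recoding of a non-negative integer: digits in
    {-1, 0, +1}, least-significant first, minimizing nonzero terms."""
    digits = []
    while k:
        if k & 1:
            d = 2 - (k & 3)        # +1 if k % 4 == 1, else -1
            digits.append(d)
            k -= d
        else:
            digits.append(0)
        k >>= 1
    return digits

def multiply_by_constant(x, k):
    """Multiply x by constant k using only shifts and adds/subtracts,
    the primitive operations a RAG-style multiplier block is built from."""
    return sum(d * (x << i) for i, d in enumerate(csd_digits(k)))
```

For example, k = 7 recodes as 8 - 1, one subtraction instead of two additions; RAG-n goes further by sharing such partial terms across all the constants of a filter or transform.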

  18. Fast Discrete Fourier Transform Computations Using the Reduced Adder Graph Technique

    Directory of Open Access Journals (Sweden)

    Dempster Andrew G

    2007-01-01

    Full Text Available It has recently been shown that the n-dimensional reduced adder graph (RAG-n) technique is beneficial for many DSP applications such as FIR and IIR filters, where multipliers can be grouped in multiplier blocks. This paper highlights the importance of the DFT and FFT as DSP objects and also explores how the RAG-n technique can be applied to these algorithms. This RAG-n DFT will be shown to be of low complexity and to possess an attractively regular VLSI data flow when implemented with the Rader DFT algorithm or the Bluestein chirp-z algorithm. ASIC synthesis data are provided and demonstrate the low complexity and high speed of the design when compared to other alternatives.

  19. Analog-to-digital conversion

    CERN Document Server

    Pelgrom, Marcel J M

    2010-01-01

    The design of an analog-to-digital converter or digital-to-analog converter is one of the most fascinating tasks in micro-electronics. In a converter the analog world with all its intricacies meets the realm of the formal digital abstraction. Both disciplines must be understood for an optimum conversion solution. In a converter also system challenges meet technology opportunities. Modern systems rely on analog-to-digital converters as an essential part of the complex chain to access the physical world. And processors need the ultimate performance of digital-to-analog converters to present the results of their complex algorithms. The same progress in CMOS technology that enables these VLSI digital systems creates new challenges for analog-to-digital converters: lower signal swings, less power and variability issues. Last but not least, the analog-to-digital converter must follow the cost reduction trend. These changing boundary conditions require micro-electronics engineers to consider their design choices for...

  20. A high performance cost-effective digital complex correlator for an X-band polarimetry survey.

    Science.gov (United States)

    Bergano, Miguel; Rocha, Armando; Cupido, Luís; Barbosa, Domingos; Villela, Thyrso; Boas, José Vilas; Rocha, Graça; Smoot, George F

    2016-01-01

    The detailed knowledge of the Milky Way radio emission is important to characterize galactic foregrounds masking extragalactic and cosmological signals. The update of the global sky models describing radio emissions over a very large spectral band requires high sensitivity experiments capable of observing large sky areas with long integration times. Here, we present the design of a new 10 GHz (X-band) polarimeter digital back-end to map the polarization components of the galactic synchrotron radiation field of the Northern Hemisphere sky. The design follows the digital processing trends in radio astronomy and implements a large bandwidth (1 GHz) digital complex cross-correlator to extract the Stokes parameters of the incoming synchrotron radiation field. The hardware constraints cover the implemented VLSI hardware description language code and the preliminary results. The implementation is based on the simultaneous digitized acquisition of the Cartesian components of the two linear receiver polarization channels. The design strategy involves a double data rate acquisition of the ADC interleaved parallel bus, and field programmable gate array device programming at the register transfer mode. The digital core of the back-end is capable of processing 32 Gbps and is built around an Altera field programmable gate array clocked at 250 MHz, 1 GSps analog to digital converters and a clock generator. The control of the field programmable gate array internal signal delays and a convenient use of its phase locked loops provide the timing requirements to achieve the target bandwidths and sensitivity. This solution is convenient for radio astronomy experiments requiring large bandwidth, high functionality, high volume availability and low cost. Of particular interest, this correlator was developed for the Galactic Emission Mapping project and is suitable for large sky area polarization continuum surveys. The solutions may also be adapted to be used at signal processing
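
    The complex cross-correlation at the heart of such a polarimeter can be sketched with textbook Stokes-parameter estimators. Sign and normalization conventions vary between instruments and are assumptions here, not taken from the paper.

```python
def stokes_from_samples(ex, ey):
    """Estimate Stokes parameters from complex voltage samples of two
    orthogonal linear polarization channels (standard definitions;
    conventions differ between instruments).

    ex, ey: lists of complex samples from the X and Y channels."""
    n = len(ex)
    xx = sum(abs(a) ** 2 for a in ex) / n                      # <Ex Ex*>
    yy = sum(abs(b) ** 2 for b in ey) / n                      # <Ey Ey*>
    xy = sum(a * b.conjugate() for a, b in zip(ex, ey)) / n    # complex cross-correlation
    I = xx + yy
    Q = xx - yy
    U = 2 * xy.real
    V = -2 * xy.imag
    return I, Q, U, V
```

The two auto-correlations need only power detectors; it is the complex cross term <Ex Ey*>, requiring both channels digitized coherently, that motivates the FPGA cross-correlator described above.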

  1. Neuromorphic VLSI vision system for real-time texture segregation.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2008-10-01

    The visual system of the brain can perceive an external scene in real-time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina in order to produce Gabor-like receptive fields that are tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real-time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system, and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining simple and complex cells.
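
    The two-orthogonal-filter scheme described above can be caricatured in a few lines: filter with two orthogonal oriented kernels, square the responses into orientation energies, and label each location by the larger energy. Simple edge kernels stand in for the chip's Gabor-like receptive fields; this is an illustration, not the authors' model.

```python
def orientation_energy(image, kernel):
    """'Valid' correlation followed by squaring: a crude orientation-energy
    measure standing in for a Gabor-filter response."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for r in range(h - kh + 1):
        row = []
        for c in range(w - kw + 1):
            acc = sum(kernel[i][j] * image[r + i][c + j]
                      for i in range(kh) for j in range(kw))
            row.append(acc * acc)
        out.append(row)
    return out

def texture_orientation_map(image):
    """Label each location 'V' or 'H' by comparing the energies of two
    orthogonal oriented filters (vertical vs. horizontal edge kernels)."""
    vert = [[-1, 0, 1]] * 3                         # responds to vertical edges
    horz = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]     # responds to horizontal edges
    ev = orientation_energy(image, vert)
    eh = orientation_energy(image, horz)
    return [['V' if v > h else 'H' for v, h in zip(rv, rh)]
            for rv, rh in zip(ev, eh)]
```

In the multi-chip system the analog front end computes the oriented responses in parallel; only the final energy comparison and region labeling fall to the FPGA.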

2. A one-semester course in modeling of VLSI interconnections

    CERN Document Server

    Goel, Ashok

    2015-01-01

    Quantitative understanding of the parasitic capacitances and inductances, and the resultant propagation delays and crosstalk phenomena associated with the metallic interconnections on the very large scale integrated (VLSI) circuits has become extremely important for the optimum design of the state-of-the-art integrated circuits. More than 65 percent of the delays on the integrated circuit chip occur in the interconnections and not in the transistors on the chip. Mathematical techniques to model the parasitic capacitances, inductances, propagation delays, crosstalk noise, and electromigration-induced failure associated with the interconnections in the realistic high-density environment on a chip will be discussed. A One-Semester Course in Modeling of VLSI Interconnections also includes an overview of the future interconnection technologies for the nanotechnology circuits.
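A first-order illustration of why interconnect delay dominates: the classic Elmore delay of a uniform RC ladder grows quadratically with wire length. A minimal sketch (the segment values below are illustrative assumptions, not figures from the book):

```python
# Elmore delay of a uniform RC ladder: the wire is split into N segments,
# each with resistance r_seg and capacitance c_seg; the estimate is the
# sum over nodes of (total upstream resistance) * (node capacitance).
def elmore_delay(r_seg, c_seg, n_segments):
    """First-order delay estimate (seconds) for a wire driven at one end."""
    return sum(k * r_seg * c_seg for k in range(1, n_segments + 1))

# Closed form: r*c*N*(N+1)/2 -- quadratic in wire length, which is why
# long on-chip interconnects, not transistors, dominate path delay.
delay = elmore_delay(10.0, 1e-15, 100)  # 10 ohm, 1 fF per segment (illustrative)
```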

  3. Results of complex annual parasitological monitoring in the coastal area of Kola Bay

    Science.gov (United States)

    Kuklin, V. V.; Kuklina, M. M.; Kisova, N. E.; Maslich, M. A.

    2009-12-01

    The results of annual parasitological monitoring in the coastal area near the Abram-mys (Kola Bay, Barents Sea) are presented. The studies were performed in 2006-2007 and included complex examination of the intermediate hosts (mollusks and crustaceans) and definitive hosts (marine fish and birds) of the helminths. The biodiversity of the parasite fauna, seasonal dynamics, and functioning patterns of the parasite systems were investigated. The basic regularities in parasite circulation were assessed in relation to their life cycle strategies and the ecological features of the intermediate and definitive hosts. The factors affecting the success of parasite circulation in the coastal ecosystems were revealed through analysis of parasite biodiversity and abundance dynamics.

  4. Techniques for computing the discrete Fourier transform using the quadratic residue Fermat number systems

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1986-01-01

The complex integer multiplier and adder over the direct sum of two copies of a finite field developed by Cozzens and Finkelstein (1985) is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplication over the rings of integers modulo Fermat numbers can be performed by means of two integer multiplications, whereas complex integer multiplication normally requires three integer multiplications. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed in a systolic-array implementation of the DFT can be reduced substantially. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
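The three-multiplication baseline the abstract refers to is the well-known identity that a complex product needs only three real multiplications instead of the schoolbook four. A minimal sketch with plain integers (not the modular Fermat-ring arithmetic of the paper):

```python
# Three-multiplication complex product: (a + bi)(c + di) via the classic
# identity -- three real multiplies plus five additions/subtractions,
# versus four multiplies and two additions for the schoolbook form.
def complex_mul_3(a, b, c, d):
    """Return (real, imag) of (a + bi) * (c + di) with 3 multiplications."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2

re_part, im_part = complex_mul_3(3, 4, 5, 6)  # (3+4i)(5+6i) = -9 + 38i
```

The paper's contribution is to drop this from three multiplications to two by working in the direct sum of rings of integers modulo Fermat numbers rather than over the ordinary complex integers.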

  5. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof) to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models each differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets but to date, no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and will assist in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes) which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run

  6. Percutaneous debridement of complex pyogenic liver abscesses: technique and results

    International Nuclear Information System (INIS)

    Morettin, L.B.

    1992-01-01

The author's approach and technique in the treatment of complex liver abscesses that persisted or recurred following percutaneous drainage are described. Six patients were treated by percutaneous debridement utilizing an instrument specifically constructed for that purpose. Four patients were chronically ill but stable. Two patients were septic, hypotensive and considered life-threatened. All patients had primary pyogenic abscesses. Four had demonstrated mixed bacterial flora consisting of E. coli, Klebsiella, Proteus and gram-positive cocci, and two were caused by E. coli only. In all cases a contrast-enhanced CT of the abdomen revealed multiloculated or septated abscesses containing large central debris and a peripheral shell or halo of compromised hepatic parenchyma. Debridement was successful in all cases, resulting in complete healing in 4-12 days. Follow-up for periods of between 1 and 4.5 years revealed no recurrences. Three cases of infected tumors of the liver were referred for treatment. CT findings in these cases demonstrated a well-developed external capsule and internal septations; the absence of a surrounding halo of compromised parenchyma distinguishes them from primary abscesses. This preliminary experience allows the conclusion that percutaneous debridement of pyogenic liver abscesses can be safely performed, can be curative in selected patients with chronic abscesses and may be life-saving in critically ill and life-threatened patients. (orig.)

  7. Selective Cerebro-Myocardial Perfusion in Complex Neonatal Aortic Arch Pathology: Midterm Results.

    Science.gov (United States)

    Hoxha, Stiljan; Abbasciano, Riccardo Giuseppe; Sandrini, Camilla; Rossetti, Lucia; Menon, Tiziano; Barozzi, Luca; Linardi, Daniele; Rungatscher, Alessio; Faggian, Giuseppe; Luciani, Giovanni Battista

    2018-04-01

Aortic arch repair in newborns and infants has traditionally been accomplished using a period of deep hypothermic circulatory arrest. To reduce neurologic and cardiac dysfunction related to circulatory arrest and myocardial ischemia during complex aortic arch surgery, an alternative and novel strategy for cerebro-myocardial protection was recently developed, where regional low-flow perfusion is combined with controlled and independent coronary perfusion. The aim of the present retrospective study was to assess short-term and mid-term results of selective and independent cerebro-myocardial perfusion in neonatal aortic arch surgery. From April 2008 to August 2015, 28 consecutive neonates underwent aortic arch surgery under cerebro-myocardial perfusion. There were 17 males and 11 females, with median age of 15 days (3-30 days) and median body weight of 3 kg (1.6-4.2 kg), 9 (32%) of whom with low body weight. Cerebro-myocardial perfusion time was 30 ± 11 min (15-69 min). Renal dysfunction requiring a period of peritoneal dialysis was observed in 10 (36%) patients, while liver dysfunction was noted only in 3 (11%). There were three (11%) early and two late deaths during a median follow-up of 2.9 years (range 6 months-7.7 years), with an actuarial survival of 82% at 7 years. At latest follow-up, no patient showed signs of cardiac or neurologic dysfunction. The present experience shows that a strategy of selective and independent cerebro-myocardial perfusion is safe, versatile, and feasible in high-risk neonates with complex congenital arch pathology. Encouraging outcomes were noted in terms of cardiac and neurological function, with limited end-organ morbidity. © 2018 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  8. The results of complex radiation-hygienic survey of the reference settlements in Mogilev region

    International Nuclear Information System (INIS)

    Ageeva, T.N.; Chegerova, T.I.; Shchur, A.V.; Shapsheeva, T.P.; Lipnitskij, L.V.

    2011-01-01

The results of a complex radiation-hygienic survey of the reference settlements located on the radioactively contaminated territory are presented in the article. The four-year dynamics of the internal exposure doses of the reference settlements' inhabitants, and their relationship with the 137Cs content in foods consumed by the population, are shown. It was ascertained that there are still isolated individuals with high internal radiation doses among the surveyed population, who have a significant influence on the average annual radiation dose for the inhabitants and on the dose to its critical group. The individual external exposure doses of the inhabitants and the results of measuring the gamma radiation dose rate in the settlements have been analyzed. The authors conclude that the techniques for measuring external doses need adjustment. (authors)

  9. Implantable neurotechnologies: bidirectional neural interfaces--applications and VLSI circuit implementations.

    Science.gov (United States)

    Greenwald, Elliot; Masters, Matthew R; Thakor, Nitish V

    2016-01-01

    A bidirectional neural interface is a device that transfers information into and out of the nervous system. This class of devices has potential to improve treatment and therapy in several patient populations. Progress in very large-scale integration has advanced the design of complex integrated circuits. System-on-chip devices are capable of recording neural electrical activity and altering natural activity with electrical stimulation. Often, these devices include wireless powering and telemetry functions. This review presents the state of the art of bidirectional circuits as applied to neuroprosthetic, neurorepair, and neurotherapeutic systems.

  10. CT demonstration of chicken trachea resulting from complete cartilaginous rings of the trachea in ring-sling complex

    International Nuclear Information System (INIS)

    Calcagni, Giulio; Bonnet, Damien; Sidi, Daniel; Brunelle, Francis; Vouhe, Pascal; Ou, Phalla

    2008-01-01

We report a 10-month-old infant who presented with tetralogy of Fallot and respiratory disease in whom the suspicion of a ring-sling complex was confirmed by high-resolution CT. CT demonstrated the typical association of left pulmonary artery sling and the "chicken trachea" resulting from complete cartilaginous rings of the trachea. (orig.)

  11. CT demonstration of chicken trachea resulting from complete cartilaginous rings of the trachea in ring-sling complex

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Giulio; Bonnet, Damien; Sidi, Daniel [University Paris Descartes, Department of Paediatric Cardiology, Hopital Necker-Enfants Malades, AP-HP, Paris (France); Brunelle, Francis [University Paris Descartes, Department of Paediatric Radiology, Hopital Necker-Enfants Malades, AP-HP, Paris Cedex 15 (France); Vouhe, Pascal [University Paris Descartes, Department of Paediatric Cardiovascular Surgery, Hopital Necker-Enfants Malades, AP-HP, Paris (France); Ou, Phalla [University Paris Descartes, Department of Paediatric Cardiology, Hopital Necker-Enfants Malades, AP-HP, Paris (France); University Paris Descartes, Department of Paediatric Radiology, Hopital Necker-Enfants Malades, AP-HP, Paris Cedex 15 (France)

    2008-07-15

We report a 10-month-old infant who presented with tetralogy of Fallot and respiratory disease in whom the suspicion of a ring-sling complex was confirmed by high-resolution CT. CT demonstrated the typical association of left pulmonary artery sling and the "chicken trachea" resulting from complete cartilaginous rings of the trachea. (orig.)

  12. Transition Complexity of Incomplete DFAs

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2010-08-01

In this paper, we consider the transition complexity of regular languages based on incomplete deterministic finite automata. A number of results on Boolean operations have been obtained. It is shown that the transition complexity results for union and complementation are very different from the state complexity results for the same operations. However, for intersection, the transition complexity result is similar to that of state complexity.

  13. A Digital Liquid State Machine With Biologically Inspired Learning and Its Application to Speech Recognition.

    Science.gov (United States)

    Zhang, Yong; Li, Peng; Jin, Yingyezhe; Choe, Yoonsuck

    2015-11-01

    This paper presents a bioinspired digital liquid-state machine (LSM) for low-power very-large-scale-integration (VLSI)-based machine learning applications. To the best of the authors' knowledge, this is the first work that employs a bioinspired spike-based learning algorithm for the LSM. With the proposed online learning, the LSM extracts information from input patterns on the fly without needing intermediate data storage as required in offline learning methods such as ridge regression. The proposed learning rule is local such that each synaptic weight update is based only upon the firing activities of the corresponding presynaptic and postsynaptic neurons without incurring global communications across the neural network. Compared with the backpropagation-based learning, the locality of computation in the proposed approach lends itself to efficient parallel VLSI implementation. We use subsets of the TI46 speech corpus to benchmark the bioinspired digital LSM. To reduce the complexity of the spiking neural network model without performance degradation for speech recognition, we study the impacts of synaptic models on the fading memory of the reservoir and hence the network performance. Moreover, we examine the tradeoffs between synaptic weight resolution, reservoir size, and recognition performance and present techniques to further reduce the overhead of hardware implementation. Our simulation results show that in terms of isolated word recognition evaluated using the TI46 speech corpus, the proposed digital LSM rivals the state-of-the-art hidden Markov-model-based recognizer Sphinx-4 and outperforms all other reported recognizers including the ones that are based upon the LSM or neural networks.
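The key property the abstract emphasizes is locality: each synaptic update reads only the firing of its own pre- and postsynaptic neurons, with no global error signal. The paper's exact rule is not given in the abstract; the Hebbian-style sketch below is a hypothetical stand-in that illustrates only the locality property:

```python
# Hypothetical local learning rule (illustrative, not the paper's exact
# algorithm): each weight w[i][j] is updated using only the spikes of its
# own presynaptic neuron j and postsynaptic neuron i. No information
# crosses the network, which is what makes such rules cheap to lay out
# in parallel VLSI compared with backpropagation.
def local_update(w, pre, post, lr=0.1):
    """In-place update; pre/post are 0/1 spike indicators per neuron."""
    for i, spiked_post in enumerate(post):
        if spiked_post:  # update only synapses onto neurons that fired
            for j, spiked_pre in enumerate(pre):
                # potentiate coincident firing, depress post-alone firing
                w[i][j] += lr if spiked_pre else -lr
    return w

w = [[0.0, 0.0]]
local_update(w, pre=[1, 0], post=[1])  # coincident synapse up, silent one down
```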

  14. Some Results on the Graph Theory for Complex Neutrosophic Sets

    Directory of Open Access Journals (Sweden)

    Shio Gai Quek

    2018-05-01

Fuzzy graph theory plays an important role in the study of the symmetry and asymmetry properties of fuzzy graphs. With this in mind, in this paper, we introduce new neutrosophic graphs called complex neutrosophic graphs of type 1 (abbr. CNG1). We then present a matrix representation for it and study some properties of this new concept. The concept of CNG1 is an extension of the generalized fuzzy graphs of type 1 (GFG1) and generalized single-valued neutrosophic graphs of type 1 (GSVNG1). The utility of the CNG1 introduced here is then illustrated by applying it to a multi-attribute decision-making problem related to Internet server selection.

  15. Design and implementation of interface units for high speed fiber optics local area networks and broadband integrated services digital networks

    Science.gov (United States)

    Tobagi, Fouad A.; Dalgic, Ismail; Pang, Joseph

    1990-01-01

The design and implementation of interface units for high-speed Fiber Optic Local Area Networks and Broadband Integrated Services Digital Networks are discussed. In recent years, a number of network adapters designed to support high-speed communication have emerged. The approach taken to the design of a high-speed network interface unit was to implement packet processing functions in hardware, using VLSI technology. The VLSI hardware implementation of a buffer management unit, which is required in such architectures, is described.

  16. Transformational VLSI Design

    DEFF Research Database (Denmark)

    Rasmussen, Ole Steen

    constructed. It contains a semantical embedding of Ruby in Zermelo-Fraenkel set theory (ZF) implemented in the Isabelle theorem prover. A small subset of Ruby, called Pure Ruby, is embedded as a conservative extension of ZF and characterised by an inductive definition. Many useful structures used...

  17. Princeton VLSI Project.

    Science.gov (United States)

    1983-01-01

for otherwise, since sc = xs2, we would ELIE system. This algorithm also applies to SL) systems have been able to compute zec without looking at block... Prof. Peter R. Cappello of the Computer Science Department, University of California, Santa Barbara, California. Some of the work... multiple processors will not be as simple as the MMM ones. Acknowledgments. Several useful ideas and suggestions were made by Jim Gray, Peter Honeyman

  18. The Results of Complex Selective Logging in Beech-Hornbeam Tree Stands of the Greater Caucasus in Azerbaijan

    Directory of Open Access Journals (Sweden)

    A. B. Yakhyaev

    2014-06-01

The results of complex selective logging conducted in beech-hornbeam tree stands on the northeastern slope of the Greater Caucasus are analyzed in the paper. Experiments were carried out in two forestry districts, involving beech stands comprising 2–3 units, on 30° slopes, in beech forests of the woodruff, fescue and forb forest types. It has been revealed that for recovering the main tree species, as well as for increasing the productivity and sustainability of the beech-hornbeam tree stands spread over the northern exposures, 2–3 repetitions of complex selective logging are recommended. To increase the share of beech in the stand composition to 6–8 units in young stands, and to 4–6 units on slopes of southern exposure, 3–4 thinning operations are recommended, raising the beech share to 4–5 units in the upper story and in the undergrowth.

  19. DESIGN OF LOW EPI AND HIGH THROUGHPUT CORDIC CELL TO IMPROVE THE PERFORMANCE OF MOBILE ROBOT

    Directory of Open Access Journals (Sweden)

    P. VELRAJKUMAR

    2014-04-01

This paper mainly focuses on a pass-logic-based design that yields a low Energy Per Instruction (EPI) and high-throughput COordinate Rotation DIgital Computer (CORDIC) cell for robotic exploration applications. The basic components of the CORDIC cell, namely the register, multiplexer and proposed adder, are designed using pass transistor logic (PTL). The proposed adder is implemented in a bit-parallel iterative CORDIC circuit, which is designed using the DSCH2 VLSI CAD tool; the layouts are generated by the Microwind 3 VLSI CAD tool. The propagation delay, area and power dissipation are calculated from the simulated results for the proposed adder-based CORDIC cell. The EPI, throughput and effect of temperature are calculated from the generated layout. The output parameters of the generated layout are analysed using the BSIM4 advanced analyzer. The simulated result of the proposed adder-based CORDIC circuit is compared with other adder-based CORDIC circuits. From the analysis of these simulated results, it was found that the proposed adder-based CORDIC circuit dissipates less power and offers faster response, lower EPI and higher throughput.
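The iteration behind any CORDIC cell reduces vector rotation to shift-and-add steps. A minimal software model of rotation-mode CORDIC (an illustrative floating-point sketch; the actual cell uses fixed-point shift-and-add hardware):

```python
import math

# Rotation-mode CORDIC: approximate (cos t, sin t) for |t| < pi/2 using
# only additions and halvings (each halving is a wired shift in hardware).
def cordic_sin_cos(theta, iterations=32):
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Pre-apply the constant CORDIC gain so the result is unit-length.
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    p = 1.0  # holds 2^-i across iterations
    for a in angles:
        if z >= 0.0:  # micro-rotate toward the residual angle's sign
            x, y, z = x - y * p, y + x * p, z - a
        else:
            x, y, z = x + y * p, y - x * p, z + a
        p /= 2.0
    return x, y  # (cos(theta), sin(theta))
```

Each iteration fixes one sign decision and one shifted add, which is why the cell's critical path is dominated by its adder, the component the paper redesigns in pass transistor logic.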

  20. Complex Neutrosophic Subsemigroups and Ideals

    Directory of Open Access Journals (Sweden)

    Muhammad Gulistan

    2018-01-01

In this article we study the idea of complex neutrosophic subsemigroups. We define the Cartesian product of complex neutrosophic subsemigroups, give some examples and study some of the related results. We also define complex neutrosophic (left, right, interior) ideals in semigroups. Furthermore, we introduce the concepts of the characteristic function of complex neutrosophic sets and the direct product of complex neutrosophic sets, and prove some results on them.

  1. Anticancer Activity of Polyoxometalate-Bisphosphonate Complexes: Synthesis, Characterization, In Vitro and In Vivo Results.

    Science.gov (United States)

    Boulmier, Amandine; Feng, Xinxin; Oms, Olivier; Mialane, Pierre; Rivière, Eric; Shin, Christopher J; Yao, Jiaqi; Kubo, Tadahiko; Furuta, Taisuke; Oldfield, Eric; Dolbecq, Anne

    2017-07-03

We synthesized a series of polyoxometalate-bisphosphonate complexes containing Mo(VI)O6 octahedra, zoledronate, or an N-alkyl (n-C6 or n-C8) zoledronate analogue, and in two cases, Mn as a heterometal. Mo6L2 (L = Zol, ZolC6, ZolC8) and Mo4L2Mn (L = Zol, ZolC8) were characterized by using single-crystal X-ray crystallography and/or IR spectroscopy, elemental and energy-dispersive X-ray analysis, and 31P NMR. We found promising activity against human nonsmall cell lung cancer (NCI-H460) cells with IC50 values for growth inhibition of ∼5 μM per bisphosphonate ligand. The effects of bisphosphonate complexation on IC50 decreased with increasing bisphosphonate chain length: C0 ≈ 6.1×, C6 ≈ 3.4×, and C8 ≈ 1.1×. We then determined the activity of one of the most potent compounds in the series, Mo4Zol2Mn(III), against SK-ES-1 sarcoma cells in a mouse xenograft system, finding a ∼5× decrease in tumor volume relative to the parent compound zoledronate at the same compound dosing (5 μg/mouse). Overall, the results are of interest since we show for the first time that heteropolyoxomolybdate-bisphosphonate hybrids kill tumor cells in vitro and significantly decrease tumor growth in vivo, opening up new possibilities for targeting both Ras- as well as epidermal growth factor receptor-driven cancers.

  2. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    Science.gov (United States)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years, and this trend may continue for the near future. However, it is a well-known fact that there are major obstacles, i.e., the physical limitation of feature size reduction and the ever increasing cost of foundries, that would prevent the long-term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature-size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum-dot-based computing, DNA-based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum-dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum-dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11-10^12 per cm^2), low power consumption (no transfer of current), and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single-layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA

  3. Impact of Cognitive Abilities and Prior Knowledge on Complex Problem Solving Performance – Empirical Results and a Plea for Ecologically Valid Microworlds

    Directory of Open Access Journals (Sweden)

    Heinz-Martin Süß

    2018-05-01

The original aim of complex problem solving (CPS) research was to bring the cognitive demands of complex real-life problems into the lab in order to investigate problem solving behavior and performance under controlled conditions. Up until now, the validity of psychometric intelligence constructs has been scrutinized with regard to its importance for CPS performance. At the same time, different CPS measurement approaches competing for the title of the best way to assess CPS have been developed. In the first part of the paper, we investigate the predictability of CPS performance on the basis of the Berlin Intelligence Structure Model and Cattell’s investment theory as well as an elaborated knowledge taxonomy. In the first study, 137 students managed a simulated shirt factory (Tailorshop; i.e., a complex real life-oriented system) twice, while in the second study, 152 students completed a forestry scenario (FSYS; i.e., a complex artificial world system). The results indicate that reasoning – specifically numerical reasoning (Studies 1 and 2) and figural reasoning (Study 2) – are the only relevant predictors among the intelligence constructs. We discuss the results with reference to the Brunswik symmetry principle. Path models suggest that reasoning and prior knowledge influence problem solving performance in the Tailorshop scenario mainly indirectly. In addition, different types of system-specific knowledge independently contribute to predicting CPS performance. The results of Study 2 indicate that working memory capacity, assessed as an additional predictor, has no incremental validity beyond reasoning. We conclude that (1) cognitive abilities and prior knowledge are substantial predictors of CPS performance, and (2) in contrast to former and recent interpretations, there is insufficient evidence to consider CPS a unique ability construct. In the second part of the paper, we discuss our results in light of recent CPS research, which predominantly

  4. Impact of Cognitive Abilities and Prior Knowledge on Complex Problem Solving Performance – Empirical Results and a Plea for Ecologically Valid Microworlds

    Science.gov (United States)

    Süß, Heinz-Martin; Kretzschmar, André

    2018-01-01

    The original aim of complex problem solving (CPS) research was to bring the cognitive demands of complex real-life problems into the lab in order to investigate problem solving behavior and performance under controlled conditions. Up until now, the validity of psychometric intelligence constructs has been scrutinized with regard to its importance for CPS performance. At the same time, different CPS measurement approaches competing for the title of the best way to assess CPS have been developed. In the first part of the paper, we investigate the predictability of CPS performance on the basis of the Berlin Intelligence Structure Model and Cattell’s investment theory as well as an elaborated knowledge taxonomy. In the first study, 137 students managed a simulated shirt factory (Tailorshop; i.e., a complex real life-oriented system) twice, while in the second study, 152 students completed a forestry scenario (FSYS; i.e., a complex artificial world system). The results indicate that reasoning – specifically numerical reasoning (Studies 1 and 2) and figural reasoning (Study 2) – are the only relevant predictors among the intelligence constructs. We discuss the results with reference to the Brunswik symmetry principle. Path models suggest that reasoning and prior knowledge influence problem solving performance in the Tailorshop scenario mainly indirectly. In addition, different types of system-specific knowledge independently contribute to predicting CPS performance. The results of Study 2 indicate that working memory capacity, assessed as an additional predictor, has no incremental validity beyond reasoning. We conclude that (1) cognitive abilities and prior knowledge are substantial predictors of CPS performance, and (2) in contrast to former and recent interpretations, there is insufficient evidence to consider CPS a unique ability construct. In the second part of the paper, we discuss our results in light of recent CPS research, which predominantly utilizes the

  5. Systolic trees and systolic language recognition by tree automata

    Energy Technology Data Exchange (ETDEWEB)

    Steinby, M

    1983-01-01

    K. Culik II, J. Gruska, A. Salomaa and D. Wood have studied the language recognition capabilities of certain types of systolically operating networks of processors (see research reports Cs-81-32, Cs-81-36 and Cs-82-01, Univ. of Waterloo, Ontario, Canada). In this paper, their model for systolic VLSI trees is formalised in terms of standard tree automaton theory, and the way in which some known facts about recognisable forests and tree transductions can be applied in VLSI tree theory is demonstrated. 13 references.
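The recognition device used in this formalization, a deterministic bottom-up tree automaton, assigns states from the leaves toward the root, which is also the direction data flows through a systolic tree of processors. A hypothetical toy instance (the Boolean-expression encoding below is my own illustration, not the paper's model):

```python
# Bottom-up tree-automaton recognition over binary trees encoded as nested
# tuples (label, left, right); leaves are bare symbols. States propagate
# from leaves to the root, as in a systolic tree.
def run_bottom_up(tree, leaf_state, transition):
    if not isinstance(tree, tuple):
        return leaf_state(tree)
    label, left, right = tree
    return transition(label,
                      run_bottom_up(left, leaf_state, transition),
                      run_bottom_up(right, leaf_state, transition))

# Toy automaton: states are booleans; accept Boolean expression trees
# that evaluate to True.
leaf = lambda sym: sym == '1'
trans = lambda op, l, r: (l and r) if op == 'and' else (l or r)
accepted = run_bottom_up(('or', ('and', '1', '0'), '1'), leaf, trans)
```

In a hardware systolic tree each `transition` call would be one processor, and a new input tree could enter the array every cycle, pipelined behind the previous one.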

  6. Complex analysis and geometry

    CERN Document Server

    Silva, Alessandro

    1993-01-01

    The papers in this wide-ranging collection report on the results of investigations from a number of linked disciplines, including complex algebraic geometry, complex analytic geometry of manifolds and spaces, and complex differential geometry.

  7. On Measuring the Complexity of Networks: Kolmogorov Complexity versus Entropy

    Directory of Open Access Journals (Sweden)

    Mikołaj Morzy

    2017-01-01

One of the most popular methods of estimating the complexity of networks is to measure the entropy of network invariants, such as adjacency matrices or degree sequences. Unfortunately, entropy and all entropy-based information-theoretic measures have several vulnerabilities. These measures are neither independent of a particular representation of the network nor able to capture the properties of the generative process which produces the network. Instead, we advocate the use of algorithmic entropy as the basis of a complexity definition for networks. Algorithmic entropy (also known as Kolmogorov complexity, or K-complexity for short) evaluates the complexity of the description required for a lossless recreation of the network. This measure is not affected by a particular choice of network features and it does not depend on the method of network representation. We perform experiments on Shannon entropy and K-complexity for gradually evolving networks. The results of these experiments point to K-complexity as the more robust and reliable measure of network complexity. The original contribution of the paper includes the introduction of several new entropy-deceiving networks and the empirical comparison of entropy and K-complexity as fundamental quantities for constructing complexity measures for networks.
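The two quantities being compared can be made concrete with standard-library tools: Shannon entropy of a degree sequence, and compressed size as the usual computable stand-in for K-complexity (which is itself uncomputable). A minimal sketch; the ring graph is an illustrative example of mine, not one of the paper's networks:

```python
import math
import zlib
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (bits) of a network's degree distribution."""
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in Counter(degrees).values())

def k_complexity_proxy(adj):
    """Compressed size (bytes) of the 0/1 adjacency matrix: a common
    computable approximation of Kolmogorov complexity."""
    bits = ''.join(''.join(str(v) for v in row) for row in adj)
    return len(zlib.compress(bits.encode('ascii')))

# An 8-node ring: every degree is 2, so degree entropy is 0 bits -- an
# entropy-of-invariant measure calls it trivially simple, yet its full
# description still has nonzero compressed size.
ring = [[1 if (i - j) % 8 in (1, 7) else 0 for j in range(8)] for i in range(8)]
degs = [sum(row) for row in ring]
```

This also illustrates the representation-dependence the paper criticizes: the degree sequence and the adjacency matrix are different invariants of the same network and yield very different "complexities".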

  8. Structural plasticity: how intermetallics deform themselves in response to chemical pressure, and the complex structures that result.

    Science.gov (United States)

    Berns, Veronica M; Fredrickson, Daniel C

    2014-10-06

    Interfaces between periodic domains play a crucial role in the properties of metallic materials, as is vividly illustrated by the way in which the familiar malleability of many metals arises from the formation and migration of dislocations. In complex intermetallics, such interfaces can occur as an integral part of the ground-state crystal structure, rather than as defects, resulting in such marvels as the NaCd2 structure (whose giant cubic unit cell contains more than 1000 atoms). However, the sources of the periodic interfaces in intermetallics remain mysterious, unlike the dislocations in simple metals, which can be associated with the exertion of physical stresses. In this Article, we propose and explore the concept of structural plasticity, the hypothesis that interfaces in complex intermetallic structures similarly result from stresses, but ones that are inherent in a defect-free parent structure, rather than being externally applied. Using DFT-chemical pressure analysis, we show how the complex structures of Ca2Ag7 (Yb2Ag7 type), Ca14Cd51 (Gd14Ag51 type), and the 1/1 Tsai-type quasicrystal approximant CaCd6 (YCd6 type) can all be traced to large negative pressures around the Ca atoms of a common progenitor structure, the CaCu5 type with its simple hexagonal 6-atom unit cell. Two structural paths are found by which the compounds provide relief to the Ca atoms' negative pressures: a Ca-rich pathway, where lower coordination numbers are achieved through defects eliminating transition metal (TM) atoms from the structure; and a TM-rich path, along which the addition of spacer Cd atoms gives the Ca coordination environments greater independence from each other as they contract. The common origins of these structures in the presence of stresses within a single parent structure highlight the diverse paths by which intermetallics can cope with competing interactions, and the role that structural plasticity may play in navigating this diversity.

  9. Content-addressable read/write memories for image analysis

    Science.gov (United States)

    Snyder, W. E.; Savage, C. D.

    1982-01-01

    The commonly encountered image analysis problems of region labeling and clustering are found to be instances of a search-and-rename problem, which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, reducing the time required to a constant per operation. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general-purpose processing will be feasible.
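    A minimal software model of the search-and-rename primitive may help: in the hardware CAM, every cell compares its contents against a broadcast key and matching cells are rewritten simultaneously (constant time per operation); the sketch below only emulates that behavior, and the labeling routine is an illustrative use of the primitive, not the paper's exact algorithm.

```python
import numpy as np

def cam_rename(labels, old, new):
    """One CAM cycle: every cell compares its content to `old` in parallel
    and all matching cells are rewritten to `new` (constant time in hardware)."""
    labels[labels == old] = new
    return labels

def label_regions(img):
    """Region labeling of a binary image built from search-and-rename steps:
    seed each foreground pixel with a unique label, then merge labels of
    4-connected neighbors until no change occurs."""
    h, w = img.shape
    labels = np.where(img, np.arange(h * w).reshape(h, w) + 1, 0)
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if not labels[y, x]:
                    continue
                for dy, dx in ((0, 1), (1, 0)):   # right and down neighbors
                    ny, nx = y + dy, x + dx
                    if ny < h and nx < w and labels[ny, nx]:
                        a, b = labels[y, x], labels[ny, nx]
                        if a != b:
                            cam_rename(labels, max(a, b), min(a, b))
                            changed = True
    return labels

# Two separate regions end up with two distinct labels:
img = np.array([[1, 1, 0], [0, 0, 0], [0, 1, 1]])
print(label_regions(img))
```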

  10. Entropy coders of the H.264/AVC standard

    CERN Document Server

    Tian, Xiaohua; Lian, Yong

    2010-01-01

    This book presents a collection of algorithms and VLSI architectures of entropy (or statistical) codecs of recent video compression standards, with a focus on the H.264/AVC standard. For any visual data compression scheme, there exists a combination of two or all of the following three stages: spatial, temporal, and statistical compression. General readers are first introduced to the various algorithms of the statistical coders. The VLSI implementations are also reviewed and discussed. Readers with a limited hardware design background are also introduced to a design methodology starting from

  11. Cobalt(III) complex

    Indian Academy of Sciences (India)

    Administrator

    e, 40 µM complex, 10 hrs after dissolution; f, 40 µM complex, after an irradiation dose of 15 Gy. ... and H-atoms result in the reduction of Co(III) to Co(II). It is interesting to see, in a complex containing multiple ligands, what the fate is of the electron adduct species formed by electron addition. Reduction to Co(II) and intramolecular transfer ...

  12. General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 12 (2003), s. 2727-2778 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords : computational power * computational complexity * perceptrons * radial basis functions * spiking neurons * feedforward networks * reccurent networks * probabilistic computation * analog computation Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  13. Results from 10 Years of a CBT Pain Self-Management Outpatient Program for Complex Chronic Conditions

    Directory of Open Access Journals (Sweden)

    Kathryn A. Boschen

    2016-01-01

    Background. Traditional unimodal interventions may be insufficient for treating complex pain, as they do not address cognitive and behavioural contributors to pain. Cognitive Behavioural Therapy (CBT) and physical exercise (PE) are empirically supported treatments that can reduce pain and improve quality of life. Objectives. To examine the outcomes of a pain self-management outpatient program based on CBT and PE at a rehabilitation hospital in Toronto, Ontario. Methods. The pain management group (PMG) consisted of 20 sessions over 10 weeks. The intervention consisted of four components: education, cognitive behavioural skills, exercise, and self-management strategies. Outcome measures included the sensory, affective, and intensity dimensions of the pain experience, depression, anxiety, pain disability, active and passive coping style, and general health functioning. Results. From 2002 to 2011, 36 PMGs were run. In total, 311 patients entered the program and 214 completed it. Paired t-tests showed significant pre- to post-treatment improvements in all outcomes measured. Patient outcomes did not differ according to the number or type of diagnoses. Both before and after treatment, women reported more active coping than men. Discussion. The PMGs improved pain self-management for patients with complex pain. Future research should use a randomized controlled design to better understand the outcomes of PMGs.

  14. Ruthenium-bipyridine complexes bearing fullerene or carbon nanotubes: synthesis and impact of different carbon-based ligands on the resulting products.

    Science.gov (United States)

    Wu, Zhen-yi; Huang, Rong-bin; Xie, Su-yuan; Zheng, Lan-sun

    2011-09-07

    This paper discusses the synthesis of two carbon-based pyridine ligands, fullerene pyrrolidine pyridine (C(60)-py) and multi-walled carbon nanotube pyrrolidine pyridine (MWCNT-py), via 1,3-dipolar cycloaddition. The two complexes, C(60)-Ru and MWCNT-Ru, were synthesized by ligand substitution in the presence of NH(4)PF(6), with Ru(II)(bpy)(2)Cl(2) used as the reaction precursor. Both complexes were characterized by mass spectrometry (MS), elemental analysis, nuclear magnetic resonance (NMR) spectroscopy, infrared spectroscopy (IR), ultraviolet/visible (UV-VIS) spectrometry, Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA), and cyclic voltammetry (CV). The results showed that the mode of substitution of C(60)-py differs from that of MWCNT-py. The C(60)-py and an NH(3) replaced a Cl(-) and a bipyridine in Ru(II)(bpy)(2)Cl(2) to produce the five-coordinate complex [Ru(bpy)(NH(3))(C(60)-py)Cl]PF(6), whereas MWCNT-py replaced a Cl(-) to generate the six-coordinate complex [Ru(bpy)(2)(MWCNT-py)Cl]PF(6). The cyclic voltammetry study showed that the electron-withdrawing ability differs between C(60) and MWCNT, with C(60) showing a stronger electron-withdrawing effect than MWCNT. This journal is © The Royal Society of Chemistry 2011

  15. Rapid Industrial Prototyping and SoC Design of 3G/4G Wireless Systems Using an HLS Methodology

    Directory of Open Access Journals (Sweden)

    Andres Takach

    2006-07-01

    Many very-high-complexity signal processing algorithms are required in future wireless systems, posing tremendous challenges to real-time implementation. In this paper, we present our industrial rapid-prototyping experiences on 3G/4G wireless systems using advanced signal processing algorithms in MIMO-CDMA and MIMO-OFDM systems. Core system design issues are studied, and advanced receiver algorithms suitable for implementation are proposed for synchronization, MIMO equalization, and detection. We then present VLSI-oriented complexity reduction schemes and demonstrate how to integrate these high-complexity algorithms with an HLS-based methodology for extensive design-space exploration. This is achieved by shifting the main effort from hardware iterations to the algorithmic C/C++ fixed-point design. We also analyze the advantages and limitations of the methodology. Our industrial design experience demonstrates that the HLS methodology makes extensive architectural analysis possible in a short time frame, which significantly shortens the time to market for wireless systems.

  16. Rapid Industrial Prototyping and SoC Design of 3G/4G Wireless Systems Using an HLS Methodology

    Directory of Open Access Journals (Sweden)

    Cavallaro JosephR

    2006-01-01

    Many very-high-complexity signal processing algorithms are required in future wireless systems, posing tremendous challenges to real-time implementation. In this paper, we present our industrial rapid-prototyping experiences on 3G/4G wireless systems using advanced signal processing algorithms in MIMO-CDMA and MIMO-OFDM systems. Core system design issues are studied, and advanced receiver algorithms suitable for implementation are proposed for synchronization, MIMO equalization, and detection. We then present VLSI-oriented complexity reduction schemes and demonstrate how to integrate these high-complexity algorithms with an HLS-based methodology for extensive design-space exploration. This is achieved by shifting the main effort from hardware iterations to the algorithmic C/C++ fixed-point design. We also analyze the advantages and limitations of the methodology. Our industrial design experience demonstrates that the HLS methodology makes extensive architectural analysis possible in a short time frame, which significantly shortens the time to market for wireless systems.

  17. Complex saddle points and the sign problem in complex Langevin simulation

    International Nuclear Information System (INIS)

    Hayata, Tomoya; Hidaka, Yoshimasa; Tanizaki, Yuya

    2016-01-01

    We show that complex Langevin simulation converges to a wrong result within the semiclassical analysis, by relating it to the Lefschetz-thimble path integral, when the path-integral weight has different phases among dominant complex saddle points. Equilibrium solution of the complex Langevin equation forms local distributions around complex saddle points. Its ensemble average approximately becomes a direct sum of the average in each local distribution, where relative phases among them are dropped. We propose that by taking these phases into account through reweighting, we can solve the wrong convergence problem. However, this prescription may lead to a recurrence of the sign problem in the complex Langevin method for quantum many-body systems.
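    The wrong-convergence mechanism described above is easiest to appreciate against the baseline case where complex Langevin does work. The sketch below (illustrative parameters, not from the paper) simulates the Gaussian action S(z) = s z^2/2 with complex s, a standard benchmark for which the method is known to reproduce the exact result <z^2> = 1/s:

```python
import numpy as np

# Toy complex Langevin run for the Gaussian action S(z) = s*z**2/2 with a
# complex coefficient s -- a benchmark where the method converges correctly
# to <z^2> = 1/s (parameters are illustrative, not taken from the paper).
rng = np.random.default_rng(1)
s = 1.0 + 1.0j
dt, nsteps, burn = 0.01, 200_000, 20_000

z = 0.0 + 0.0j
samples = []
for step in range(nsteps):
    # complexified drift -dS/dz = -s*z, with REAL Gaussian noise of variance 2*dt
    z += -s * z * dt + np.sqrt(2.0 * dt) * rng.normal()
    if step >= burn:
        samples.append(z * z)

est = np.mean(samples)
print(est)   # close to the exact value 1/s = 0.5 - 0.5j
```

    For multimodal weights with several dominant complex saddle points, the same update samples each local distribution but loses the relative phases between them, which is precisely the failure mode the paper analyzes.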

  18. Complexity in practice: understanding primary care as a complex adaptive system

    Directory of Open Access Journals (Sweden)

    Beverley Ellis

    2010-06-01

    Conclusions The results are real-world exemplars of the emergent properties of complex adaptive systems. Improving clinical governance in primary care requires both complex social interactions and underpinning informatics. The socio-technical lessons learned from this research should inform future management approaches.

  19. VLSI systems energy management from a software perspective – A literature survey

    Directory of Open Access Journals (Sweden)

    Prasada Kumari K.S.

    2016-09-01

    The increasing demand for ultra-low-power electronic systems has motivated research in device technology and hardware design techniques. Experimental studies have proved that hardware innovations for power reduction are fully exploited only with the proper design of the upper-layer software. Moreover, software power and energy modelling and analysis, the first step towards energy reduction, is complex due to the inter- and intra-dependencies of processors, operating systems, application software, programming languages and compilers. The subject is vast; this paper aims to give researchers a consolidated view for arriving at solutions to power optimization problems from a software perspective. The review emphasizes that software design and implementation is to be viewed from the angle of system energy conservation rather than as an isolated process. After covering a global view of end-to-end software-based power reduction techniques, from micro sensor nodes to High Performance Computing systems, specific design aspects related to battery-powered embedded computing for mobile and portable systems are addressed in detail. The findings are consolidated into two major categories: those related to research directions and those related to existing industry practices. The emerging concept of Green Software, with specific focus on mainframe computing, is also discussed in brief. Empirical results on power saving are included wherever available. The paper concludes that low-energy systems can be realized only with close coordination between the hardware architect, software architect and system architect.

  20. Laboratory results of stress corrosion cracking of steam generator tubes in a complex environment - An update

    Energy Technology Data Exchange (ETDEWEB)

    Horner, Olivier; Pavageau, Ellen-Mary; Vaillant, Francois [EDF R and D, Materials and Mechanics of Components Department, 77818 Moret-sur-Loing (France); Bouvier, Odile de [EDF Nuclear Engineering Division, Centre d' Expertise et d' Inspection dans les Domaines de la Realisation et de l' Exploitation, 93206 Saint Denis (France)

    2004-07-01

    Stress corrosion cracking occurs in the flow-restricted areas on the secondary side of steam generator tubes of Pressurized Water Reactors (PWR), where water pollutants are likely to concentrate. Chemical analyses carried out during shutdowns gave some insight into the chemical composition of these areas, which has evolved over recent years (i.e. less sodium among the pollutants). It has been modeled in the laboratory by tests in two different typical environments: the sodium hydroxide and the sulfate environments. These models satisfactorily describe the secondary-side corrosion of steam generator tubes for old plant units. Furthermore, a third typical environment - the complex environment - which corresponds to an All Volatile Treatment (AVT) environment containing alumina, silica, phosphate and acetic acid, has recently been studied. This particular environment satisfactorily reproduces the composition of the deposits observed on the surface of the steam generator tubes as well as the degradation of the tubes. A review of the recent laboratory results obtained by considering the complex environment is presented here. Several tests have been carried out in order to study the initiation and propagation of secondary-side corrosion cracking for some selected materials in such an environment. 600 Thermally Treated (TT) alloy proves to be less sensitive to secondary-side corrosion cracking than 600 Mill Annealed (MA) alloy. Finally, the influence of some related factors, such as stress, temperature and environmental factors, is discussed. (authors)

  1. Laboratory results of stress corrosion cracking of steam generator tubes in a complex environment - An update

    International Nuclear Information System (INIS)

    Horner, Olivier; Pavageau, Ellen-Mary; Vaillant, Francois; Bouvier, Odile de

    2004-01-01

    Stress corrosion cracking occurs in the flow-restricted areas on the secondary side of steam generator tubes of Pressurized Water Reactors (PWR), where water pollutants are likely to concentrate. Chemical analyses carried out during shutdowns gave some insight into the chemical composition of these areas, which has evolved over recent years (i.e. less sodium among the pollutants). It has been modeled in the laboratory by tests in two different typical environments: the sodium hydroxide and the sulfate environments. These models satisfactorily describe the secondary-side corrosion of steam generator tubes for old plant units. Furthermore, a third typical environment - the complex environment - which corresponds to an All Volatile Treatment (AVT) environment containing alumina, silica, phosphate and acetic acid, has recently been studied. This particular environment satisfactorily reproduces the composition of the deposits observed on the surface of the steam generator tubes as well as the degradation of the tubes. A review of the recent laboratory results obtained by considering the complex environment is presented here. Several tests have been carried out in order to study the initiation and propagation of secondary-side corrosion cracking for some selected materials in such an environment. 600 Thermally Treated (TT) alloy proves to be less sensitive to secondary-side corrosion cracking than 600 Mill Annealed (MA) alloy. Finally, the influence of some related factors, such as stress, temperature and environmental factors, is discussed. (authors)

  2. Flip-flop design in nanometer CMOS from high speed to low energy

    CERN Document Server

    Alioto, Massimo; Palumbo, Gaetano

    2015-01-01

    This book provides a unified treatment of Flip-Flop design and selection in nanometer CMOS VLSI systems. The design aspects related to the energy-delay tradeoff in Flip-Flops are discussed, including their energy-optimal selection according to the targeted application, and the detailed circuit design in nanometer CMOS VLSI systems. Design strategies are derived in a coherent framework that includes explicitly nanometer effects, including leakage, layout parasitics and process/voltage/temperature variations, as main advances over the existing body of work in the field. The related design tradeoffs are explored in a wide range of applications and the related energy-performance targets. A wide range of existing and recently proposed Flip-Flop topologies are discussed. Theoretical foundations are provided to set the stage for the derivation of design guidelines, and emphasis is given on practical aspects and consequences of the presented results. Analytical models and derivations are introduced when needed to gai...

  3. Bio-Inspired Microsystem for Robust Genetic Assay Recognition

    Directory of Open Access Journals (Sweden)

    Jaw-Chyng Lue

    2008-01-01

    A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerant and suitable for analyzing the weak fluorescence patterns from a PCR-prepared dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential-logarithm microchip is designed to effectively compute the logarithm of the normalized input fluorescence signals. A posterior VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential-logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with an unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function, based on the supervised backpropagation (BP) algorithm, is proposed for robustly recognizing low-intensity patterns. Our results show that the trained new ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function.
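    A toy numeric model of the two stages may clarify the pipeline (function names and values are hypothetical; the actual chips implement these operations in analog VLSI):

```python
import numpy as np

def differential_log(signal, reference):
    """Model of the differential-logarithm stage: the log of each fluorescence
    channel relative to a reference channel (names are hypothetical)."""
    return np.log(signal) - np.log(reference)

def winner_take_all(activations):
    """Unsupervised WTA: only the most strongly activated unit fires."""
    out = np.zeros_like(activations)
    out[np.argmax(activations)] = 1.0
    return out

# Weak fluorescence pattern: the log stage compresses the dynamic range,
# so a faint but genuine peak (channel 1) still wins the WTA competition.
raw = np.array([1e-3, 5e-3, 1.2e-3])
ref = 1e-3
print(winner_take_all(differential_log(raw, ref)))
```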

  4. Adaptive control for a class of nonlinear complex dynamical systems with uncertain complex parameters and perturbations.

    Directory of Open Access Journals (Sweden)

    Jian Liu

    In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller that renders this type of CVCSs asymptotically stable and for selecting complex update laws to estimate the unknown complex parameters. In particular, by combining Lyapunov functions dependent on complex-valued vectors with the back-stepping technique, sufficient criteria for the stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results.

  5. Adaptive control for a class of nonlinear complex dynamical systems with uncertain complex parameters and perturbations.

    Science.gov (United States)

    Liu, Jian; Liu, Kexin; Liu, Shutang

    2017-01-01

    In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller that renders this type of CVCSs asymptotically stable and for selecting complex update laws to estimate the unknown complex parameters. In particular, by combining Lyapunov functions dependent on complex-valued vectors with the back-stepping technique, sufficient criteria for the stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results.
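    A much-reduced illustration of the idea (a scalar plant, not the paper's n-dimensional strict-feedback CVCSs; all gains and parameter values here are hypothetical): a Lyapunov-motivated adaptive law can stabilize dz/dt = a*z + u even though the complex parameter a is unknown to the controller.

```python
# Adaptive stabilization of a scalar complex-variable plant dz/dt = a*z + u
# with UNKNOWN complex a. Controller: u = -(a_hat + c)*z, with the
# Lyapunov-motivated update da_hat/dt = gamma*|z|^2 driving z -> 0.
a = 1.5 + 2.0j        # true (unknown to the controller) complex parameter
c, gamma = 1.0, 2.0   # damping and adaptation gains (illustrative values)
dt, steps = 5e-4, 60_000

z, a_hat = 1.0 + 1.0j, 0.0
for _ in range(steps):          # forward-Euler integration
    u = -(a_hat + c) * z
    z += (a * z + u) * dt
    a_hat += gamma * abs(z) ** 2 * dt

print(abs(z), a_hat)   # the state has been driven to (numerically) zero
```

    The estimate a_hat need not converge to the true a; it only grows until the closed loop (a - a_hat - c) has a negative real part, after which |z| decays and the adaptation freezes.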

  6. GaAs integrated circuits and heterojunction devices

    Science.gov (United States)

    Fowlis, Colin

    1986-06-01

    The state of the art of GaAs technology in the U.S. as it applies to digital and analog integrated circuits is examined. In a market projection, it is noted that whereas analog ICs now largely dominate the market, in 1994 they will amount to only 39 percent vs. 57 percent for digital ICs. The military segment of the market will remain the largest (42 percent in 1994 vs. 70 percent today). ICs using depletion-mode-only FETs can be constructed in various forms, the closest to production being BFL or buffered FET logic. Schottky diode FET logic - a lower power approach - can reach higher complexities and strong efforts are being made in this direction. Enhancement type devices appear essential to reach LSI and VLSI complexity, but process control is still very difficult; strong efforts are under way, both in the U.S. and in Japan. Heterojunction devices appear very promising, although structures are fairly complex, and special fabrication techniques, such as molecular beam epitaxy and MOCVD, are necessary. High-electron-mobility-transistor (HEMT) devices show significant performance advantages over MESFETs at low temperatures. Initial results of heterojunction bipolar transistor devices show promise for high speed A/D converter applications.

  7. Promising results after single-stage reconstruction of the nipple and areola complex

    DEFF Research Database (Denmark)

    Børsen-Koch, Mikkel; Bille, Camilla; Thomsen, Jørn B

    2013-01-01

    Introduction: Reconstruction of the nipple-areola complex (NAC) traditionally marks the end of breast reconstruction. Several different surgical techniques have been described, but most are staged procedures. This paper describes a simple single-stage approach. Material and Methods: We used...... reconstruction was 43 min. (30-50 min.). Conclusion: This simple single-stage NAC reconstruction seems beneficial for both patient and surgeon as it seems to be associated with faster reconstruction and reduced procedure-related time without compromising the aesthetic outcome or the morbidity associated...

  8. Complexity factors and prediction of performance

    International Nuclear Information System (INIS)

    Braarud, Per Oeyvind

    1998-03-01

    Understanding what makes a control room situation difficult to handle is important when studying operator performance, with respect both to prediction and to improvement of human performance. A factor-analytic approach identified eight factors from operators' answers to a 39-item questionnaire about the complexity of the operator's task in the control room. A Complexity Profiling Questionnaire was developed, based on the factor-analytic results from the operators' conception of complexity. The validity of the identified complexity factors was studied through prediction of crew performance and plant performance from ratings of the complexity of scenarios. The scenarios were rated by both process experts and the operators participating in the scenarios, using the Complexity Profiling Questionnaire. The process experts' complexity ratings predicted both crew performance and plant performance, while the operators' ratings predicted plant performance only. The results reported are from initial studies of complexity and imply a promising potential for further studies of the concept. The approach used in the study as well as the reported results are discussed. A chapter about the structure of the conception of complexity and a chapter about further research conclude the report. (author)

  9. Complex plume dynamics in the transition zone underneath the Hawaii hotspot: seismic imaging results

    Science.gov (United States)

    Cao, Q.; van der Hilst, R. D.; de Hoop, M. V.; Shim, S.

    2010-12-01

    In recent years, progress has been made in seismology to constrain the depth variations of the transition zone discontinuities, e.g. the 410 km and 660 km discontinuities, which can be used to constrain local temperature and chemistry profiles and hence to infer the existence and morphology of mantle plumes. Taking advantage of the abundance of natural earthquake sources in western Pacific subduction zones and the many seismograph stations in the Americas, we used a generalized Radon transform (GRT), a high-resolution inverse-scattering technique, of SS precursors to form 3-D images of the transition zone structures of a 30 degree by 40 degree area underneath Hawaii and the Hawaii-Emperor seamount chain. Rather than a simple mushroom-shaped plume, our seismic images suggest complex plume dynamics interacting with the transition zone phase transitions, especially at the 660 km discontinuity. A conspicuous uplift of the 660 discontinuity in a region 800 km in diameter is observed to the west of Hawaii. No corresponding localized depression of the 410 discontinuity is found. This lack of correlation between, and the difference in lateral length scale of, the topographies of the 410 and 660 km discontinuities is consistent with many geodynamical modeling results, in which a deep-mantle plume impinges on the transition zone, creating a pond of hot material underneath the endothermic phase change at 660 km depth, with secondary plumes connecting to the present-day hotspot at Earth's surface. This more complex plume dynamics suggests that the complicated mass transport process across the transition zone should be taken into account when we try to link the geochemical observations of Hawaiian basalt geochemistry at the Earth's surface to deep mantle domains. In addition to clear signals at 410 km, 520 km and 660 km depth, the data also reveal rich structures near 350 km depth and between 800-1000 km depth, which may be regional, laterally intermittent scatter interfaces

  10. The Results of Complex Research of GSS "SBIRS-Geo 2" Behavior in the Orbit

    Science.gov (United States)

    Sukhov, P. P.; Epishev, V. P.; Sukhov, K. P.; Karpenko, G. F.; Motrunich, I. I.

    2017-04-01

    The new generation of geosynchronous SBIRS satellites of the US Air Force early warning system (Satellite Early Warning System) replaced the previous DSP satellite series (Defense Support Program). Currently, from the territory of Ukraine, several GSS of the DSP series and one "SBIRS-Geo 2" are available for observation. During two years of observations, we have received and analyzed more than 30 light curves for the two satellites in the B, V, R photometric system. As a result of complex research, we propose a model of the orbital behavior of "SBIRS-Geo 2" compared with that of a DSP satellite. To monitor the entire surface of the Earth at a 15-16 s interval, including the polar regions, 4 SBIRS satellites located every 90 deg. along the equator in GEO orbit are sufficient. Since DSP satellites provide coverage of the Earth's surface up to 83 deg. latitude with a period of 50 s, 8 DSP satellites are required. All the conclusions were made based on an analysis of photometric and coordinate observations using simulation of the dynamics of their orbital behavior.

  11. Trauma to the nail complex

    Directory of Open Access Journals (Sweden)

    Jefferson Braga Silva

    2014-04-01

    OBJECTIVE: to analyze the results of surgical intervention to treat trauma of the nail complex. METHODS: we retrospectively reviewed a series of 94 consecutive patients with trauma of the nail complex who were treated between 2000 and 2009. In 42 patients, nail bed suturing was performed. In 27 patients, nail bed suturing was performed subsequent to osteosynthesis of the distal phalanx. In 15, immediate grafting was performed, and in 10, late-stage grafting of the nail bed. The growth, size and shape of the nail were evaluated in comparison with the contralateral finger. The results were obtained by summing scores and classifying them as good, fair or poor. RESULTS: the results were considered good particularly in the patients who underwent nail bed suturing or nail bed suturing with osteosynthesis of the distal phalanx. Patients who underwent immediate or late-stage nail grafting had poor results. CONCLUSION: trauma of the nail complex without loss of substance presented better results than did deferred treatment for reconstruction of the nail complex.

  12. Complexation of buffer constituents with neutral complexation agents: part II. Practical impact in capillary zone electrophoresis.

    Science.gov (United States)

    Beneš, Martin; Riesová, Martina; Svobodová, Jana; Tesařová, Eva; Dubský, Pavel; Gaš, Bohuslav

    2013-09-17

    This article elucidates the practical impact of the complexation of buffer constituents with complexation agents on electrophoretic results, namely, complexation constant determination, system peak development, and proper separation of analytes. Several common buffers selected based on the pH study in Part I of this paper series (Riesová, M.; Svobodová, J.; Tošner, Z.; Beneš, M.; Tesařová, E.; Gaš, B. Anal. Chem., 2013, DOI: 10.1021/ac4013804), e.g. CHES, MES, MOPS and Tricine, were used to demonstrate the behavior of such complex separation systems. We show that the value of a complexation constant determined in an interacting-buffer environment depends not only on the analyte and complexation agent but is also substantially affected by the type and concentration of the buffer constituents. As a result, the complexation parameters determined in interacting buffers cannot be regarded as thermodynamic ones and may provide misleading information about the strength of complexation of the compound of interest. We also demonstrate that the development of system peaks in interacting buffer systems differs significantly from the behavior known for noncomplexing systems, as the mobility of system peaks depends on the concentration and type of the neutral complexation agent. Finally, we show that the use of interacting buffers can completely ruin the results of electrophoretic separation, because the buffer properties change as a consequence of the buffer constituents' complexation. As a general conclusion, the interaction of buffer constituents with the complexation agent should always be considered in any method development procedure.
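    A minimal numeric sketch of why this matters, using the textbook 1:1 affinity-CE mobility model with illustrative values (not data from the paper): if a buffer constituent also binds the complexation agent, the analyte experiences a lower free-agent concentration, so a fit that ignores the buffer interaction returns a biased apparent complexation constant.

```python
# Standard 1:1 complexation model of affinity CE: the effective mobility
# interpolates between the free and complexed mobilities according to the
# FREE agent concentration (all values below are illustrative).
def mu_eff(K, c_free, mu_free, mu_complex):
    return (mu_free + K * c_free * mu_complex) / (1.0 + K * c_free)

K_analyte = 500.0          # analyte-agent binding constant, 1/M
mu_f, mu_c = 20.0, 5.0     # free / complexed mobilities, 1e-9 m^2/(V s)

# A buffer constituent (binding constant K_buf, concentration c_buf) also
# binds the agent, consuming part of it (crude mass-action estimate,
# assuming the buffer constituent is in excess):
c_total, K_buf, c_buf = 5e-3, 100.0, 10e-3
c_free = c_total / (1.0 + K_buf * c_buf)

print(mu_eff(K_analyte, c_total, mu_f, mu_c))  # ignoring buffer complexation
print(mu_eff(K_analyte, c_free, mu_f, mu_c))   # accounting for it
```

    The two printed mobilities differ noticeably, which is exactly the mechanism by which "complexation constants" fitted in interacting buffers fail to be thermodynamic quantities.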

  13. Innovation in a complex environment

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-11-01

    Objectives: The study objectives were, firstly, to establish the determinants of complexity and how these can be addressed from a design point of view in order to ensure innovation success and, secondly, to determine how this changes innovation forms and applications. Method: Two approaches were offered to deal with a complex environment – one allowing for complexity for organisational innovation and the other introducing reductionism to minimise complexity. These approaches were examined in a qualitative study involving case studies, open-ended interviews and content analysis between seven developing-economy (South African) organisations and seven developed-economy (US) organisations. Results: This study presented a proposed framework for organisational innovation in a complex environment versus a framework that minimises complexity. The comparative organisational analysis demonstrated the importance of initiating organisational innovation to address internal and external complexity, with the focus being on leadership actions, their selected operating models and the resultant organisational innovation designs, rather than on technological innovations. Conclusion: This study cautioned against the preference for technological innovation within organisations and suggested that alternative innovation forms (such as organisational and management innovation) be used to remain competitive in a complex environment.

  14. Detection and isolation of cell-derived microparticles are compromised by protein complexes resulting from shared biophysical parameters.

    Science.gov (United States)

    György, Bence; Módos, Károly; Pállinger, Eva; Pálóczi, Krisztina; Pásztói, Mária; Misják, Petra; Deli, Mária A; Sipos, Aron; Szalai, Anikó; Voszka, István; Polgár, Anna; Tóth, Kálmán; Csete, Mária; Nagy, György; Gay, Steffen; Falus, András; Kittel, Agnes; Buzás, Edit I

    2011-01-27

    Numerous diseases, recently reported to associate with elevated microvesicle/microparticle (MP) counts, have also long been known to be characterized by accelerated immune complex (IC) formation. The goal of this study was to investigate the potential overlap between parameters of protein complexes (e.g., ICs or avidin-biotin complexes) and MPs, which might perturb detection and/or isolation of MPs. In this work, after comprehensive characterization of MPs by electron microscopy, atomic force microscopy, dynamic light-scattering analysis, and flow cytometry, for the first time, we draw attention to the fact that protein complexes, especially insoluble ICs, overlap in biophysical properties (size, light scattering, and sedimentation) with MPs. This, in turn, affects MP quantification by flow cytometry and purification by differential centrifugation, especially in diseases in which IC formation is common, including not only autoimmune diseases, but also hematologic disorders, infections, and cancer. These data may necessitate reevaluation of certain published data on patient-derived MPs and contribute to correcting the clinical laboratory assessment of the presence and biologic functions of MPs in health and disease.

  15. A Low Cost Matching Motion Estimation Sensor Based on the NIOS II Microprocessor

    Directory of Open Access Journals (Sweden)

    Diego González

    2012-09-01

    Full Text Available This work presents the implementation of a matching-based motion estimation sensor on a Field Programmable Gate Array (FPGA and NIOS II microprocessor applying a C to Hardware (C2H acceleration paradigm. The design, which involves several matching algorithms, is mapped using Very Large Scale Integration (VLSI technology. These algorithms, as well as the hardware implementation, are presented here together with an extensive analysis of the resources needed and the throughput obtained. The developed low-cost system is practical for real-time throughput and reduced power consumption and is useful in robotic applications, such as tracking, navigation using an unmanned vehicle, or as part of a more complex system.
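
    The record above does not spell out the matching algorithms it maps to hardware; the classic cost function in this family is the sum of absolute differences (SAD) with an exhaustive full-search scan. The sketch below is a plain software model of that generic technique (block size, search radius and the demo frames are our assumptions, not the FPGA/NIOS II implementation):

```python
import random

def sad(block_a, block_b):
    """Sum of absolute differences: the classic block-matching cost."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def crop(frame, top, left, size):
    """Extract a size x size block anchored at (top, left)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def full_search(ref, cur, top, left, size=8, radius=4):
    """Exhaustive search for the motion vector of one block of `cur`:
    try every candidate offset within +/- radius pixels in `ref`."""
    target = crop(cur, top, left, size)
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > h or x + size > w:
                continue
            cost = sad(crop(ref, y, x, size), target)
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost

# Tiny demo: place the reference block from (9, 10) at (8, 8) in the
# current frame, i.e. a true motion vector of (dy, dx) = (1, 2).
random.seed(0)
ref = [[random.randrange(256) for _ in range(32)] for _ in range(32)]
cur = [[0] * 32 for _ in range(32)]
for r in range(8):
    cur[8 + r][8:16] = ref[9 + r][10:18]
mv, cost = full_search(ref, cur, 8, 8)
print(mv, cost)   # recovers motion vector (1, 2) with SAD 0
```

The nested exhaustive scan is exactly what maps well to parallel hardware; faster software variants (three-step, diamond search) trade optimality for fewer SAD evaluations.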

  16. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

    Ultrasonic imaging is the most widely used modality among modern imaging devices for medical diagnosis, and system performance has improved dramatically since the early 1990s due to rapid advances in DSP performance and VLSI technology that made it possible to employ more sophisticated algorithms. This paper describes 'mainstream' digital signal processing functions along with the associated implementation considerations in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing.

  17. Applications of the scalable coherent interface to data acquisition at LHC

    CERN Document Server

    Bogaerts, A; Divià, R; Müller, H; Parkman, C; Ponting, P J; Skaali, B; Midttun, G; Wormald, D; Wikne, J; Falciano, S; Cesaroni, F; Vinogradov, V I; Kristiansen, E H; Solberg, B; Guglielmi, A M; Worm, F H; Bovier, J; Davis, C; CERN. Geneva. Detector Research and Development Committee

    1991-01-01

    We propose to use the Scalable Coherent Interface (SCI) as a very high speed interconnect between LHC detector data buffers and farms of commercial trigger processors. Both the global second and third level trigger can be based on SCI as a reconfigurable and scalable system. SCI is a proposed IEEE standard which uses fast point-to-point links to provide computer-bus like services. It can connect a maximum of 65 536 nodes (memories or processors), providing data transfer rates of up to 1 Gbyte/s. Scalable data acquisition systems can be built using either simple SCI rings or complex switches. The interconnections may be flat cables, coaxial cables, or optical fibres. SCI protocols have been entirely implemented in VLSI, resulting in a significant simplification of data acquisition software. Novel SCI features allow efficient implementation of both data and processor driven readout architectures. In particular, a very efficient implementation of the third level trigger can be achieved by combining SCI's shared ...

  18. COMPLEX TRAINING: A BRIEF REVIEW

    Directory of Open Access Journals (Sweden)

    William P. Ebben

    2002-06-01

    Full Text Available The effectiveness of plyometric training is well supported by research. Complex training has gained popularity as a training strategy combining weight training and plyometric training. Anecdotal reports recommend training in this fashion in order to improve muscular power and athletic performance. Recently, several studies have examined complex training. Despite the fact that questions remain about the potential effectiveness and implementation of this type of training, results of recent studies are useful in guiding practitioners in the development and implementation of complex training programs. In some cases, research suggests that complex training has an acute ergogenic effect on upper body power, and the results of acute and chronic complex training include improved jumping performance. Improved performance may require three to four minutes of rest between the weight training and plyometric sets and the use of heavy weight training loads.

  19. Ligation of the intersphincteric fistula tract (LIFT): a minimally invasive procedure for complex anal fistula: two-year results of a prospective multicentric study.

    Science.gov (United States)

    Sileri, Pierpaolo; Giarratano, Gabriella; Franceschilli, Luana; Limura, Elsa; Perrone, Federico; Stazi, Alessandro; Toscana, Claudio; Gaspari, Achille Lucio

    2014-10-01

    The surgical management of anal fistulas is still a matter of discussion and no clear recommendations exist. The present study analyses the results of the ligation of the intersphincteric fistula tract (LIFT) technique in treating complex anal fistulas, in particular healing, fecal continence, and recurrence. Between October 2010 and February 2012, a total of 26 consecutive patients underwent LIFT. All patients had a primary complex anal fistula and preoperatively all underwent clinical examination, proctoscopy, transanal ultrasonography/magnetic resonance imaging, and were treated with the LIFT procedure. For the purpose of this study, fistulas were classified as complex if any of the following conditions were present: tract crossing more than 30% of the external sphincter, anterior fistula in a woman, recurrent fistula, or preexisting incontinence. Patients' postoperative complications, healing time, recurrence rate, and postoperative continence were recorded during follow-up. The minimum follow-up was 16 months. Five patients required delayed LIFT after previous seton. There were no surgical complications. Primary healing was achieved in 19 patients (73%). Seven patients (27%) had recurrence presenting between 4 and 8 weeks postoperatively and required further surgical treatment. Two of them (29%) had previous insertion of a seton. No patients reported any incontinence postoperatively and we did not observe postoperative continence worsening. In our experience, LIFT appears easy to perform, is safe with no surgical complication, has no risk of incontinence, and has a low recurrence rate. These results suggest that LIFT as a minimally invasive technique should be routinely considered for patients affected by complex anal fistula. © The Author(s) 2013.

  20. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    Science.gov (United States)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, has been developed. Our results show the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip has been designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip have been designed, fabricated and characterized.

  1. Shapes of interacting RNA complexes

    DEFF Research Database (Denmark)

    Fu, Benjamin Mingming; Reidys, Christian

    2014-01-01

    Shapes of interacting RNA complexes are studied using a filtration via their topological genus. A shape of an RNA complex is obtained by (iteratively) collapsing stacks and eliminating hairpin loops. This shape-projection preserves the topological core of the RNA complex, and for fixed topological genus there are only finitely many such shapes. Our main result is a new bijection that relates the shapes of RNA complexes with shapes of RNA structures. This allows us to compute the shape polynomial of RNA complexes via the shape polynomial of RNA structures. We furthermore present a linear-time uniform sampling algorithm for shapes of RNA complexes of fixed topological genus.

  2. Complexity Metrics for Workflow Nets

    DEFF Research Database (Denmark)

    Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.

    2009-01-01

    Analysts have difficulties grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes; e.g., about 20 percent of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate the metrics.

  3. Real and complex analysis

    CERN Document Server

    Apelian, Christopher; Taft, Earl; Nashed, Zuhair

    2009-01-01

    The Spaces R, Rk, and C; The Real Numbers R; The Real Spaces Rk; The Complex Numbers C; Point-Set Topology; Bounded Sets; Classification of Points; Open and Closed Sets; Nested Intervals and the Bolzano-Weierstrass Theorem; Compactness and Connectedness; Limits and Convergence; Definitions and First Properties; Convergence Results for Sequences; Topological Results for Sequences; Properties of Infinite Series; Manipulations of Series in R; Functions: Definitions and Limits; Definitions; Functions as Mappings; Some Elementary Complex Functions; Limits of Functions; Functions: Continuity and Convergence; Continuity; Uniform Continuity; Sequences and Series of Functions; The Derivative; The Derivative for f: D1 → R; The Derivative for f: Dk → R; The Derivative for f: Dk → Rp; The Derivative for f: D → C; The Inverse and Implicit Function Theorems; Real Integration; The Integral of f: [a, b] → R; Properties of the Riemann Integral; Further Development of Integration Theory; Vector-Valued and Line Integrals; Complex Integration; Introduction to Complex Integrals; Fu...

  4. Quantum complex rotation and uniform semiclassical calculations of complex energy eigenvalues

    International Nuclear Information System (INIS)

    Connor, J.N.L.; Smith, A.D.

    1983-01-01

    Quantum and semiclassical calculations of complex energy eigenvalues have been carried out for an exponential potential of the form V(r) = V_0 r^2 exp(-r) and a Lennard-Jones (12,6) potential. A straightforward method, based on the complex coordinate rotation technique, is described for the quantum calculation of complex eigenenergies. For singular potentials, the method involves an inward and outward integration of the radial Schroedinger equation, followed by matching of the logarithmic derivatives of the wave functions at an intermediate point. For regular potentials, the method is simpler, as only an inward integration is required. Attention is drawn to the World War II research of Hartree and co-workers, who anticipated later quantum mechanical work on the complex rotation method. Complex eigenenergies are also calculated from a uniform semiclassical three-turning-point quantization formula, which allows for the proximity of the outer pair of complex turning points. Limiting cases of this formula, which are valid for very narrow or very broad widths, are also used in the calculations. We obtain good agreement between the semiclassical and quantum results. For the Lennard-Jones (12,6) potential, we compare resonance energies and widths from the complex energy definition of a resonance with those obtained from the time delay definition.
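
    The complex coordinate rotation named in the abstract has a compact standard form; the sketch below (s-wave radial equation, reduced mass μ, rotation angle θ) is generic background on the technique, not the authors' exact working equations:

```latex
r \;\longrightarrow\; r\,e^{i\theta}, \qquad
H(\theta) \;=\; -\frac{\hbar^{2}}{2\mu}\,e^{-2i\theta}\frac{d^{2}}{dr^{2}}
\;+\; V\!\bigl(r\,e^{i\theta}\bigr), \qquad
H(\theta)\,\psi \;=\; \Bigl(E_{r}-\tfrac{i}{2}\Gamma\Bigr)\psi .
```

    A resonance appears as a complex eigenvalue E_r - iΓ/2 of the rotated (non-Hermitian) Hamiltonian, stationary with respect to θ once the rotation angle exceeds the critical angle exposing it; E_r is the resonance position and Γ its width.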

  5. SAFARI - RANDOMISED TRIAL ON COMPLEX THERAPY OF ARTERIAL HYPERTENSION AND DISLIPIDEMY. THE MAIN RESULTS

    Directory of Open Access Journals (Sweden)

    S. Y. Martsevich

    2016-01-01

    Full Text Available Aim. To evaluate the possibility of a complex pharmaceutical effect simultaneously on 2 risk factors – arterial hypertension (HT) and hypercholesterolemia (HH) – in patients with high risk of cardiovascular complications (CVC). Material and methods. 101 patients with HT of stage 1-2, HH and high risk of CVC (SCORE>5) were included in the study. Patients were randomized into 2 groups: active therapy group (ATG) and control group (CG). ATG patients were actively treated for HT and HH control. Long-acting nifedipine (Nifecard XL, LEK) 30 mg once daily (OD) was prescribed as the starting antihypertensive drug. Hydrochlorothiazide 12.5 mg OD and bisoprolol 5 mg OD were added if the antihypertensive effect was insufficient. Atorvastatin (Tulip, LEK) 20-40 mg OD was prescribed for HH control. Management of CG patients was performed by doctors of out-patient clinics. The study duration was 12 weeks. Results. Systolic and diastolic blood pressure (BP) levels in ATG patients were lower than those in CG patients. The target BP level was reached in 88.4% of ATG patients and only in 48.9% of CG patients. Low-density lipoprotein cholesterol (LDL cholesterol) levels were also lower in ATG patients than in CG patients. The target LDL cholesterol level was reached in 37.2% of ATG patients and in 8.3% of CG patients. The relative risk of CVC was significantly lower in ATG patients than in CG patients. Conclusion. The SAFARI trial shows that effective simultaneous pharmaceutical control of 2 key risk factors, HT and HH, results in a reduction of CVC risk.

  7. 2012 Symposium on Chaos, Complexity and Leadership

    CERN Document Server

    Erçetin, Şefika

    2014-01-01

    These proceedings from the 2012 symposium on "Chaos, complexity and leadership"  reflect current research results from all branches of Chaos, Complex Systems and their applications in Management. Included are the diverse results in the fields of applied nonlinear methods, modeling of data and simulations, as well as theoretical achievements of Chaos and Complex Systems. Also highlighted are  Leadership and Management applications of Chaos and Complexity Theory.

  8. VLSI PARTITIONING ALGORITHM WITH ADAPTIVE CONTROL PARAMETER

    Directory of Open Access Journals (Sweden)

    P. N. Filippenko

    2013-03-01

    Full Text Available The article deals with the problem of very large-scale integration circuit partitioning. A graph is selected as the mathematical model describing the integrated circuit. A modification of the ant colony optimization algorithm is presented, which is used to solve the graph partitioning problem. Ant colony optimization is an optimization method based on the principles of self-organization and other useful features of ants' behavior. The proposed search system is based on the ant colony optimization algorithm with an improved method of initial distribution and dynamic adjustment of the control search parameters. The experimental results and performance comparison show that the proposed method of very large-scale integration circuit partitioning provides better search performance than other well-known algorithms.
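
    The abstract describes the ant colony approach only in outline; the Python toy below is a generic ACO bipartitioner under assumed design choices (per-vertex pheromone for each part, random repair to keep the partition balanced, reinforcement of the best cut found), not the authors' modified algorithm or their initial-distribution scheme:

```python
import random

def cut_size(edges, assign):
    """Number of edges crossing the two-way partition."""
    return sum(1 for u, v in edges if assign[u] != assign[v])

def aco_bipartition(n, edges, ants=20, iters=60, rho=0.1, seed=1):
    """Toy ant-colony bipartitioner: pheromone tau[v][p] biases vertex v
    toward part p; each iteration the best-so-far partition is reinforced."""
    rng = random.Random(seed)
    tau = [[1.0, 1.0] for _ in range(n)]
    best_assign, best_cost = None, None
    for _ in range(iters):
        for _ant in range(ants):
            # Each ant assigns every vertex probabilistically by pheromone.
            assign = []
            for v in range(n):
                p0 = tau[v][0] / (tau[v][0] + tau[v][1])
                assign.append(0 if rng.random() < p0 else 1)
            # Repair to a balanced partition by flipping random vertices.
            while sum(assign) > n // 2:
                assign[rng.choice([v for v in range(n) if assign[v] == 1])] = 0
            while sum(assign) < n // 2:
                assign[rng.choice([v for v in range(n) if assign[v] == 0])] = 1
            cost = cut_size(edges, assign)
            if best_cost is None or cost < best_cost:
                best_assign, best_cost = assign[:], cost
        # Evaporate, then deposit pheromone along the best assignment.
        for v in range(n):
            tau[v][0] *= (1 - rho)
            tau[v][1] *= (1 - rho)
            tau[v][best_assign[v]] += 1.0 / (1 + best_cost)
    return best_assign, best_cost

# Two 4-cliques joined by one edge: the optimal balanced cut is that edge.
edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
edges += [(a, b) for a in range(4, 8) for b in range(a + 1, 8)]
edges.append((0, 4))
assign, cost = aco_bipartition(8, edges)
print(sum(assign), cost)   # balanced partition; expected to find cut size 1
```

Real VLSI partitioners add heuristics on top of this skeleton (cell-area balance instead of vertex counts, local refinement of each ant's solution, adaptive evaporation), which is where the paper's adaptive control parameter comes in.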

  9. Cooperativity of complex salt bridges

    OpenAIRE

    Gvritishvili, Anzor G.; Gribenko, Alexey V.; Makhatadze, George I.

    2008-01-01

    The energetic contribution of complex salt bridges, in which one charged residue (anchor residue) forms salt bridges with two or more residues simultaneously, has been suggested to have importance for protein stability. Detailed analysis of the net energetics of complex salt bridge formation using double- and triple-mutant cycle analysis revealed conflicting results. In two cases, it was shown that complex salt bridge formation is cooperative, i.e., the net strength of the complex salt bridge...

  10. Complex analysis

    CERN Document Server

    Freitag, Eberhard

    2005-01-01

    The guiding principle of this presentation of "Classical Complex Analysis" is to proceed as quickly as possible to the central results while using a small number of notions and concepts from other fields. Thus the prerequisites for understanding this book are minimal; only elementary facts of calculus and algebra are required. The first four chapters cover the essential core of complex analysis: differentiation in C (including elementary facts about conformal mappings); integration in C (including complex line integrals, Cauchy's Integral Theorem, and the Integral Formulas); sequences and series of analytic functions, (isolated) singularities, Laurent series, calculus of residues; construction of analytic functions: the gamma function, Weierstrass' Factorization Theorem, Mittag-Leffler Partial Fraction Decomposition, and - as a particular highlight - the Riemann Mapping Theorem, which characterizes the simply connected domains in C. Further topics included are: the theory of elliptic functions based on...

  11. Lecithin Complex

    African Journals Online (AJOL)

    Department of Food Science and Engineering, Xinyang College of Agriculture and ... Results: The UV and IR spectra of the complex showed an additive effect of polydatin-lecithin, in which ... Monochromatic Cu Kα radiation (wavelength = ...

  12. Electrospun complexes - functionalised nanofibres

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, T.; Wolf, M.; Dreyer, B.; Unruh, D.; Krüger, C.; Menze, M. [Leibniz University Hannover, Institute of Inorganic Chemistry (Germany); Sindelar, R. [University of Applied Science Hannover, Faculty II (Germany); Klingelhöfer, G. [Gutenberg-University, Institute of Inorganic and Analytic Chemistry (Germany); Renz, F., E-mail: renz@acd.uni-hannover.de [Leibniz University Hannover, Institute of Inorganic Chemistry (Germany)

    2016-12-15

    Here we present a new approach of using iron-complexes in electro-spun fibres. We modify poly(methyl methacrylate) (PMMA) by replacing the methoxy group with Diaminopropane or Ethylenediamine. The complex is bound covalently via an imine-bridge or an amide. The resulting polymer can be used in the electrospinning process without any further modifications in method either as pure reagent or mixed with small amounts of not functionalised polymer resulting in fibres of different qualities (Fig. 1).

  13. [Intramedullary stabilisation of displaced midshaft clavicular fractures: does the fracture pattern (simple vs. complex) influence the anatomic and functional result].

    Science.gov (United States)

    Langenhan, R; Reimers, N; Probst, A

    2014-12-01

    Displaced midshaft clavicular fractures are often treated operatively. The most common treatment is plating. Elastic stable intramedullary nailing (ESIN) is an alternative, but seldom used. Studies have shown comparable or even better results for intramedullary nailing than for plating in simple 2- or 3-fragment midshaft fractures. The indication of ESIN for multifragmentary clavicular fractures is discussed critically in the literature because of reduced primary stability and the danger of secondary shortening. Until now only few studies report functional results after fracture healing depending on the fracture type. To the best of our knowledge there is no study showing significantly worse functional scores for ESIN in complex displaced midshaft fractures. The objective of this study was to examine the anatomic and functional results of simple (2 or 3 fragments, OTA type 15B1 and 15B2) and complex (multifragmentary, OTA type 15B3) displaced midshaft clavicular fractures after internal fixation. Between 2009 and 2012, 40 patients (female/male 10/30; mean age 33 [16-60] years) with closed displaced midshaft clavicular fractures were treated by open reduction and ESIN (Titanium Elastic Nail [TEN], Synthes, Umkirch, Germany). Thirty-seven patients were retrospectively analysed after a mean of 27 (12-43) months. Twenty patients (group A) had simple fractures (OTA type 15B1 and 15B2), 17 patients (group B) had complex fractures (OTA type 15B3). All shoulder joints were treated functionally postoperatively for six weeks without weights, limited to 90° abduction/flexion. Both groups were comparable in gender, age, body mass index, months until metal removal, number of physiotherapy sessions and time until follow-up examination. Joint function (neutral zero method) and strength (standing patient with arm in 90° abduction, holding 1-12 kg for 5 sec) in both shoulders were documented. The distance between the centre of the jugulum and the lateral acromial border was measured for

  14. Complexity Control of Fast Motion Estimation in H.264/MPEG-4 AVC with Rate-Distortion-Complexity optimization

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren; Aghito, Shankar Manuel

    2007-01-01

    A complexity control algorithm for H.264 advanced video coding is proposed. The algorithm can control the complexity of integer inter motion estimation for a given target complexity. The Rate-Distortion-Complexity performance is improved by a complexity prediction model, simple analysis of the past statistics and a control scheme. The algorithm also works well under scene change conditions. Test results for coding interlaced video (720x576 PAL) are reported.

  15. Principles of VLSI RTL design a practical guide

    CERN Document Server

    Churiwala, Sanjay; Gianfagna, Mike

    2011-01-01

    This book examines the impact of register transfer level (RTL) design choices that may result in issues of testability, data synchronization across clock domains, synthesizability, power consumption and routability, that appear later in the product lifecycle.

  16. Visual Short-Term Memory Complexity

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik

    Alvarez and Cavanagh (2004) have raised the question of whether the capacity of VSTM is dependent on visual complexity rather than the number of objects. We hypothesise that VSTM capacity is dependent on both the objective and subjective complexity of visual stimuli. Contrary to Alvarez and Cavanagh, who argue for the role of objective complexity, it seems that subjective complexity - which is dependent on the familiarity of the stimulus - plays a more important role than the objective visual complexity of the objects stored. In two studies, we explored how familiarity influences the capacity of VSTM. 1) In children learning ... for letters and pictures remained similar. Our results indicate that VSTM capacity for familiar items is larger irrespective of visual complexity.

  17. Latch-up control in CMOS integrated circuits

    International Nuclear Information System (INIS)

    Ochoa, A.; Dawes, W.; Estreich, D.; Packard, H.

    1979-01-01

    The potential for latch-up, a pnpn self-sustaining low-impedance state, is inherent in standard bulk CMOS integrated circuit structures. Under normal bias, the parasitic SCR is in its blocking state but, if subjected to a large voltage spike or if exposed to an ionizing environment, triggering may occur. This may result in device burn-out or loss of state. The problem has been extensively studied for space and weapons applications. Prevention of latch-up has been achieved in conservative designs (approx. 9 μm p-well depths) by the use of minority-lifetime control methods such as gold doping and neutron irradiation, and by modifying the base transport factor with buried layers. The push toward VLSI densities will enhance parasitic action sufficiently that the problem will become of more universal concern. The paper surveys latch-up control methods presently employed for weapons and space applications on present (approx. 9 μm p-well) CMOS and indicates the extent of their applicability to VLSI designs.

  18. Design and Implementation of a New Real-Time Frequency Sensor Used as Hardware Countermeasure

    Directory of Open Access Journals (Sweden)

    Manuel Pedro-Carrasco

    2013-09-01

    Full Text Available A new digital countermeasure against attacks related to the clock frequency is presented. This countermeasure, known as a frequency sensor, consists of a local oscillator, a transition detector, a measurement element and an output block. The countermeasure has been designed using a full-custom technique implemented in an Application-Specific Integrated Circuit (ASIC), and the implementation has been verified and characterized in an integrated design using a 0.35 μm standard Complementary Metal Oxide Semiconductor (CMOS) technology, i.e. a Very Large Scale Integration (VLSI) implementation. The proposed solution is configurable in resolution time and allowed period range, achieving a minimum resolution time of only 1.91 ns and an initialization time of 5.84 ns. The proposed VLSI implementation shows better results than other solutions, such as digital ones based on semi-custom techniques and analog ones based on band-pass filters, all design parameters considered. Finally, a counter has been used to verify the good performance of the countermeasure in preventing a successful attack.
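
    As a behavioural illustration only (the paper's sensor is a full-custom circuit, and the reference period and tick window below are invented numbers), the core idea - count ticks of a fast local oscillator within one external clock period and raise an alarm when the count leaves a configured window - can be modelled as:

```python
def period_ticks(clock_period_ns, ref_period_ns):
    """Ticks of the fast local oscillator counted in one external clock period."""
    return int(clock_period_ns // ref_period_ns)

def frequency_alarm(clock_period_ns, ref_period_ns=2.0, lo_ticks=40, hi_ticks=60):
    """Flag the external clock when its measured period leaves the allowed
    window: the behavioural idea behind a clock-glitch/overclocking sensor."""
    ticks = period_ticks(clock_period_ns, ref_period_ns)
    return not (lo_ticks <= ticks <= hi_ticks)

# Nominal 10 MHz clock: 100 ns period -> 50 reference ticks -> no alarm.
print(frequency_alarm(100.0))   # False
# Overclocking attack at 20 MHz: 50 ns -> 25 ticks -> alarm.
print(frequency_alarm(50.0))    # True
# Clock stretching to 250 ns: 125 ticks -> alarm.
print(frequency_alarm(250.0))   # True
```

The window width trades tolerance to normal clock jitter against detection of small frequency deviations, which is what the sensor's configurable resolution time controls in hardware.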

  19. A Sequential Circuit-Based IP Watermarking Algorithm for Multiple Scan Chains in Design-for-Test

    Directory of Open Access Journals (Sweden)

    C. Wu

    2011-06-01

    Full Text Available In Very Large Scale Integrated Circuit (VLSI) design, existing Design-for-Test (DFT)-based watermarking techniques usually insert the watermark by reordering scan cells, which causes large resource overhead, low security and a low coverage rate of watermark detection. A novel scheme is proposed to watermark multiple scan chains in DFT to solve these problems. The proposed scheme adopts the DFT scan test model of VLSI design and uses a Linear Feedback Shift Register (LFSR) for pseudo-random test vector generation. All of the test vectors are shifted into the scan input for the construction of multiple scan chains with minimum correlation. Specific registers in the multiple scan chains are changed by the watermark circuit to watermark the design. The watermark can be effectively detected without interference with the normal function of the circuit, even after the chip is packaged. The experimental results on several ISCAS benchmarks show that the proposed scheme has lower resource overhead, a lower probability of coincidence and a higher coverage rate of watermark detection compared with existing methods.
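
    The LFSR mentioned above is the standard pseudo-random pattern generator in scan-based DFT. As an illustration of the mechanism (the width and feedback polynomial below are our choice, not taken from the paper), a 4-bit maximal-length Fibonacci LFSR for the polynomial x^4 + x + 1:

```python
def lfsr_stream(seed, taps, width, n):
    """Fibonacci LFSR, shifting right: the feedback bit (XOR of the tapped
    bit positions, 0 = LSB) re-enters at the MSB; the LSB is the output
    bit shifted into the scan chain each clock."""
    state, out = seed, []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))
    return out, state

# 4-bit maximal-length LFSR for x^4 + x + 1 (taps at bits 0 and 1):
# any nonzero seed walks through all 15 nonzero states before repeating.
bits, final = lfsr_stream(0b1000, taps=(0, 1), width=4, n=15)
print(len(bits), final == 0b1000)   # 15 True
```

With a primitive feedback polynomial the cycle length is 2^width - 1, which is why a small LFSR suffices to fill long scan chains with well-distributed test vectors.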

  20. Targeting Complex Sentences in Older School Children with Specific Language Impairment: Results from an Early-Phase Treatment Study

    Science.gov (United States)

    Balthazar, Catherine H.; Scott, Cheryl M.

    2018-01-01

    Purpose: This study investigated the effects of a complex sentence treatment at 2 dosage levels on language performance of 30 school-age children ages 10-14 years with specific language impairment. Method: Three types of complex sentences (adverbial, object complement, relative) were taught in sequence in once or twice weekly dosage conditions.…

  1. Complex manifolds

    CERN Document Server

    Morrow, James

    2006-01-01

    This book, a revision and organization of lectures given by Kodaira at Stanford University in 1965-66, is an excellent, well-written introduction to the study of abstract complex (analytic) manifolds-a subject that began in the late 1940's and early 1950's. It is largely self-contained, except for some standard results about elliptic partial differential equations, for which complete references are given. -D. C. Spencer, MathSciNet The book under review is the faithful reprint of the original edition of one of the most influential textbooks in modern complex analysis and geometry. The classic

  2. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented.

  3. “Bolshie Klyuchishi” (Ulyanovsk Oblast) as a New Archaeological Complex: Preliminary Results

    Directory of Open Access Journals (Sweden)

    Vorobeva Elena E.

    2016-03-01

    The authors introduce for discussion materials from archaeological studies conducted by the Volga Archaeological Expedition of the Mari State University in Ulyanovsk Oblast of the Russian Federation in 2010. Two of the studied archaeological sites seem to be the most interesting: they are situated near Bolshie Klyuchishi village (Ulyanovsk District, Ulyanovsk Oblast). Archaeological materials collected during the excavations of these settlements have a very broad time span, which suggests that Bolshie Klyuchishi is a multilayered archaeological complex. Both settlements yielded Srubnaya culture handmade ceramics of the 16th–13th centuries BC. Moreover, Bolshie Klyuchishi-7 contained items of iron and slag, and Bolshie Klyuchishi-8 yielded sherds of 13th–14th century wheel-made Bulgarian ceramics.

  4. Innovation in a complex environment

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-02-01

    Background: As our world becomes more global and competitive yet less predictable, the focus seems to be increasingly on innovation activities as a way to remain competitive. Although there is little doubt that a nation's competitiveness is embedded in its innovativeness, the complex environment should not be ignored. Complexity is not accounted for in balance sheets or annual reports; it becomes entrenched in every activity in the organisation. Innovation takes many forms and comes in different shapes. Objectives: The study objectives were, firstly, to establish the determinants of complexity and how these can be addressed from a design point of view in order to ensure innovation success and, secondly, to determine how this changes innovation forms and applications. Method: Two approaches were offered to deal with a complex environment: one allowing for complexity in organisational innovation and the other introducing reductionism to minimise complexity. These approaches were examined in a qualitative study involving case studies, open-ended interviews and content analysis across seven developing-economy (South African) organisations and seven developed-economy (US) organisations. Results: This study presented a proposed framework for (organisational) innovation in a complex environment versus a framework that minimises complexity. The comparative organisational analysis demonstrated the importance of initiating organisational innovation to address internal and external complexity, with the focus being on leadership actions, the selected operating models and the resultant organisational innovation designs, rather than on technological innovations. Conclusion: This study cautioned against the preference for technological innovation within organisations and suggested that alternative innovation forms (such as organisational and management innovation) be used to remain competitive in a complex environment.

  5. Complexity of formation in holography

    International Nuclear Information System (INIS)

    Chapman, Shira; Marrochio, Hugo; Myers, Robert C.

    2017-01-01

    It was recently conjectured that the quantum complexity of a holographic boundary state can be computed by evaluating the gravitational action on a bulk region known as the Wheeler-DeWitt patch. We apply this complexity=action duality to evaluate the ‘complexity of formation’ (DOI: 10.1103/PhysRevLett.116.191301; 10.1103/PhysRevD.93.086006), i.e. the additional complexity arising in preparing the entangled thermofield double state with two copies of the boundary CFT compared to preparing the individual vacuum states of the two copies. We find that for boundary dimensions d>2, the difference in the complexities grows linearly with the thermal entropy at high temperatures. For the special case d=2, the complexity of formation is a fixed constant, independent of the temperature. We compare these results to those found using the complexity=volume duality.

  6. Complexity of formation in holography

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Shira [Perimeter Institute for Theoretical Physics,Waterloo, ON N2L 2Y5 (Canada); Marrochio, Hugo [Perimeter Institute for Theoretical Physics,Waterloo, ON N2L 2Y5 (Canada); Department of Physics & Astronomy and Guelph-Waterloo Physics Institute,University of Waterloo, Waterloo, ON N2L 3G1 (Canada); Myers, Robert C. [Perimeter Institute for Theoretical Physics,Waterloo, ON N2L 2Y5 (Canada)

    2017-01-16

    It was recently conjectured that the quantum complexity of a holographic boundary state can be computed by evaluating the gravitational action on a bulk region known as the Wheeler-DeWitt patch. We apply this complexity=action duality to evaluate the ‘complexity of formation’ (DOI: 10.1103/PhysRevLett.116.191301; 10.1103/PhysRevD.93.086006), i.e. the additional complexity arising in preparing the entangled thermofield double state with two copies of the boundary CFT compared to preparing the individual vacuum states of the two copies. We find that for boundary dimensions d>2, the difference in the complexities grows linearly with the thermal entropy at high temperatures. For the special case d=2, the complexity of formation is a fixed constant, independent of the temperature. We compare these results to those found using the complexity=volume duality.

  7. Modularly Integrated MEMS Technology

    National Research Council Canada - National Science Library

    Eyoum, Marie-Angie N

    2006-01-01

    Process design, development and integration to fabricate reliable MEMS devices on top of VLSI-CMOS electronics without damaging the underlying circuitry have been investigated throughout this dissertation...

  8. A computer graphics pilot project - Spacecraft mission support with an interactive graphics workstation

    Science.gov (United States)

    Hagedorn, John; Ehrner, Marie-Jacqueline; Reese, Jodi; Chang, Kan; Tseng, Irene

    1986-01-01

    The NASA Computer Graphics Pilot Project was undertaken to enhance the quality control, productivity and efficiency of mission support operations at the Goddard Operations Support Computing Facility. The Project evolved into a set of demonstration programs for graphics-intensive simulated control room operations, particularly in connection with the complex space missions that began in the 1980s. Complex missions mean more data, and graphic displays are a means of reducing the probability of operator error. Workstations were selected with 1024 x 768 pixel color displays controlled by a custom VLSI chip coupled to an MC68010 chip running UNIX within a shell that permits operations through the medium of mouse-accessed pulldown window menus. The distributed workstations run off a host NAS 8040 computer. Applications of the system for tracking spacecraft orbits and monitoring Shuttle payload handling illustrate the system capabilities, noting the built-in capabilities of shifting the point of view and rotating and zooming in on three-dimensional views of spacecraft.

  9. Spectral simplicity of apparent complexity. II. Exact complexities and complexity spectra

    Science.gov (United States)

    Riechers, Paul M.; Crutchfield, James P.

    2018-03-01

    The meromorphic functional calculus developed in Part I overcomes the nondiagonalizability of linear operators that arises often in the temporal evolution of complex systems and is generic to the metadynamics of predicting their behavior. Using the resulting spectral decomposition, we derive closed-form expressions for correlation functions, finite-length Shannon entropy-rate approximates, asymptotic entropy rate, excess entropy, transient information, transient and asymptotic state uncertainties, and synchronization information of stochastic processes generated by finite-state hidden Markov models. This introduces analytical tractability to investigating information processing in discrete-event stochastic processes, symbolic dynamics, and chaotic dynamical systems. Comparisons reveal mathematical similarities between complexity measures originally thought to capture distinct informational and computational properties. We also introduce a new kind of spectral analysis via coronal spectrograms and the frequency-dependent spectra of past-future mutual information. We analyze a number of examples to illustrate the methods, emphasizing processes with multivariate dependencies beyond pairwise correlation. This includes spectral decomposition calculations for one representative example in full detail.

  10. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  11. Organotin complexes with phosphines

    International Nuclear Information System (INIS)

    Passos, B. de F.T.; Jesus Filho, M.F. de; Filgueiras, C.A.L.; Abras, A.

    1988-01-01

    A series of organotin complexes involving phosphines bonded to the organotin moiety was prepared. The series includes derivatives of SnClxPh4-x (where x varied from zero to four) with the phosphines Ph3P, (Ph2P)CH2, (Ph2P)2(CH2)2, cis-(Ph2P)CH2 and CH3C(CH2PPh2)3. A host of new complexes was obtained, showing different stoichiometries, bonding modes and coordination numbers around the tin atom. These complexes were characterized by several different chemical and physical methods. The 119Sn Moessbauer parameters varied in different ways. Isomer shift values did not show great variation within each group of complexes sharing the same organotin parent (SnClxPh4-x), reflecting a small change in the s charge distribution on the Sn atom upon complexation. Quadrupole splitting results varied widely; however, when the parent organotin compound was wholly symmetric (SnCl4 and SnPh4), the complexes tended to show quadrupole splitting values approaching zero. (author)

  12. Clearing the complexity: immune complexes and their treatment in lupus nephritis

    Directory of Open Access Journals (Sweden)

    Catherine Toong

    2011-01-01

    Systemic lupus erythematosus (SLE) is a classic antibody-mediated systemic autoimmune disease characterised by the development of autoantibodies to ubiquitous self-antigens (such as antinuclear antibodies and anti-double-stranded DNA antibodies) and widespread deposition of immune complexes in affected tissues. Deposition of immune complexes in the kidney results in glomerular damage and occurs in all forms of lupus nephritis. The development of nephritis carries a poor prognosis and a high risk of progression to end-stage renal failure despite recent therapeutic advances. Here we review the role of DNA-anti-DNA immune complexes in the pathogenesis of lupus nephritis and possible new treatment strategies aimed at their control. Keywords: immune complex, systemic lupus erythematosus, nephritis, therapy

  13. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

    As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune algorithm complex modeling approach is proposed, and simulation studies on the calibration of three multiple-output transducers are carried out using the developed complex modeling. The simulated and experimental results show that the immune algorithm complex modeling approach can significantly improve calibration precision in comparison with traditional calibration methods.

  14. The complexity of Orion: an ALMA view. I. Data and first results

    Science.gov (United States)

    Pagani, L.; Favre, C.; Goldsmith, P. F.; Bergin, E. A.; Snell, R.; Melnick, G.

    2017-07-01

    Context. We wish to improve our understanding of the Orion central star formation region (Orion-KL) and disentangle its complexity. Aims: We collected data with ALMA during cycle 2 in 16 GHz of total bandwidth spread between 215.1 and 252.0 GHz, with a typical sensitivity of 5 mJy/beam (2.3 mJy/beam from 233.4 to 234.4 GHz) and a typical beam size of 1.7″ × 1.0″ (average position angle of 89°). We produced a continuum map and studied the emission lines in nine remarkable infrared spots in the region, including the hot core and the compact ridge, plus the recently discovered ethylene glycol peak. Methods: We present the data, and report the detection of several species not previously seen in Orion, including n- and i-propyl cyanide (C3H7CN), and the tentative detection of a number of other species including glycolaldehyde (CH2(OH)CHO). The first detections of gGg' ethylene glycol (gGg' (CH2OH)2) and of acetic acid (CH3COOH) in Orion are presented in a companion paper. We also report the possible detection of several vibrationally excited states of cyanoacetylene (HC3N), and of its 13C isotopologues. We were not able to detect the 16O18O line predicted by our detection of O2 with Herschel, due to blending with a nearby line of vibrationally excited ethyl cyanide. We do not confirm the tentative detection of hexatriynyl (C6H) and cyanohexatriyne (HC7N) reported previously, or of hydrogen peroxide (H2O2) emission. Results: We report a complex velocity structure only partially revealed before. Components as extreme as -7 and +19 km s-1 are detected inside the hot region. Thanks to different opacities of various velocity components, in some cases we can position these components along the line of sight. We propose that the systematically redshifted and blueshifted wings of several species observed in the northern part of the region are linked to the explosion that occurred 500 yr ago. The compact ridge, noticeably farther south, displays extremely narrow lines ( 1 km s

  15. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    ... VLSI clock interconnects; delay variability; PDF; process variation; Gaussian random ... Supercomputer Education and Research Centre, Indian Institute of Science, ... Manuscript received: 27 February 2009; Manuscript revised: 9 February ...

  16. Complexity rating of abnormal events and operator performance

    International Nuclear Information System (INIS)

    Oeivind Braarud, Per

    1998-01-01

    The complexity of the work situation during abnormal situations is a major topic in discussions of the safety aspects of nuclear power plants. An understanding of complexity and its impact on operator performance in abnormal situations is important. One way to enhance understanding is to look at the dimensions that constitute complexity for NPP operators, and how those dimensions can be measured. A further step is to study how dimensions of event complexity are related to operator performance. One aspect of complexity is the operator's subjective experience of the difficulty of the event. Another, related aspect is subject matter experts' ratings of the complexity of the event. A definition and a measure of this part of complexity are being investigated at the OECD Halden Reactor Project in Norway. This paper focuses on the results from a study of simulated scenarios carried out in the Halden Man-Machine Laboratory, which is a full-scope PWR simulator. Six crews of two licensed operators each performed in 16 scenarios (simulated events). Before the experiment, subject matter experts rated the complexity of the scenarios using a Complexity Profiling Questionnaire, which contains eight previously identified dimensions associated with complexity. After completing the scenarios, the operators received a questionnaire containing 39 questions about perceived complexity; this questionnaire was used to develop a measure of subjective complexity. The results from the study indicated that process experts' ratings of scenario complexity, using the Complexity Profiling Questionnaire, predicted crew performance quite well. The results further indicated that a measure of subjective complexity related to crew performance could be developed. Subjective complexity was found to be related to subjective workload. (author)

  17. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low complexity, medium complexity and high complexity. It then carries out image feature extraction and finally establishes a function between the complexity value and the color feature model. The experimental results show that this evaluation method can objectively reconstruct the complexity of the image from its image features, and the results obtained by the method of this paper agree well with human visual perception of complexity, so the color image complexity measure has a certain reference value.
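    The abstract does not spell out the paper's feature model. Purely as an illustrative proxy, color-based complexity is often scored as the Shannon entropy of a coarsely quantized color histogram; the helper below is a hypothetical sketch under that assumption, not the authors' method.

    ```python
    import math
    from collections import Counter

    def color_entropy(pixels, levels=4):
        """Shannon entropy (bits) of a coarsely quantized RGB histogram.
        Flat single-color images score 0; images with many distinct
        color regions score higher, matching the low/medium/high intuition."""
        step = 256 // levels
        hist = Counter((r // step, g // step, b // step) for r, g, b in pixels)
        n = len(pixels)
        return -sum((c / n) * math.log2(c / n) for c in hist.values())
    ```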

  18. Complexity measures of music

    Science.gov (United States)

    Pease, April; Mahmoodi, Korosh; West, Bruce J.

    2018-03-01

    We present a technique to search for the presence of crucial events in music, based on the analysis of the music volume. Earlier work on this issue was based on the assumption that crucial events correspond to changes of music notes, with the interesting result that the complexity index of the crucial events is mu ~ 2, which is the same inverse power-law index as the dynamics of the brain. The search technique analyzes music volume and confirms the results of the earlier work, thereby contributing to the explanation of why the brain is sensitive to music, through the phenomenon of complexity matching. Complexity matching has recently been interpreted as the transfer of multifractality from one complex network to another. For this reason we also examine the multifractality of music, with the observation that the multifractal spectrum of a computer performance is significantly narrower than the multifractal spectrum of a human performance of the same musical score. We conjecture that although crucial events are demonstrably important for information transmission, they alone are not sufficient to define musicality, which is more adequately measured by the multifractal spectrum.

  19. Algorithms and Complexity Results for Genome Mapping Problems.

    Science.gov (United States)

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  20. Influence of FGR complexity modelling on the practical results in gas pressure calculation of selected fuel elements from Dukovany NPP

    International Nuclear Information System (INIS)

    Lahodova, M.

    2001-01-01

    A modernized fuel system and advanced fuel for operation up to high burnup are currently used at Dukovany NPP. Core reloads are evaluated using computer codes for the thermomechanical behavior of the most heavily loaded fuel rods. The paper presents the results of parametric calculations performed with the NRI Rez integral code PIN, version 2000 (PIN2k), to assess the influence of fission gas release (FGR) modelling complexity on the achieved results. Representative Dukovany NPP fuel rod irradiation history data are used, and two cases of fuel parameter variables (soft and hard) are chosen for the comparison. The FGR models involved were the GASREL diffusion model developed at NRI Rez plc and the standard Weisman model recommended in the previous version of the PIN integral code. The FGR calculation by PIN2k with the GASREL model gives more realistic results than the standard Weisman model. Results for linear power, fuel centre temperature, FGR and gas pressure versus burnup are given for two fuel rods.

  1. A new information dimension of complex networks

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Daijun [School of Computer and Information Science, Southwest University, Chongqing 400715 (China); School of Science, Hubei University for Nationalities, Enshi 445000 (China); Wei, Bo [School of Computer and Information Science, Southwest University, Chongqing 400715 (China); Hu, Yong [Institute of Business Intelligence and Knowledge Discovery, Guangdong University of Foreign Studies, Guangzhou 510006 (China); Zhang, Haixin [School of Computer and Information Science, Southwest University, Chongqing 400715 (China); Deng, Yong, E-mail: ydeng@swu.edu.cn [School of Computer and Information Science, Southwest University, Chongqing 400715 (China); School of Engineering, Vanderbilt University, TN 37235 (United States)

    2014-03-01

    Highlights: •The proposed measure is more practical than the classical information dimension. •The difference of information between boxes in the box-covering algorithm is considered. •Results indicate the measure can capture the fractal property of complex networks. -- Abstract: The fractal and self-similarity properties are revealed in many complex networks. The classical information dimension is an important method to study the fractal and self-similarity properties of planar networks. However, it is not practical for real complex networks. In this Letter, a new information dimension of complex networks is proposed. The number of nodes in each box is considered by using the box-covering algorithm of complex networks. The proposed method is applied to calculate the fractal dimensions of some real networks. Our results show that the proposed method is efficient when dealing with the fractal dimension problem of complex networks.
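    The idea can be sketched numerically: for each box size, weight every box by its node fraction, compute the Shannon information of the covering, and estimate the dimension as the slope of information against ln(1/l). The functions below are an illustrative sketch under those assumptions, not the authors' algorithm.

    ```python
    import math

    def box_information(node_counts):
        """Shannon information of one covering: p_i = n_i / N, I = -sum p_i ln p_i."""
        total = sum(node_counts)
        return -sum((n / total) * math.log(n / total) for n in node_counts if n)

    def information_dimension(coverings):
        """coverings: {box_size l: [nodes per box]}. The least-squares slope of
        I(l) versus ln(1/l) over the box sizes estimates the dimension."""
        xs = [math.log(1.0 / l) for l in coverings]
        ys = [box_information(c) for c in coverings.values()]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))
    ```

    For a uniform covering the information reduces to the classical box-counting value ln(number of boxes), so on an exactly self-similar example the two dimensions coincide.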

  2. A new information dimension of complex networks

    International Nuclear Information System (INIS)

    Wei, Daijun; Wei, Bo; Hu, Yong; Zhang, Haixin; Deng, Yong

    2014-01-01

    Highlights: •The proposed measure is more practical than the classical information dimension. •The difference of information between boxes in the box-covering algorithm is considered. •Results indicate the measure can capture the fractal property of complex networks. -- Abstract: The fractal and self-similarity properties are revealed in many complex networks. The classical information dimension is an important method to study the fractal and self-similarity properties of planar networks. However, it is not practical for real complex networks. In this Letter, a new information dimension of complex networks is proposed. The number of nodes in each box is considered by using the box-covering algorithm of complex networks. The proposed method is applied to calculate the fractal dimensions of some real networks. Our results show that the proposed method is efficient when dealing with the fractal dimension problem of complex networks.

  3. Theories of computational complexity

    CERN Document Server

    Calude, C

    1988-01-01

    This volume presents four machine-independent theories of computational complexity, which have been chosen for their intrinsic importance and practical relevance. The book includes a wealth of results - classical, recent, and others which have not been published before.In developing the mathematics underlying the size, dynamic and structural complexity measures, various connections with mathematical logic, constructive topology, probability and programming theories are established. The facts are presented in detail. Extensive examples are provided, to help clarify notions and constructions. The lists of exercises and problems include routine exercises, interesting results, as well as some open problems.

  4. A Memristor-Based Hyperchaotic Complex Lü System and Its Adaptive Complex Generalized Synchronization

    Directory of Open Access Journals (Sweden)

    Shibing Wang

    2016-02-01

    This paper introduces a new memristor-based hyperchaotic complex Lü system (MHCLS) and investigates its adaptive complex generalized synchronization (ACGS). Firstly, the complex system is constructed based on a memristor-based hyperchaotic real Lü system, and its properties are analyzed theoretically. Secondly, its dynamical behaviors, including hyperchaos, chaos, transient phenomena, as well as periodic behaviors, are explored numerically by means of bifurcation diagrams, Lyapunov exponents, phase portraits, and time history diagrams. Thirdly, an adaptive controller and a parameter estimator are proposed to realize complex generalized synchronization and parameter identification of two identical MHCLSs with unknown parameters based on Lyapunov stability theory. Finally, the numerical simulation results of ACGS and its applications to secure communication are presented to verify the feasibility and effectiveness of the proposed method.

  5. Electronic shift register memory based on molecular electron-transfer reactions

    Science.gov (United States)

    Hopfield, J. J.; Onuchic, Jose Nelson; Beratan, David N.

    1989-01-01

    The design of a shift register memory at the molecular level is described in detail. The memory elements are based on a chain of electron-transfer molecules incorporated on a very large scale integrated (VLSI) substrate, and the information is shifted by photoinduced electron-transfer reactions. The design requirements for such a system are discussed, and several realistic strategies for synthesizing these systems are presented. The immediate advantage of such a hybrid molecular/VLSI device would arise from the possible information storage density. The prospect of considerable savings of energy per bit processed also exists. This molecular shift register memory element design solves the conceptual problems associated with integrating molecular size components with larger (micron) size features on a chip.
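    Abstracting away the molecular details (photo-excitation, electron-transfer rates), the device behaves as a clocked shift register: each light pulse moves every stored bit one site along the chain while a new bit enters at the front. A minimal behavioral sketch, with illustrative names:

    ```python
    def shift_register(stages, inputs):
        """Clocked shift register over a chain of charge-storage sites.
        Each clock (light pulse) reads the last stage out, then shifts every
        bit one site down the chain and loads the next input bit at stage 0."""
        outputs = []
        for bit in inputs:
            outputs.append(stages[-1])      # charge arriving at the sensing end
            stages = [bit] + stages[:-1]    # one transfer step along the chain
        return outputs, stages
    ```

    With a chain of length n, a bit written at stage 0 appears at the output after n clock pulses, which is the latency such a memory would trade for its storage density.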

  6. Pinning Synchronization of Switched Complex Dynamical Networks

    Directory of Open Access Journals (Sweden)

    Liming Du

    2015-01-01

    Network topology and node dynamics play a key role in forming synchronization of complex networks. Unfortunately, there is no effective synchronization criterion for pinning synchronization of complex dynamical networks with switching topology. In this paper, pinning synchronization of complex dynamical networks with switching topology is studied. Two basic problems are considered: one is pinning synchronization of switched complex networks under arbitrary switching; the other is pinning synchronization of switched complex networks by design of switching when synchronization cannot be achieved by using any individual connection topology alone. For the two problems, the common Lyapunov function method and the single Lyapunov function method are used, respectively; some global synchronization criteria are proposed and the designed switching law is given. Finally, simulation results verify the validity of the results.

  7. Complex centers of polynomial differential equations

    Directory of Open Access Journals (Sweden)

    Mohamad Ali M. Alwash

    2007-07-01

    We present some results on the existence and nonexistence of centers for polynomial first-order ordinary differential equations with complex coefficients. In particular, we show that binomial differential equations without linear terms do not have complex centers. Classes of polynomial differential equations, with more than two terms, are presented that do not have complex centers. We also study the relation between complex centers and the Pugh problem. An algorithm is described to solve the Pugh problem for equations without complex centers. The method of proof involves phase plane analysis of the polar equations and a local study of periodic solutions.

  8. Global floor planning approach for VLSI design

    International Nuclear Information System (INIS)

    LaPotin, D.P.

    1986-01-01

    Within a hierarchical design environment, initial decisions regarding the partitioning and choice of module attributes greatly impact the quality of the resulting IC in terms of area and electrical performance. This dissertation presents a global floor-planning approach which allows designers to quickly explore layout issues during the initial stages of the IC design process. In contrast to previous efforts, which address the floor-planning problem from a strict module placement point of view, this approach considers floor-planning from an area planning point of view. The approach is based upon a combined min-cut and slicing paradigm, which ensures routability. To provide flexibility, modules may be specified as having a number of possible dimensions and orientations, and I/O pads as well as layout constraints are considered. A slicing-tree representation is employed, upon which a sequence of traversal operations is applied in order to obtain an area-efficient layout. An in-place partitioning technique, which provides an improvement over previous min-cut and slicing-based efforts, is discussed. Global routing and module I/O pin assignment are provided for floor-plan evaluation purposes. A computer program, called Mason, has been developed which efficiently implements the approach and provides an interactive environment for designers to perform floor-planning. Performance of this program is illustrated via several industrial examples.
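The slicing paradigm the approach builds on can be sketched as a small recursive computation (module dimensions and the tree below are made up for illustration; the actual program, Mason, also handles orientations, routing, and in-place partitioning): leaves are modules, internal nodes are horizontal or vertical cuts, and a traversal computes the bounding box an area planner would minimize.

```python
# Minimal sketch of a slicing-tree floorplan evaluation.
# Leaves are modules (width, height); internal nodes are
# ('H', left, right) for a vertical stack or ('V', left, right)
# for side-by-side placement. combine() returns the bounding box
# of a subtree.

def combine(node):
    if isinstance(node, tuple) and node[0] in ('H', 'V'):
        op, left, right = node
        wl, hl = combine(left)
        wr, hr = combine(right)
        if op == 'V':                    # horizontal adjacency
            return wl + wr, max(hl, hr)
        return max(wl, wr), hl + hr      # 'H': vertical stack
    return node                          # leaf: (width, height)

# (module A beside module B), stacked on module C
tree = ('H', ('V', (2, 3), (4, 2)), (6, 1))
w, h = combine(tree)   # bounding box (6, 4)
```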

  9. Complex chemistry

    International Nuclear Information System (INIS)

    Kim, Bong Gon; Kim, Jae Sang; Kim, Jin Eun; Lee, Boo Yeon

    2006-06-01

    This book introduces complex chemistry in ten chapters, covering: the historical development of complex chemistry, including Werner's coordination theory and newer developments; the nomenclature of complexes, with basic concepts and definitions; chemical formulas of coordination compounds; stereochemical notation, stereostructure and isomerism; electronic structure and bonding theory of complexes; structural characterization of complexes by methods such as NMR and XAFS; equilibria and reactions in solution; organometallic chemistry; bioinorganic chemistry; materials chemistry of complexes; and complex design and computational chemistry.

  10. Modeling Complex Systems

    CERN Document Server

    Boccara, Nino

    2010-01-01

    Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes: recent research results and bibliographic references; extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field; new and improved worked-out examples to aid a student's comprehension of the content; and exercises to challenge the reader and complement the material. Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).

  11. The complexities of complex span: explaining individual differences in working memory in children and adults.

    Science.gov (United States)

    Bayliss, Donna M; Jarrold, Christopher; Gunn, Deborah M; Baddeley, Alan D

    2003-03-01

    Two studies are presented that investigated the constraints underlying working memory performance in children and adults. In each case, independent measures of processing efficiency and storage capacity are assessed to determine their relative importance in predicting performance on complex span tasks, which measure working memory capacity. Results show that complex span performance was independently constrained by individual differences in domain-general processing efficiency and domain-specific storage capacity. Residual variance, which may reflect the ability to coordinate storage and processing, also predicted academic achievement. These results challenge the view that complex span taps a limited-capacity resource pool shared between processing and storage operations. Rather, they are consistent with a multiple-component model in which separate resource pools support the processing and storage functions of working memory.

  12. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    OpenAIRE

    Giacomo Indiveri

    2008-01-01

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs suppressing non salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computation...

  13. Complexity in neuronal noise depends on network interconnectivity.

    Science.gov (United States)

    Serletis, Demitre; Zalay, Osbert C; Valiante, Taufik A; Bardakjian, Berj L; Carlen, Peter L

    2011-06-01

    "Noise," or noise-like activity (NLA), defines background electrical membrane potential fluctuations at the cellular level of the nervous system, comprising an important aspect of brain dynamics. Using whole-cell voltage recordings from fast-spiking stratum oriens interneurons and stratum pyramidale neurons located in the CA3 region of the intact mouse hippocampus, we applied complexity measures from dynamical systems theory (i.e., 1/f(γ) noise and correlation dimension) and found evidence for complexity in neuronal NLA, ranging from high- to low-complexity dynamics. Importantly, these high- and low-complexity signal features were largely dependent on gap junction and chemical synaptic transmission. Progressive neuronal isolation from the surrounding local network via gap junction blockade (abolishing gap junction-dependent spikelets) and then chemical synaptic blockade (abolishing excitatory and inhibitory post-synaptic potentials), or the reverse order of these treatments, resulted in emergence of high-complexity NLA dynamics. Restoring local network interconnectivity via blockade washout resulted in resolution to low-complexity behavior. These results suggest that the observed increase in background NLA complexity is the result of reduced network interconnectivity, thereby highlighting the potential importance of the NLA signal to the study of network state transitions arising in normal and abnormal brain dynamics (such as in epilepsy, for example).

  14. A segmented Hybrid Photon Detector with integrated auto-triggering front-end electronics for a PET scanner

    CERN Document Server

    Chesi, Enrico Guido; Joram, C; Mathot, S; Séguinot, Jacques; Weilhammer, P; Ciocia, F; De Leo, R; Nappi, E; Vilardi, I; Argentieri, A; Corsi, F; Dragone, A; Pasqua, D

    2006-01-01

    We describe the design, fabrication and test results of a segmented Hybrid Photon Detector with integrated auto-triggering front-end electronics. Both the photodetector and its VLSI readout electronics are custom designed and have been tailored to the requirements of a recently proposed novel geometrical concept of a Positron Emission Tomograph. Emphasis is put on the PET specific features of the device. The detector has been fabricated in the photocathode facility at CERN.

  15. Bit-Serial Adder Based on Quantum Dots

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarress, Katayoon; Spotnitz, Mathew

    2003-01-01

    A proposed integrated circuit based on quantum-dot cellular automata (QCA) would function as a bit-serial adder. This circuit would serve as a prototype building block for demonstrating the feasibility of quantum-dot computing and for the further development of increasingly complex and increasingly capable quantum-dot computing circuits. QCA-based bit-serial adders would be especially useful in that they would enable the development of highly parallel and systolic processors for implementing fast Fourier, cosine, Hartley, and wavelet transforms. The proposed circuit would complement the QCA-based circuits described in "Implementing Permutation Matrices by Use of Quantum Dots" (NPO-20801), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 42 and "Compact Interconnection Networks Based on Quantum Dots" (NPO-20855), which appears elsewhere in this issue. Those articles described the limitations of very-large-scale-integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes. To enable a meaningful description of the proposed bit-serial adder, it is necessary to further recapitulate the description of a quantum-dot cellular automaton from the first-mentioned prior article: A quantum-dot cellular automaton contains four quantum dots positioned at the corners of a square cell. The cell contains two extra mobile electrons that can tunnel (in the
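The logic of a bit-serial adder, independent of its QCA realization, can be sketched in a few lines (a hypothetical software model, not the proposed circuit itself): operand bits arrive least significant bit first, one per clock, and a single full adder plus a carry flip-flop emits the sum bits serially.

```python
# Behavioral sketch of a bit-serial adder: one full adder reused
# every clock cycle, with the carry stored between cycles.

def bit_serial_add(a_bits, b_bits):
    """a_bits, b_bits: lists of bits, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                      # full-adder sum bit
        carry = (a & b) | (carry & (a ^ b))    # full-adder carry out
        out.append(s)
    out.append(carry)                          # final carry bit
    return out

# 6 + 7 = 13: 0b110 -> [0,1,1], 0b111 -> [1,1,1], LSB first
print(bit_serial_add([0, 1, 1], [1, 1, 1]))   # [1, 0, 1, 1] = 0b1101
```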

  16. The Seis Lagos Carbonatite Complex

    International Nuclear Information System (INIS)

    Issler, R.S.; Silva, G.G. da.

    1980-01-01

    The Seis Lagos Carbonatite Complex, located about 840 km from Manaus in the northwestern part of the Estado do Amazonas, Brazil, is described. Geological reconnaissance mapping by the Radam Project/DNPM of the southwestern portion of the Guianas Craton determined three circular features arranged in a north-south trend and outcropping as thick lateritic radioactive hills surrounded by gneisses and migmatites of the peneplained Guianense Complex. Results of analysis of core drilling samples from the Seis Lagos Carbonatite Complex are compared with some igneous rocks and limestones of the world on the basis of the abundance of their minor and trace elements. Log-log variation diagrams of strontium and barium in carbonatite and limestone, exemplified by South African and Angolan carbonatites, are compared with the Seis Lagos Carbonatite Complex. The Seis Lagos Carbonatite Complex belongs to the siderite-soevite type. (E.G.) [pt

  17. Results of clinical approbation of new local treatment method in the complex therapy of inflammatory parodontium diseases

    Directory of Open Access Journals (Sweden)

    Yu. G. Romanova

    2017-08-01

    Full Text Available Treatment and prevention of inflammatory diseases of parodontium are among the most difficult problems in stomatology today. Purpose of research: estimation of the clinical efficiency of combined local application of the developed agent apigel for oral cavity care and low-frequency electromagnetic field magnetotherapy in the treatment of inflammatory diseases of parodontium. Materials and methods: 46 patients with chronic generalized catarrhal gingivitis and chronic generalized periodontitis of the 1st degree were included in the study. Patients were divided into 2 groups depending on treatment management: basic (n = 23) and control (n = 23). Conventional treatment with local use of a dental gel with camomile was used in the control group. Patients of the basic group were treated with combined local application of apigel and magnetotherapy. Efficiency was estimated with clinical, laboratory, microbiological and functional (ultrasonic Doppler) methods of examination. Results: The application of apigel and a pulsating electromagnetic field in the complex treatment of patients with chronic generalized periodontitis caused positive changes in clinical symptoms and the condition of parodontal tissues, accompanied by a decline of hygienic and parodontal indexes. Compared with patients who received traditional anti-inflammatory therapy, patients treated with local application of apigel and magnetotherapy had a lower incidence of edema. Decrease of pain correlated with improvement of the hygienic condition of the oral cavity and promoted prevention of bacterial contamination of damaged mucous membranes. Estimation of microvasculatory blood flow by ultrasonic Doppler flowmetry revealed more rapid normalization of volumetric and linear systolic blood-flow velocities in the parodontal tissues with the new combined local method.
Conclusions: Effect of the developed local agent in patients

  18. Mental disturbances and perceived complexity of nursing care in medical inpatients : results from a European study

    NARCIS (Netherlands)

    De Jonge, P; Zomerdijk, MM; Huyse, FJ; Fink, P; Herzog, T; Lobo, A; Slaets, JPJ; Arolt; Balogh, N; Cardoso, G; Rigatelli, M

    2001-01-01

    Aims and objectives. The relationship between mental disturbances (anxiety and depression, somatization and alcohol abuse) on admission to internal medicine units and perceived complexity of care as indicated by the nurse at discharge was studied. The goal was to study the utility of short screeners

  20. The structure of complex Lie groups

    CERN Document Server

    Lee, Dong Hoon

    2001-01-01

    Complex Lie groups have often been used as auxiliaries in the study of real Lie groups in areas such as differential geometry and representation theory. To date, however, no book has fully explored and developed their structural aspects.The Structure of Complex Lie Groups addresses this need. Self-contained, it begins with general concepts introduced via an almost complex structure on a real Lie group. It then moves to the theory of representative functions of Lie groups- used as a primary tool in subsequent chapters-and discusses the extension problem of representations that is essential for studying the structure of complex Lie groups. This is followed by a discourse on complex analytic groups that carry the structure of affine algebraic groups compatible with their analytic group structure. The author then uses the results of his earlier discussions to determine the observability of subgroups of complex Lie groups.The differences between complex algebraic groups and complex Lie groups are sometimes subtle ...

  1. SSD as position detector for ASTROMAG

    International Nuclear Information System (INIS)

    Tanimori, Tohru

    1987-01-01

    Astromag is designed to be used in space stations. A reduction in the size of an apparatus will decrease the costs for satellite launching and consequently increase the feasibility of the program. Compared to a wire chamber, a silicon strip detector (SSD) can be smaller in volume by more than 90 percent. The energy available will be largely limited in a space station, but a circuit for a wire chamber requires a power of several watts per channel. The power consumption, on the other hand, will be about 1 mW per channel if CMOS VLSI is used in the readout circuits. Furthermore, a wire chamber consists of a large number of components, while an SSD is basically a simple pile of Si plates, leading to a low frequency of troubles. Since each strip and VLSI is an independent module, it is not likely that malfunction of the entire system will be caused by a small trouble in a module. Techniques required for developing SSD or other components such as VLSI devices can serve various purposes in the field of semiconductor industries. The existence of an industrial basis to support their development is advantageous not only in technical aspects but also for cost reduction. Their structures, major features and problems remaining to be solved are also briefly outlined. (Nogami, K.)

  2. 2nd International Symposium on Chaos, Complexity and Leadership

    CERN Document Server

    Banerjee, Santo

    2015-01-01

    These proceedings from the 2013 symposium on "Chaos, complexity and leadership" reflect current research results from all branches of Chaos, Complex Systems and their applications in Management. Included are the diverse results in the fields of applied nonlinear methods, modeling of data and simulations, as well as theoretical achievements of Chaos and Complex Systems. Also highlighted are Leadership and Management applications of Chaos and Complexity Theory.

  3. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  4. Detection of irradiated constituents in processed food with complex lipid matrices. Results of a research project of Baden-Wuerttemberg

    International Nuclear Information System (INIS)

    Hartmann, M.; Ammon, J.; Berg, H.

    1999-01-01

    The detection of irradiated constituents in processed food with a complex lipid matrix can be adversely affected by two conditions: the small amounts of radiation-induced hydrocarbons are diluted by the fat matrix of the food, or substances accompanying the lipids in the matrix make the analysis more difficult. In those cases, sample preparation by means of Florisil SPE (solid-phase extraction) alone is not enough and requires additional, subsequent SPE argentation chromatography of the Florisil eluate, as this latter analytical method permits reliable detection down to very small amounts of irradiated, fat-containing constituents even in a complex lipid matrix. SPE-Florisil/argentation chromatography detects and selects radiation-induced hydrocarbons in a complex lipid matrix, so that detection of irradiation at even very low doses down to 0.025 kGy is possible. The method described is highly sensitive, inexpensive, and easy to apply. It efficiently substitutes for such complex preparation or measuring methods as SFE-GC/MS or LC-GC/MS. This highly sensitive testing method for detection of food irradiation can be carried out in almost any analytical laboratory. (orig./CB) [de

  5. In Vitro Interactions between 17β-Estradiol and DNA Result in Formation of the Hormone-DNA Complexes

    Directory of Open Access Journals (Sweden)

    Zbynek Heger

    2014-07-01

    Full Text Available Beyond the role of 17β-estradiol (E2) in reproduction and during the menstrual cycle, it has been shown to modulate numerous physiological processes such as cell proliferation, apoptosis, inflammation and ion transport in many tissues. The pathways in which estrogens affect an organism have been partially described, although many questions still exist regarding estrogens' interaction with biomacromolecules. Hence, the present study examined the interaction of four oligonucleotides (17-, 20-, 24- and 38-mer) with E2. The strength of these interactions was evaluated using optical methods, showing that the interaction is influenced by three major factors, namely: oligonucleotide length, E2 concentration and interaction time. In addition, the denaturation phenomenon of DNA revealed that the binding of E2 leads to destabilization of hydrogen bonds between the nitrogenous bases of DNA strands, resulting in a decrease of their melting temperatures (Tm). To obtain a more detailed insight into these interactions, MALDI-TOF mass spectrometry was employed. This study revealed that E2 forms non-covalent physical complexes with DNA, observed as mass shifts of approximately 270 Da (the Mr of E2) to higher molecular masses. Taken together, our results indicate that E2 can affect biomacromolecules such as circulating oligonucleotides, which can trigger mutations, leading to various unwanted effects.

  6. Evolution of temperature responses in the Cladophora vagabunda complex and the C-albida/sericea complex (Chlorophyta)

    NARCIS (Netherlands)

    Breeman, AM; Oh, YS; Hwang, MS; Van den Hoek, C

    Differentiation in temperature responses (survival and growth) was investigated among isolates of two tropical to temperate green algal lineages: the Cladophora vagabunda complex and the C. albida/sericea complex. The results were analysed in relation to published data on 18S rRNA and ITS sequence

  7. Forest, Trees, Dynamics: Results from a novel Wisconsin Card Sorting Test variant Protocol for Studying Global-Local Attention and Complex Cognitive Processes

    Directory of Open Access Journals (Sweden)

    Benjamin eCowley

    2016-02-01

    Full Text Available Background: Recognition of objects and their context relies heavily on the integrated functioning of global and local visual processing. In a realistic setting such as work, this processing becomes a sustained activity, implying a consequent interaction with executive functions. Motivation: There have been many studies of either global-local attention or executive functions; however, it is relatively novel to combine these processes to study a more ecological form of attention. We aim to explore the phenomenon of global-local processing during a task requiring sustained attention and working memory. Methods: We develop and test a novel protocol for global-local dissociation, with a task structure including phases of divided ('rule search') and selective ('rule found') attention, based on the Wisconsin Card Sorting Task. We test it in a laboratory study with 25 participants, and report on behavioural measures (physiological data were also gathered, but are not reported here). We develop novel stimuli with more naturalistic levels of information and noise, based primarily on face photographs, with consequently more ecological validity. Results: We report behavioural results indicating that sustained difficulty when participants test their hypotheses impacts matching-task performance and diminishes the global precedence effect. Results also show a dissociation between subjectively experienced difficulty and the objective dimension of performance, and establish the internal validity of the protocol. Contribution: We contribute an advance in the state of the art for testing global-local attention processes in concert with complex cognition. With three results we establish a connection between global-local dissociation and aspects of complex cognition. Our protocol also improves ecological validity and opens options for testing additional interactions in future work.

  8. Configuration Entropy Calculations for Complex Compounds Technetium

    International Nuclear Information System (INIS)

    Muhayatun; Susanto Imam Rahayu; Surdia, N.M.; Abdul Mutalib

    2002-01-01

    Recently, the study of technetium complexes has been rapidly increasing, due to the benefit of 99m Tc complexes (one of the Tc nuclear isomers), which are widely used for diagnostics. Study of the structure-stability relationship of Tc complexes based on solid angle has been done by Kung using a Solid Angle Factor Sum (SAS). The SAS is hypothesized to be related to stability. SAS has been used by several researchers either for synthesis or for designing the reaction route of Tc complex formation and predicting the geometry of complex structures. Although the advantages of the SAS are very gratifying, the model does not have a theoretical basis able to explain the correlation of steric parameters with the physicochemical properties of complexes, especially those connected to a complex's stability. To improve the SAS model, in this research the model was modified by providing a theoretical basis for SAS. The results obtained from the correlation of the SAS value with the thermodynamic stability parameters of simple complexes show a trend similar to that of the standard entropy (S0). The entropy approximation model was created by involving some factors which are not used in Kung's model. Entropy optimization with respect to the bond length (M-L) has also been done for several complexes. The calculations of the SAS value using the calculated R for more than 100 Tc complexes provide a normalized mean value of 0.8545 ± 0.0851 and have curve profiles similar to those of Kung's model. The entropy value can be obtained by multiplying the natural logarithm of the a priori degeneracy of a certain distribution (Ω) by the Boltzmann constant. The results for Ω and ln Ω of the Tc complexes have a narrow range. The results of this research are able to provide a basic concept for the SAS to explain the structure-stability relationship and to improve Kung's model. (author)
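The entropy relation invoked in the last sentences is the standard Boltzmann formula from statistical mechanics (general background, not specific to this paper):

```latex
S = k_B \ln \Omega
```

where Ω is the a priori degeneracy of the distribution and k_B is the Boltzmann constant.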

  9. Mining sensor data from complex systems

    NARCIS (Netherlands)

    Vespier, Ugo

    2015-01-01

    Today, virtually everything, from natural phenomena to complex artificial and physical systems, can be measured and the resulting information collected, stored and analyzed in order to gain new insight. This thesis shows how complex systems often exhibit diverse behavior at different temporal

  10. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Directory of Open Access Journals (Sweden)

    Xie Xiang

    2007-01-01

    Full Text Available In order to decrease the communication bandwidth and save transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image, suitable for hardware design. This algorithm can provide a low average compression rate (in bits per pixel) with high image quality (in dB) for endoscopic images. Especially, it has low-complexity hardware overhead (only two line buffers) and supports real-time compressing. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions. The ROI can be selected arbitrarily by varying ROI parameters. The VLSI architecture of this compression algorithm is also given. Its hardware design has been implemented in a CMOS process.
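The near-lossless principle described above can be sketched with a generic DPCM-style quantizer (assumed for illustration; the paper's actual predictor and Bayer-pattern handling are not specified here): quantizing the prediction residual with step 2*delta+1 bounds the per-pixel reconstruction error by delta, and choosing delta = 0 inside the ROI makes that region lossless.

```python
# Minimal near-lossless DPCM sketch: previous-pixel prediction,
# residual quantized with step 2*delta + 1, so |original -
# reconstructed| <= delta for every pixel; delta = 0 is lossless.

def encode(pixels, delta):
    prev, codes = 0, []
    for p in pixels:
        r = p - prev
        q = (r + delta) // (2 * delta + 1) if delta else r
        codes.append(q)
        prev = prev + q * (2 * delta + 1)   # track decoder state
    return codes

def decode(codes, delta):
    prev, out = 0, []
    for q in codes:
        prev = prev + q * (2 * delta + 1)
        out.append(prev)
    return out

px = [10, 12, 20, 19, 100]
rec = decode(encode(px, delta=2), delta=2)
assert all(abs(a - b) <= 2 for a, b in zip(px, rec))  # error bound
assert decode(encode(px, delta=0), delta=0) == px     # lossless ROI
```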

  11. Improving Overlay in Nanolithography with a Deformable Mask Holder

    National Research Council Canada - National Science Library

    Harriott, L. R

    2004-01-01

    In very fine-line VLSI photolithography, alignment and overlay errors due to distortion in the projected image of a photomask relative to an existing pattern on a silicon wafer are becoming such serious problems...

  12. Multi-Objective Hypergraph Partitioning Algorithms for Cut and Maximum Subdomain Degree Minimization

    National Research Council Canada - National Science Library

    Selvakkumaran, Navaratnasothie; Karypis, George

    2004-01-01

    ... subdomain degree are simultaneously minimized. Such partitionings are critical for existing and emerging applications in VLSI CAD, as they allow one to both minimize and evenly distribute the interconnects across the physical devices...

  13. A study of using femtosecond LIBS in analyzing metallic thin film-semiconductor interface

    Science.gov (United States)

    Galmed, A. H.; Kassem, A. K.; von Bergmann, H.; Harith, M. A.

    2011-01-01

    Metals and metal alloys are usually employed as interconnections to guide electrical signals between components in very large scale integrated (VLSI) devices. These devices demand higher complexity, better performance and lower cost. Thin film is a common geometry for these metallic applications, requiring a substrate for rigidity. Accurate depth profile analysis of coatings is becoming increasingly important with expanding industrial use in technological fields. A number of articles devoted to LIBS applications for depth-resolved analysis have been published in recent years. In the present work, we study the ability of femtosecond LIBS to perform depth profiling of a Ti thin film of thickness 213 nm deposited onto a silicon (100) substrate, before and after thermal annealing. The measurements revealed that an average ablation rate of 15 nm per pulse was achieved. The thin film was examined using X-Ray Diffraction (XRD) and Atomic Force Microscopy (AFM), while the formation of the interface was examined using Rutherford Back Scattering (RBS) before and after annealing. To verify the depth profiling results, a theoretical simulation model is presented that gives very good agreement with the experimental results.
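The depth-profiling arithmetic implied by the reported figures can be checked in two lines (rounding up, since a partial final pulse still exposes the interface):

```python
import math

# At the reported average ablation rate, how many laser pulses does
# it take to profile through the 213 nm Ti film to the Si substrate?
film_thickness_nm = 213
ablation_rate_nm_per_pulse = 15

pulses = math.ceil(film_thickness_nm / ablation_rate_nm_per_pulse)
print(pulses)   # 15 pulses to reach the Ti/Si interface
```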

  14. Complexity of Economical Systems

    Directory of Open Access Journals (Sweden)

    G. P. Pavlos

    2015-01-01

    Full Text Available In this study new theoretical concepts are described concerning the interpretation of economical complex dynamics. In addition, a summary of an extended algorithm of nonlinear time series analysis is provided, which is applied not only to economical time series but also to other physical complex systems (e.g. [22, 24]). In general, the economy is a vast and complicated set of arrangements and actions wherein agents (consumers, firms, banks, investors, government agencies) buy and sell, speculate, trade, oversee, bring products into being, offer services, invest in companies, strategize, explore, forecast, compete, learn, innovate, and adapt. As a result, economic and financial variables such as foreign exchange rates, gross domestic product, interest rates, production, stock market prices and unemployment exhibit the large-amplitude and aperiodic fluctuations evident in complex systems. Thus, economics can be considered a spatially distributed non-equilibrium complex system, for which new theoretical concepts, such as Tsallis nonextensive statistical mechanics and strange dynamics, percolation, non-Gaussian, multifractal and multiscale dynamics related to fractional Langevin equations, can be used for modeling and understanding of economical complexity, locally or globally.

  15. Ionic conductivity and complexation in liquid dielectrics

    International Nuclear Information System (INIS)

    Zhakin, Anatolii I

    2003-01-01

    Electronic and ionic conductivity in nonpolar liquids is reviewed. Theoretical results on ionic complexation (formation of ion pairs and triplets, dipole-dipole chains, ion-dipole clusters) in liquid dielectrics in an intense external electric field are considered, and the relation between the complexation process and ionic conductivity is discussed. Experimental results supporting the possibility of complexation are presented and compared with theoretical calculations. Onsager's theory about the effect of an intense external electric field on ion-pair dissociation is corrected for the finite size of ions. (reviews of topical problems)

  16. Recognition of VLSI Module Isomorphism

    Science.gov (United States)

    1990-03-01

    forth = forth->next;
    else {
        prev4 = prev4->next;
        forth = forth->next;
    }
    if (header->newI->tail == third) {
        header->newI->tail = prev3;
        prev3->next = NULL;
        end = TRUE;
    }
    if (header->newI->head == third) {
        header->newI->head = third->next;
    }
    if ((third != prev3) && (finished != TRUE)) {
        prev3->next = prev3->next->next;
    }
    third ...

  17. VLSI Based Multiprocessor Communications Networks.

    Science.gov (United States)

    1982-09-01

    Networks". The contract began on September 1, 1980 and was approved on scientific/technical grounds for a duration of three years. Incremental funding was... values for the individual delays will vary from module to module due to processing and fabrication; communicating modules (i,j) are shown in Figure 4.

  18. Temperature Dependent Wire Delay Estimation in Floorplanning

    DEFF Research Database (Denmark)

    Winther, Andreas Thor; Liu, Wei; Nannarelli, Alberto

    2011-01-01

    Due to large variations in temperature in VLSI circuits and the linear relationship between metal resistance and temperature, the delay through wires of the same length can be different. Traditional thermal-aware floorplanning algorithms use wirelength to estimate delay and routability. In this work, we show that using wirelength as the evaluation metric does not always produce a floorplan with the shortest delay. We propose a temperature-dependent wire delay estimation method for thermal-aware floorplanning algorithms, which takes into account the thermal effect on wire delay. The experiment...
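    The physical effect described above can be sketched in a few lines; the resistance model is the textbook linear R(T) law and the delay model is a distributed-RC (Elmore-style) approximation with hypothetical wire parameters, not the estimation method of the paper:

```python
# Minimal sketch (illustrative values, not the paper's model) of why
# equal-length wires can have different delays: metal resistance grows
# linearly with temperature, and wire delay grows with resistance.

ALPHA_CU = 0.0039      # 1/K, temperature coefficient of copper (textbook value)
T_REF = 300.0          # K, reference temperature

def wire_resistance(r_ref, temp):
    """Linear R(T) model: R = R_ref * (1 + alpha * (T - T_ref))."""
    return r_ref * (1.0 + ALPHA_CU * (temp - T_REF))

def elmore_delay(r_wire, c_wire):
    """Distributed-RC (Elmore) delay approximation: 0.5 * R * C."""
    return 0.5 * r_wire * c_wire

r_ref, c = 100.0, 200e-15          # ohms, farads (hypothetical wire)
cool = elmore_delay(wire_resistance(r_ref, 300.0), c)
hot  = elmore_delay(wire_resistance(r_ref, 380.0), c)
print(hot / cool)   # about 1.31: same length, ~31% slower when 80 K hotter
```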

  19. System-Level Modeling and Synthesis Techniques for Flow-Based Microfluidic Very Large Scale Integration Biochips

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan

    Microfluidic biochips integrate different biochemical analysis functionalities on-chip and offer several advantages over conventional biochemical laboratories. In this thesis, we focus on flow-based biochips. The basic building block of such a chip is a valve which can be fabricated at very... We propose a framework for mapping the biochemical applications onto the mVLSI biochips, binding and scheduling the operations and performing fluid routing. A control synthesis framework for determining the exact valve activation sequence required to execute the application is also proposed. In order to reduce the macro-assembly around the chip and enhance chip scalability, we propose an approach for biochip pin count minimization. We also propose a throughput maximization scheme for the cell culture mVLSI biochips, saving time and reducing costs. We have extensively evaluated the proposed...

  20. An Approach to Implementing A Threshold Adjusting Mechanism in Very Complex Negotiations : A Preliminary Result

    OpenAIRE

    Fujita, Katsuhide; Ito, Takayuki; Hattori, Hiromitsu; Klein, Mark

    2007-01-01

    In this paper, we propose a threshold adjusting mechanism for very complex negotiations among software agents. The proposed mechanism helps agents reach an agreement while revealing as little of their private information as possible. Multi-issue negotiation protocols have been studied widely and represent a promising field, since most negotiation problems in the real world involve multiple interdependent issues. We have proposed negotiation protocols where a bidding-based mechanism is used...

  1. Electrochemical behaviour of alkaline copper complexes

    Indian Academy of Sciences (India)

    Abstract. A search for non-cyanide plating baths for copper resulted in the development of alkaline copper complex baths containing trisodium citrate [TSC] and triethanolamine [TEA]. Voltammetric studies were carried out on platinum to understand the electrochemical behaviour of these complexes. In TSC solutions, the.

  2. Optimization of Wind Farm Layout in Complex Terrain

    DEFF Research Database (Denmark)

    Xu, Chang; Yang, Jianchuan; Li, Chenqi

    2013-01-01

    Microscopic site selection for wind farms in complex terrain is a technological difficulty in the development of onshore wind farms. This paper presents a method for optimizing wind farm layout in complex terrain. The method employs the Lissaman and Jensen wake models and takes the wind velocity distribution into account; the optimization variables are subject to boundary conditions and minimum distance conditions. An improved genetic algorithm (GA) with real-number coding was used to search for the optimal result, which was then compared to the result of the experience-based layout method. Results show the advantages of the present method...
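    For orientation (a hedged sketch with illustrative parameter values, not the paper's implementation), the Jensen wake model referenced above gives the fractional velocity deficit behind a turbine as follows:

```python
# Hedged sketch of the Jensen (top-hat) wake model used in wind farm
# layout optimization; parameter values are illustrative only.
import math

def jensen_deficit(ct, rotor_radius, downstream_dist, k=0.075):
    """Fractional velocity deficit at distance x behind a turbine.
    ct: thrust coefficient; k: wake decay constant (~0.075 onshore)."""
    a = 1.0 - math.sqrt(1.0 - ct)
    return a / (1.0 + k * downstream_dist / rotor_radius) ** 2

u_free = 10.0  # m/s free-stream wind speed
deficit = jensen_deficit(ct=0.8, rotor_radius=40.0, downstream_dist=400.0)
print(u_free * (1.0 - deficit))  # reduced speed at a turbine 400 m downwind
```

A layout optimizer evaluates such deficits (superposed over all upstream turbines) inside the GA's fitness function.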

  3. MANAGEMENT OF SPORT COMPLEXES

    Directory of Open Access Journals (Sweden)

    Marian STAN

    2015-07-01

    Full Text Available The actuality of the investigated theme: nowadays human evolution, including intellectual development, shows that the creation of manpower and employment has been the means of achieving life's ambitions in society, and that man is in fact the most important capital of society. The practice of sport also plays a significant role in an individual's life, which is why the initiation, launch and management of sports complexes reveal specific management features that we identify and explain in the current study. The aim of the research is to elaborate a theoretical basis for the management of sports complexes, to identify the factors that influence the efficient existence and functioning of a sports complex in our country, and to determine the responsibilities of a manager who successfully directs the activity of sports complexes. The investigation is based on theoretical methods, such as scientific documentation, analysis, synthesis and comparison, and on empirical research methods, such as study of the research literature and observation. The results of the research indicate that a sports complex must ensure a particular profitability structure to avoid the risk of bankruptcy, and that the administration of sports complexes must keep in view the core functions of contemporary management.

  4. Modern structure of marketing communications complex

    Directory of Open Access Journals (Sweden)

    Hrebenyukova Elena

    2015-08-01

    Full Text Available The article presents the results of desk research analyzing the current structure of the marketing communications complex. A content analysis of scientific and educational literature in marketing showed that there is a certain structural asymmetry in today's complex of marketing communications: a turn away from impersonal tools and toward those that enable personalized communication with the consumer.

  5. Interdisciplinary conflict and organizational complexity.

    Science.gov (United States)

    Guy, M E

    1986-01-01

    Most people think that conflict among the professional staff is inevitable and results from each profession's unique set of values. Each profession then defends itself by claiming its own turf. This article demonstrates that organizational complexity, not professional territorialism, influences the amount of intraorganizational conflict. In a comparison of two psychiatric hospitals, this study shows that there is not necessarily greater conflict across professions than within professions. However, there is a significantly greater amount of conflict among staff at a structurally more complex hospital than at a less-complex hospital, regardless of profession. Implications for management are discussed.

  6. Development and simulation results of a sparsification and readout circuit for wide pixel matrices

    International Nuclear Information System (INIS)

    Gabrielli, A.; Giorgi, F.; Morsani, F.; Villa, M.

    2011-01-01

    In future collider experiments, the increasing luminosity and centre-of-mass energy raise challenging problems in the design of new inner tracking systems. In this context we develop high-efficiency readout architectures for large binary pixel matrices, meant to cope with the highly stressing conditions foreseen in the innermost layers of a tracker [The SuperB Conceptual Design Report, INFN/AE-07/02, SLAC-R-856, LAL 07-15, available online at: http://www.pi.infn.it/SuperB]. We model and design digital readout circuits to be integrated on VLSI ASICs. These architectures can be realized with different technology processes and sensors: they can be implemented on the same silicon sensor substrate as a CMOS MAPS device (Monolithic Active Pixel Sensor), on the CMOS tier of a hybrid pixel sensor, or in a 3D chip where the digital layer is stacked on the sensor and analog layers [V. Re et al., Nucl. Instr. and Meth. in Phys. Res. A, doi:10.1016/j.nima.2010.05.039]. In the presented work, we consider a data-push architecture designed for a sensor matrix with an area of about 1.3 cm² and a pitch of 50 microns. The readout circuit takes great advantage of the high density of in-pixel digital logic allowed by vertical integration. We aim at sustaining a rate density of 100 Mtrack·s⁻¹·cm⁻² with a temporal resolution below 1 μs. We show how this architecture can cope with these stressing conditions by presenting the results of Monte Carlo simulations.
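    A back-of-the-envelope check (my arithmetic, not the paper's) shows what the quoted rate density means per pixel at the stated pitch:

```python
# The quoted 100 Mtrack/s/cm^2 rate density translated to a per-pixel
# hit rate for the stated 50 um pitch (illustrative arithmetic only).
rate_density = 100e6          # tracks per second per cm^2
pitch_cm = 50e-4              # 50 um expressed in cm
pixel_area = pitch_cm ** 2    # 2.5e-5 cm^2
hits_per_pixel = rate_density * pixel_area
print(hits_per_pixel)         # about 2500 hits/s per pixel
```

That is a mean of one hit per pixel roughly every 400 μs, comfortably above the sub-microsecond temporal resolution target, which is why a data-push readout can keep up.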

  7. Development and simulation results of a sparsification and readout circuit for wide pixel matrices

    Energy Technology Data Exchange (ETDEWEB)

    Gabrielli, A.; Giorgi, F. [University and INFN of Bologna (Italy); Morsani, F. [University and INFN of Pisa (Italy); Villa, M. [University and INFN of Bologna (Italy)

    2011-06-15

    In future collider experiments, the increasing luminosity and centre-of-mass energy raise challenging problems in the design of new inner tracking systems. In this context we develop high-efficiency readout architectures for large binary pixel matrices, meant to cope with the highly stressing conditions foreseen in the innermost layers of a tracker [The SuperB Conceptual Design Report, INFN/AE-07/02, SLAC-R-856, LAL 07-15, available online at: http://www.pi.infn.it/SuperB]. We model and design digital readout circuits to be integrated on VLSI ASICs. These architectures can be realized with different technology processes and sensors: they can be implemented on the same silicon sensor substrate as a CMOS MAPS device (Monolithic Active Pixel Sensor), on the CMOS tier of a hybrid pixel sensor, or in a 3D chip where the digital layer is stacked on the sensor and analog layers [V. Re et al., Nucl. Instr. and Meth. in Phys. Res. A, doi:10.1016/j.nima.2010.05.039]. In the presented work, we consider a data-push architecture designed for a sensor matrix with an area of about 1.3 cm² and a pitch of 50 microns. The readout circuit takes great advantage of the high density of in-pixel digital logic allowed by vertical integration. We aim at sustaining a rate density of 100 Mtrack·s⁻¹·cm⁻² with a temporal resolution below 1 μs. We show how this architecture can cope with these stressing conditions by presenting the results of Monte Carlo simulations.

  8. Absence of Non-histone Protein Complexes at Natural Chromosomal Pause Sites Results in Reduced Replication Pausing in Aging Yeast Cells

    Directory of Open Access Journals (Sweden)

    Marleny Cabral

    2016-11-01

    Full Text Available There is substantial evidence that genomic instability increases during aging. Replication pausing (and stalling) at difficult-to-replicate chromosomal sites may induce genomic instability. Interestingly, in aging yeast cells, we observed reduced replication pausing at various natural replication pause sites (RPSs) in ribosomal DNA (rDNA) and non-rDNA locations (e.g., silent replication origins and tRNA genes). The reduced pausing occurs independently of the DNA helicase Rrm3p, which facilitates replication past these non-histone protein-complex-bound RPSs, and independently of the deacetylase Sir2p. Conditions of caloric restriction (CR), which extend life span, also cause reduced replication pausing at the 5S rDNA and at tRNA genes. In aged and CR cells, the RPSs are less occupied by their specific non-histone protein complexes (e.g., the preinitiation complex TFIIIC), likely because members of these complexes have primarily cytosolic localization. These conditions may lead to reduced replication pausing and may lower replication stress at these sites during aging.

  9. Determinants of Complexity of Sovereign Debt Negotiation

    Directory of Open Access Journals (Sweden)

    Lidia Mesjasz

    2016-07-01

    Full Text Available The situation on all kinds of financial markets is determined by their increasing complexity. Negotiation of sovereign debt is also a complex endeavor. Its complexity results both from structural characteristics (the number of actors and problems of coordination, communication, cooperation and conflict) and from cognitive limitations. A survey of the literature on sovereign debt management shows that no research has been done on the complexity of sovereign debt management, and of sovereign debt negotiation in particular. The aim of the paper is to provide initial framework concepts for the complexity of sovereign debt restructuring negotiation, referring to a universal collection of characteristics of negotiation. A model of debt restructuring negotiation is elaborated and a set of its complexity-related characteristics is proposed.

  10. Symmetrized complex amplitudes for He double photoionization from the time-dependent close coupling and exterior complex scaling methods

    International Nuclear Information System (INIS)

    Horner, D.A.; Colgan, J.; Martin, F.; McCurdy, C.W.; Pindzola, M.S.; Rescigno, T.N.

    2004-01-01

    Symmetrized complex amplitudes for the double photoionization of helium are computed by the time-dependent close-coupling and exterior complex scaling methods, and it is demonstrated that both methods are capable of the direct calculation of these amplitudes. The results are found to be in excellent agreement with each other and in very good agreement with results of other ab initio methods and experiment

  11. Improvement of CMOS VLSI rad tolerance by processing technics

    International Nuclear Information System (INIS)

    Guyomard, D.; Desoutter, I.

    1986-01-01

    The following study concerns the development of integrated circuits for fields requiring only relatively low radiation tolerance levels, especially the civilian space sector. Process modifications constitute the core of our study; they have been carried out, and our work and main results are reported in this paper. Well-known 2.5 and 3 μm CMOS technologies are considered. A first set of modifications enables us to double the cumulative dose tolerance of a 4 Kbit SRAM while keeping the same kind of damage; we obtain memories which tolerate radiation doses as high as 16 krad(Si). Repeatability of the results, linked to the quality assurance of this specific circuit, is reported here. A second set of modifications concerns the processing of gate arrays; in particular, the choice of the silicon substrate type (epitaxial substrate) is under investigation. On the other hand, a complete study of a test vehicle allows us to accurately measure the radiation tolerance of various components of the cell library [fr

  12. Measurement methods on the complexity of network

    Institute of Scientific and Technical Information of China (English)

    LIN Lin; DING Gang; CHEN Guo-song

    2010-01-01

    Based on the size of a network and the number of paths in the network, we propose a model of the topology complexity of a network. Based on analyses of the effects of the number of pieces of equipment, the types of equipment and the processing time of the nodes on the complexity of an equipment-constrained network, a complexity model of equipment-constrained networks was constructed to measure their integrated complexity. Algorithms for the two models were also developed, and an automatic generator of random single-label networks was developed to test the models. The results show that the models can correctly evaluate the topology complexity and the integrated complexity of the networks.
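    The two ingredients the abstract names, network size and path count, can be combined into a toy complexity score; this construction is purely illustrative and not the paper's model:

```python
# Toy illustration (my construction, not the paper's model): a topology
# complexity score built from network size and the number of simple
# source-to-sink paths.
import math

def count_simple_paths(adj, src, dst, seen=None):
    """Count simple paths from src to dst by depth-first search."""
    seen = set() if seen is None else seen
    if src == dst:
        return 1
    seen.add(src)
    total = sum(count_simple_paths(adj, nxt, dst, seen)
                for nxt in adj.get(src, ()) if nxt not in seen)
    seen.discard(src)
    return total

def topology_complexity(adj, src, dst):
    """Score grows with both node count and (log-scaled) path count."""
    return len(adj) * math.log2(1 + count_simple_paths(adj, src, dst))

# Diamond network: two disjoint paths from source "s" to sink "t".
diamond = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
print(count_simple_paths(diamond, "s", "t"))  # -> 2
```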

  13. ComplexViewer: visualization of curated macromolecular complexes.

    Science.gov (United States)

    Combe, Colin W; Sivade, Marine Dumousseau; Hermjakob, Henning; Heimbach, Joshua; Meldal, Birgit H M; Micklem, Gos; Orchard, Sandra; Rappsilber, Juri

    2017-11-15

    Proteins frequently function as parts of complexes, assemblages of multiple proteins and other biomolecules, yet network visualizations usually only show proteins as parts of binary interactions. ComplexViewer visualizes interactions with more than two participants and thereby avoids the need to first expand these into multiple binary interactions. Furthermore, if binding regions between molecules are known then these can be displayed in the context of the larger complex. Availability: freely available under the Apache version 2 license; EMBL-EBI Complex Portal: http://www.ebi.ac.uk/complexportal; source code: https://github.com/MICommunity/ComplexViewer; package: https://www.npmjs.com/package/complexviewer; http://biojs.io/d/complexviewer. Language: JavaScript; web technology: Scalable Vector Graphics; libraries: D3.js. Contact: colin.combe@ed.ac.uk or juri.rappsilber@ed.ac.uk. © The Author 2017. Published by Oxford University Press.

  14. Non-equilibrium phase transitions in complex plasma

    International Nuclear Information System (INIS)

    Suetterlin, K R; Raeth, C; Ivlev, A V; Thomas, H M; Khrapak, S; Zhdanov, S; Rubin-Zuzic, M; Morfill, G E; Wysocki, A; Loewen, H; Goedheer, W J; Fortov, V E; Lipaev, A M; Molotkov, V I; Petrov, O F

    2010-01-01

    Complex plasma being the 'plasma state of soft matter' is especially suitable for investigations of non-equilibrium phase transitions. Non-equilibrium phase transitions can manifest in dissipative structures or self-organization. Two specific examples are lane formation and phase separation. Using the permanent microgravity laboratory PK-3 Plus, operating onboard the International Space Station, we performed unique experiments with binary mixtures of complex plasmas that showed both lane formation and phase separation. These observations have been augmented by comprehensive numerical and theoretical studies. In this paper we present an overview of our most important results. In addition we put our results in context with research of complex plasmas, binary systems and non-equilibrium phase transitions. Necessary and promising future complex plasma experiments on phase separation and lane formation are briefly discussed.

  15. Visual detection of defects in solder joints

    Science.gov (United States)

    Blaignan, V. B.; Bourbakis, Nikolaos G.; Moghaddamzadeh, Ali; Yfantis, Evangelos A.

    1995-03-01

    The automatic, real-time visual acquisition and inspection of VLSI boards requires the use of machine vision and artificial intelligence methodologies in a new 'frame' for the achievement of better results regarding efficiency, product quality and automated service. In this paper the visual detection and classification of different types of defects on solder joints in PC boards is presented, combining several image processing methods such as smoothing, segmentation, edge detection, contour extraction and shape analysis. The results of this paper are based on simulated solder defects and a real one.
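    One stage of such a pipeline, edge detection, can be sketched with a Sobel operator; this is a pure-Python toy for illustration, not the authors' system (a real inspection line would use an optimized vision library):

```python
# Minimal Sobel edge detection on a grayscale intensity grid,
# illustrating one stage of a solder-joint inspection pipeline.
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude for interior pixels of a 2-D intensity grid."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical step edge: strong response along the transition columns.
img = [[0, 0, 9, 9]] * 4
edges = sobel_magnitude(img)
print(edges[1])   # -> [0.0, 36.0, 36.0, 0.0]
```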

  16. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method, and compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity using the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate using the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; therefore, the linear complexity is generally given as an estimate. A linearization method, by contrast, calculates from the algorithm of the PRNG and can determine the lower bound of linear complexity.
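    For reference, the Berlekamp-Massey algorithm mentioned above computes linear complexity from an output sequence; the following GF(2) implementation is my own illustrative sketch, not taken from the paper:

```python
# Berlekamp-Massey over GF(2): length of the shortest LFSR that
# generates the given binary sequence (its linear complexity).
def berlekamp_massey(bits):
    n = len(bits)
    c = [0] * n          # current connection polynomial C(x)
    b = [0] * n          # previous polynomial B(x)
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does C(x) predict bit i correctly?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]                       # save C before updating
            shift = i - m
            for j in range(0, n - shift):  # C(x) += x^shift * B(x)
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

seq = [0, 0, 0, 1]                 # 0...01 has linear complexity len(seq)
print(berlekamp_massey(seq))       # -> 4
```

Note the dependence on the output sequence: the same generator seeded differently can yield different finite-sequence estimates, which is the limitation the linearization method avoids.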

  17. Recent results on howard's algorithm

    DEFF Research Database (Denmark)

    Miltersen, P.B.

    2012-01-01

    is generally recognized as fast in practice; until recently, however, its worst-case time complexity was poorly understood. A surge of results since 2009 has led us to a much more satisfactory understanding of the worst-case time complexity of the algorithm in the various settings in which it applies...

  18. Complexation of carboxylate on smectite surfaces.

    Science.gov (United States)

    Liu, Xiandong; Lu, Xiancai; Zhang, Yingchun; Zhang, Chi; Wang, Rucheng

    2017-07-19

    We report a first-principles molecular dynamics (FPMD) study of carboxylate complexation on clay surfaces. By taking acetate as a model carboxylate, we investigate its inner-sphere complexes adsorbed on clay edges (including (010) and (110) surfaces) and in the interlayer space. Simulations show that acetate forms stable monodentate complexes on edge surfaces and a bidentate complex with Ca²⁺ in the interlayer region. The free energy calculations indicate that complexation on edge surfaces is slightly more stable than in the interlayer space. By integrating the pKa values and desorption free energies of Al-coordinated water calculated previously (X. Liu, X. Lu, E. J. Meijer, R. Wang and H. Zhou, Geochim. Cosmochim. Acta, 2012, 81, 56-68; X. Liu, J. Cheng, M. Sprik, X. Lu and R. Wang, Geochim. Cosmochim. Acta, 2014, 140, 410-417), the pH dependence of acetate complexation has been revealed. It shows that acetate forms inner-sphere complexes on (110) in a very limited mildly acidic pH range, while it can complex on (010) over the whole common pH range. The results presented in this study form a physical basis for understanding the geochemical processes involving clay-organics interactions.

  19. Symbolic Dynamics and Grammatical Complexity

    Science.gov (United States)

    Hao, Bai-Lin; Zheng, Wei-Mou

    The following sections are included: * Formal Languages and Their Complexity * Formal Language * Chomsky Hierarchy of Grammatical Complexity * The L-System * Regular Language and Finite Automaton * Finite Automaton * Regular Language * Stefan Matrix as Transfer Function for Automaton * Beyond Regular Languages * Feigenbaum and Generalized Feigenbaum Limiting Sets * Even and Odd Fibonacci Sequences * Odd Maximal Primitive Prefixes and Kneading Map * Even Maximal Primitive Prefixes and Distinct Excluded Blocks * Summary of Results

  20. A study on the identification of cognitive complexity factors related to the complexity of procedural steps

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jin Kyun; Jeong, Kwang Sup; Jung, Won Dea [KAERI, Taejon (Korea, Republic of)

    2004-07-01

    In complex systems, it is well recognized that the provision of understandable procedures that allow operators to clarify 'what needs to be done' and 'how to do it' is one of the requisites to confirm their safety. In this regard, the step complexity (SC) measure that can quantify the complexity of procedural steps in emergency operating procedures (EOPs) of a nuclear power plant (NPP) was suggested. However, the necessity of additional complexity factors that can consider a cognitive aspect in evaluating the complexity of procedural steps is evinced from the comparisons between SC scores and operators' performance data. To this end, the comparisons between operators' performance data with their behavior in conducting prescribed activities of procedural steps are conducted in this study. As a result, two kinds of complexity factors (the abstraction level of knowledge and the level of engineering decision) that could affect operators' cognitive burden are identified. Although a well-designed experiment is indispensable in confirming the appropriateness of cognitive complexity factors, it is strongly believed that the change of an operator's performance can be more authentically explained if they are taken into consideration.

  1. A study on the identification of cognitive complexity factors related to the complexity of procedural steps

    International Nuclear Information System (INIS)

    Park, Jin Kyun; Jeong, Kwang Sup; Jung, Won Dea

    2004-01-01

    In complex systems, it is well recognized that the provision of understandable procedures that allow operators to clarify 'what needs to be done' and 'how to do it' is one of the requisites to confirm their safety. In this regard, the step complexity (SC) measure that can quantify the complexity of procedural steps in emergency operating procedures (EOPs) of a nuclear power plant (NPP) was suggested. However, the necessity of additional complexity factors that can consider a cognitive aspect in evaluating the complexity of procedural steps is evinced from the comparisons between SC scores and operators' performance data. To this end, the comparisons between operators' performance data with their behavior in conducting prescribed activities of procedural steps are conducted in this study. As a result, two kinds of complexity factors (the abstraction level of knowledge and the level of engineering decision) that could affect operators' cognitive burden are identified. Although a well-designed experiment is indispensable in confirming the appropriateness of cognitive complexity factors, it is strongly believed that the change of an operator's performance can be more authentically explained if they are taken into consideration

  2. Functional complexity and ecosystem stability: an experimental approach

    Energy Technology Data Exchange (ETDEWEB)

    Van Voris, P.; O' Neill, R.V.; Shugart, H.H.; Emanuel, W.R.

    1978-01-01

    The complexity-stability hypothesis was experimentally tested using intact terrestrial microcosms. Functional complexity was defined as the number and significance of component interactions (i.e., population interactions, physical-chemical reactions, biological turnover rates) influenced by nonlinearities, feedbacks, and time delays. It was postulated that functional complexity could be nondestructively measured through analysis of a signal generated by the system. The power spectrum of hourly CO₂ efflux from eleven old-field microcosms was analyzed for the number of low-frequency peaks and used to rank the functional complexity of each system. Ranking of ecosystem stability was based on the capacity of the system to retain essential nutrients, measured by the net loss of Ca after the system was stressed. Rank correlation supported the hypothesis that increasing ecosystem functional complexity leads to increasing ecosystem stability. The results indicated that complex functional dynamics can serve to stabilize the system, and demonstrated that microcosms are useful tools for system-level investigations.
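    The signal-analysis step described above, ranking systems by the low-frequency structure of a power spectrum, can be sketched with a naive DFT (pure Python, illustrative only; the study used hourly CO₂ efflux series):

```python
# One-sided power spectrum of a real-valued series via a naive O(n^2) DFT,
# the kind of spectrum whose low-frequency peaks the study counted.
import cmath, math

def power_spectrum(x):
    """Return |X_k|^2 for k = 0 .. n//2."""
    n = len(x)
    spec = []
    for k in range(n // 2 + 1):
        xk = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spec.append(abs(xk) ** 2)
    return spec

# A pure 2-cycles-per-record sinusoid concentrates all power in bin k = 2.
n = 16
signal = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]
spec = power_spectrum(signal)
print(max(range(len(spec)), key=spec.__getitem__))  # -> 2
```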

  3. Complex formation of p-carboxybenzeneboronic acid with fructose

    International Nuclear Information System (INIS)

    Bulbul Islam, T.M.; Yoshino, K.

    2000-01-01

    To increase the solubility of p-carboxybenzeneboronic acid (PCBA) at physiological pH 7.4, the complex formation of PCBA with fructose has been studied by ¹¹B-NMR. PCBA formed a complex with fructose, and the complex increased the solubility of PCBA. The complex formation constant (log K) at pH 7.4 was obtained as 2.75 from the ¹¹B-NMR spectra. Based on this result, the complex formation ability of PCBA with fructose is discussed. (author)
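    As an illustrative application of the reported constant (my arithmetic, not the paper's), log K = 2.75 fixes the fraction of PCBA bound at a given free fructose concentration under a 1:1 binding assumption:

```python
# Fraction of PCBA present as the fructose complex, assuming 1:1 binding
# and excess free fructose; log K = 2.75 is taken from the abstract.
K = 10 ** 2.75   # formation constant, M^-1

def fraction_complexed(fructose_molar):
    """[PCBA-fructose] / [PCBA]_total = K[F] / (1 + K[F])."""
    return K * fructose_molar / (1.0 + K * fructose_molar)

for f in (1e-3, 1e-2, 1e-1):
    print(f, round(fraction_complexed(f), 3))
```

At millimolar fructose only about a third of the PCBA is complexed, while at 0.1 M the acid is almost fully bound, consistent with the reported solubility enhancement.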

  4. Kinetics of the reactions of hydrated electrons with metal complexes

    International Nuclear Information System (INIS)

    Korsse, J.

    1983-01-01

    The reactivity of the hydrated electron towards metal complexes is considered. Experiments are described involving metal EDTA and similar complexes. The metal ions studied are mainly Ni²⁺, Co²⁺ and Cu²⁺. Rates of the reactions of the complexes with e⁻(aq) were measured using the pulse radiolysis technique. It is shown that the reactions of e⁻(aq) with the copper complexes display unusually small kinetic salt effects. The results suggest long-range electron transfer by tunneling. A tunneling model is presented and the experimental results are discussed in terms of this model. Results of approximate molecular orbital calculations of some redox potentials are given, for EDTA chelates as well as for series of hexacyano and hexaquo complexes. Finally, equilibrium constants for the formation of ternary complexes are reported. (Auth./G.J.P.)

  5. Synchronization in node of complex networks consist of complex chaotic system

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qiang, E-mail: qiangweibeihua@163.com [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024 (China); Xie, Cheng-jun [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Liu, Hong-jun [School of Information Engineering, Weifang Vocational College, Weifang, 261041 (China); Li, Yan-hui [The Library, Weifang Vocational College, Weifang, 261041 (China)

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When the complex networks achieve synchronization, different components of the complex state variables synchronize up to different scaling complex functions via a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.

  6. Exotic plant species around Jeongeup Research Complex and RFT industrial complex

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Cha, Min Kyoung; Ryu, Tae Ho; Lee, Yun Jong; Kim, Jin Hong

    2015-01-01

    In Shinjeong-dong of Jeongeup, there are three government-supported research institutes and an RFT industrial complex which is currently being established. Increased human activity can affect flora and fauna as a man-made pressure on the region. As a baseline study, the status of exotic plants was investigated prior to full operation of the RFT industrial complex. A total of 54 species and 1 variety of naturalized or introduced plants were found in the study area. Among them, three species (Ambrosia artemisiifolia var. elatior, Rumex acetosella and Aster pilosus) belong to 'nuisance species', and four species (Phytolacca americana, Ipomoea hederacea, Erechtites hieracifolia and Rudbeckia laciniata) to 'monitor species' designated by the Ministry of Environment. Some of the naturalized trees and plants were intentionally introduced into this area, while others arrived naturally. Physalis angulata appears to have entered the study area mixed with animal feed, as its distribution coincided with the transportation route of the feed. Liquidambar styraciflua merits ecological investigation regarding its possible expansion into the nearby Naejang National Park, as its leaf shape and autumn color are very similar to those of maple trees. The number of naturalized plants around the RFT industrial complex will increase with increases in the floating population and in human activities associated with the construction of factories and the operation of the complex. The results of this study provide baseline data for assessing ecological change in the region as the RFT industrial complex comes into operation.

  7. Exotic plant species around Jeongeup Research Complex and RFT industrial complex

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Kyu; Cha, Min Kyoung; Ryu, Tae Ho; Lee, Yun Jong; Kim, Jin Hong [Advanced Radiation Technology Institute, Korea Atomic Energy Research Institute, Jeongeup(Korea, Republic of)

    2015-08-15

    In Shinjeong-dong of Jeongeup, there are three government-supported research institutes and an RFT industrial complex which is currently being established. Increased human activity can affect flora and fauna as a man-made pressure on the region. As a baseline study, the status of exotic plants was investigated prior to full operation of the RFT industrial complex. A total of 54 species and 1 variety of naturalized or introduced plants were found in the study area. Among them, three species (Ambrosia artemisiifolia var. elatior, Rumex acetosella and Aster pilosus) belong to 'nuisance species', and four species (Phytolacca americana, Ipomoea hederacea, Erechtites hieracifolia and Rudbeckia laciniata) to 'monitor species' designated by the Ministry of Environment. Some of the naturalized trees and plants were intentionally introduced into this area, while others arrived naturally. Physalis angulata appears to have entered the study area mixed with animal feed, as its distribution coincided with the transportation route of the feed. Liquidambar styraciflua merits ecological investigation regarding its possible expansion into the nearby Naejang National Park, as its leaf shape and autumn color are very similar to those of maple trees. The number of naturalized plants around the RFT industrial complex will increase with increases in the floating population and in human activities associated with the construction of factories and the operation of the complex. The results of this study provide baseline data for assessing ecological change in the region as the RFT industrial complex comes into operation.

  8. Acute stress influences the discrimination of complex scenes and complex faces in young healthy men.

    Science.gov (United States)

    Paul, M; Lech, R K; Scheil, J; Dierolf, A M; Suchan, B; Wolf, O T

    2016-04-01

    The stress-induced release of glucocorticoids has been demonstrated to influence hippocampal functions via the modulation of specific receptors. At the behavioral level, stress is known to influence hippocampus-dependent long-term memory. In recent years, studies have consistently associated the hippocampus with the non-mnemonic perception of scenes, while adjacent regions in the medial temporal lobe were associated with the perception of objects and faces. So far it is not known whether and how stress influences non-mnemonic perceptual processes. In a behavioral study, fifty male participants were subjected either to the stressful socially evaluated cold-pressor test or to a non-stressful control procedure before they completed a visual discrimination task comprising scenes and faces. The complexity of the face and scene stimuli was manipulated in easy and difficult conditions. A significant three-way interaction between stress, stimulus type and complexity was found. Stressed participants tended to commit more errors in the complex-scenes condition. For complex faces, a descriptive tendency in the opposite direction (fewer errors under stress) was observed. As a result, the difference between the number of errors for scenes and the number of errors for faces was significantly larger in the stress group. These results indicate that, beyond the effects of stress on long-term memory, stress influences the discrimination of spatial information, especially when perception is characterized by high complexity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Complex partial seizure, disruptive behaviours and the Nigerian ...

    African Journals Online (AJOL)

    Background: A complex partial seizure is an epileptic seizure which results in impairment of responsiveness or awareness, such as an altered level of consciousness. Complex partial seizures are often preceded by an aura such as depersonalization, feelings of déjà vu, jamais vu and fear. The ictal phase of complex partial ...

  10. Synthesis and structure determination of a stable organometallic uranium(V) imine complex and its isolobal anionic U(IV)-ate complex

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, M.; Botoshanskii, M.; Eisen, M.S. [Schulich Faculty of Chemistry, and Institute of Catalysis Science and Technology, Technion Israel Institute of Technology, Haifa (Israel); Bannenberg, Th.; Tamm, M. [Institut fur Anorganische und Analytische Chemie, Technische Universitat Braunschweig (Germany)

    2010-06-15

    The reaction of one equivalent of Cp*{sub 2}UCl{sub 2} with 2-(trimethylsilyl-imino)-1,3-di-tert-butyl-imidazoline in boiling toluene afforded a one-electron oxidation of the uranium metal and the opening of the N-heterocyclic ring, resulting in the formation of an organometallic uranium(V) imine complex. This complex crystallized with one molecule of toluene in the unit cell, and its solid-state structure was determined by X-ray diffraction analysis. When the same reaction was performed in perdeuterated toluene, a myriad of organometallic complexes was obtained; however, when equimolar amounts of water were used in toluene, the same complex was obtained, and its solid-state characterization shows two independent molecules in the unit cell with an additional water molecule. For comparison of the geometric parameters, the corresponding isolobal anionic uranium(IV) complex [Cp*{sub 2}UCl{sub 3}]{sup -} was synthesized by the reaction of Cp*{sub 2}UCl{sub 2} with 1,3-di-tert-butyl-imidazolium chloride, and the resulting U(IV)-ate complex was characterized by X-ray diffraction analysis. (authors)

  11. Managing Complexity

    Energy Technology Data Exchange (ETDEWEB)

    Chassin, David P.; Posse, Christian; Malard, Joel M.

    2004-08-01

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This paper explores the state of the art in the use of physical analogs for understanding the behavior of some econophysical systems and for deriving stable and robust control strategies for them. In particular, we review and discuss applications of some analytic methods based on the thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot otherwise be understood.

  12. Identification and sensitivity analysis of a correlated ground rule system (design arc)

    Science.gov (United States)

    Eastman, Eric; Chidambarrao, Dureseti; Rausch, Werner; Topaloglu, Rasit O.; Shao, Dongbing; Ramachandran, Ravikumar; Angyal, Matthew

    2017-04-01

    We demonstrate a tool which can function as an interface between VLSI designers and process-technology engineers throughout the Design-Technology Co-optimization (DTCO) process. This tool uses a Monte Carlo algorithm on the output of lithography simulations to model the frequency of fail mechanisms on wafer. Fail mechanisms are defined according to the process integration flow: by Boolean operations and measurements between original and derived shapes. Another feature of this design rule optimization methodology is the use of a Markov-Chain-based algorithm to perform a sensitivity analysis, the output of which may be used by process engineers to target key process-induced variabilities for improvement. This tool is used to analyze multiple Middle-Of-Line fail mechanisms in a 10nm inverter design and identify key process assumptions that will most strongly affect the yield of the structures. This tool and the underlying algorithm are also shown to be scalable to arbitrarily complex geometries in three dimensions, a characteristic that is becoming more important with the introduction of novel patterning technologies and more complex 3-D on-wafer structures.
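
The Monte Carlo fail-frequency idea can be sketched as follows. This is a minimal illustration only: the distributions, dimensions, and fail criterion are assumptions, not taken from the paper's tool. Process variations are sampled, a derived spacing is computed per sample, and a fail is counted whenever the spacing violates a minimum-space rule.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

nominal_space = 20.0   # nm, drawn space between two features (assumed)
cd_sigma = 1.5         # nm, critical-dimension variation (assumed)
overlay_sigma = 2.0    # nm, overlay variation (assumed)
min_space = 12.0       # nm, fail threshold (assumed)

# each sample: the space shrinks by CD bias on both edges plus overlay shift
cd = rng.normal(0.0, cd_sigma, N)
overlay = rng.normal(0.0, overlay_sigma, N)
space = nominal_space - 2.0 * cd - np.abs(overlay)

fail_rate = np.mean(space < min_space)
print(f"estimated fail rate: {fail_rate:.4f}")

# crude one-at-a-time sensitivity: shrink each sigma by 10% and re-estimate
for name, (cd_s, ov_s) in [("cd", (0.9, 1.0)), ("overlay", (1.0, 0.9))]:
    s = (nominal_space
         - 2.0 * rng.normal(0.0, cd_sigma * cd_s, N)
         - np.abs(rng.normal(0.0, overlay_sigma * ov_s, N)))
    print(name, "sigma -10% ->", f"{np.mean(s < min_space):.4f}")
```

The one-at-a-time loop stands in for the paper's Markov-Chain-based sensitivity analysis; the point is only that the same sampling machinery ranks which variation source most affects yield.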

  13. Third International Conference on Complex Systems

    CERN Document Server

    Minai, Ali A; Unifying Themes in Complex Systems

    2006-01-01

    In recent years, scientists have applied the principles of complex systems science to increasingly diverse fields. The results have been nothing short of remarkable: their novel approaches have provided answers to long-standing questions in biology, ecology, physics, engineering, computer science, economics, psychology and sociology. The Third International Conference on Complex Systems attracted over 400 researchers from around the world. The conference aimed to encourage cross-fertilization between the many disciplines represented and to deepen our understanding of the properties common to all complex systems. This volume contains over 35 papers selected from those presented at the conference, on topics including self-organization in biology, ecological systems, language, economic modeling, artificial life, robotics, and complexity and art. ALI MINAI is an Affiliate of the New England Complex Systems Institute and an Associate Professor in the Department of Electrical and Computer Engine...

  14. Effective Complexity of Stationary Process Realizations

    Directory of Open Access Journals (Sweden)

    Arleta Szkoła

    2011-06-01

    Full Text Available The concept of the effective complexity of an object as the minimal description length of its regularities was initiated by Gell-Mann and Lloyd. The regularities are modeled by means of ensembles, which are probability distributions on finite binary strings. In our previous paper [1] we proposed a definition of effective complexity in precise terms of algorithmic information theory. Here we investigate the effective complexity of binary strings generated by stationary, in general not computable, processes. We show that, under not too strong conditions, long typical process realizations are effectively simple. Our results become most transparent in the context of coarse effective complexity, a modification of the original notion of effective complexity that needs fewer parameters in its definition. A similar modification of the related concept of sophistication has been suggested by Antunes and Fortnow.

  15. Complexity for Artificial Substrates (

    NARCIS (Netherlands)

    Loke, L.H.L.; Jachowski, N.R.; Bouma, T.J.; Ladle, R.J.; Todd, P.A.

    2014-01-01

    Physical habitat complexity regulates the structure and function of biological communities, although the mechanisms underlying this relationship remain unclear. Urbanisation, pollution, unsustainable resource exploitation and climate change have resulted in the widespread simplification (and loss)

  16. The Microchip Optera Project

    National Research Council Canada - National Science Library

    Moss, Cynthia; Horiuchi, Timothy K

    2006-01-01

    .... The long-term goal of this project is to build a tiny, low-power, neuromorphic VLSI-based model of an FM bat echolocation system that can be demonstrated in an aerial target capture task using a flying vehicle...

  17. Complexation of the An(IV) by NTA; Complexation des An(IV) par le NTA

    Energy Technology Data Exchange (ETDEWEB)

    Bonin, L. [Paris-11 Univ., 91 - Orsay (France)]|[CEA Valrho, Lab. de Chimie des Actinides (LCA), 30 - Marcoule (France)

    2006-07-01

    In the framework of the Nuclear and Environmental Toxicology program developed in France, it was decided to resume studies concerning actinide decorporation. A study similar to that of neptunium complexation by citrate ions has been carried out on the complexation of Np(IV) with nitrilotriacetic acid (NTA). NTA can be considered a model of decorporating molecules (amino-carboxylate ligands). The results of the spectrophotometric measurements being encouraging, the behaviour of several actinides at the same oxidation state (+IV) (Th(IV), U(IV), Np(IV) and Pu(IV)) was determined. The experimental results are presented. In order to determine the structure of the complexes of stoichiometry 1:2, An(IV)-(NTA){sub 2}, in solution, quantum chemistry calculations and EXAFS measurements were carried out in parallel. These studies confirm the presence of An(IV)-nitrogen bonds whose length decreases from thorium to plutonium and indicate the presence of a water molecule bound to the thorium and the uranium (coordination number 8 for Np/Pu, 9 for Th/U). The evolution of the complexation constants determined in this study in terms of 1/r (r being the ionic radius of the cation, taking into account its coordination number of 8 or 9) confirms the change of coordination number between Th/U and Np/Pu. (O.M.)

  18. Deletion of flbA results in increased secretome complexity and reduced secretion heterogeneity in colonies of Aspergillus niger.

    Science.gov (United States)

    Krijgsheld, Pauline; Nitsche, Benjamin M; Post, Harm; Levin, Ana M; Müller, Wally H; Heck, Albert J R; Ram, Arthur F J; Altelaar, A F Maarten; Wösten, Han A B

    2013-04-05

    Aspergillus niger is a cell factory for the production of enzymes. This fungus secretes proteins in the central part and at the periphery of the colony. The sporulating zone of the colony overlapped with the nonsecreting subperipheral zone, indicating that sporulation inhibits protein secretion. Indeed, strain ΔflbA, which is affected early in the sporulation program, secreted proteins throughout the colony. In contrast, the ΔbrlA strain, which initiates but does not complete sporulation, did not show altered spatial secretion. The secretome of 5 concentric zones of xylose-grown ΔflbA colonies was assessed by quantitative proteomics. In total, 138 proteins with a signal sequence for secretion were identified in the medium of ΔflbA colonies. Of these, 18 proteins had never been reported to be part of the secretome of A. niger, while 101 proteins had previously not been identified in the culture medium of xylose-grown wild-type colonies. Taken together, inactivation of flbA results in spatial changes in secretion and in a more complex secretome. The latter may be explained by the fact that strain ΔflbA has a thinner cell wall than the wild type, enabling efficient release of proteins. These results are of interest for improving A. niger as a cell factory.

  19. Resolution of Hanford tanks organic complexant safety issue

    International Nuclear Information System (INIS)

    Kirch, N.W.

    1998-01-01

    The Hanford Site tanks have been assessed for organic complexant reaction hazards. The results have shown that most tanks contain insufficient concentrations of TOC to support a propagating reaction. It has also been shown that in those tanks where the TOC concentration approaches levels of concern, degradation of the organic complexants to less energetic compounds has occurred. The results of the investigations have been documented. The residual organic complexants in the Hanford Site waste tanks do not present a safety concern for long-term storage.

  20. Quantum chemical investigation of levofloxacin-boron complexes: A computational approach

    Science.gov (United States)

    Sayin, Koray; Karakaş, Duran

    2018-04-01

    Quantum chemical calculations are performed on some boron complexes with levofloxacin. The boron complex with fluorine atoms is optimized with three different methods (HF, B3LYP and M062X) and the 6-31+G(d) basis set. The best level is determined to be M062X/6-31+G(d) by comparison of the experimental and calculated results for complex (1). The other complexes are optimized at this level. Structural properties and IR and NMR spectra are examined in detail. Biological activities of the complexes are investigated via quantum chemical descriptors and molecular docking analyses. As a result, the biological activities of complexes (2) and (4) are close to each other and higher than those of the other complexes. Additionally, NLO properties of the complexes are investigated via quantum chemical parameters. Complex (3) is found to be the best candidate for NLO applications.

  1. Generalized Combination Complex Synchronization for Fractional-Order Chaotic Complex Systems

    Directory of Open Access Journals (Sweden)

    Cuimei Jiang

    2015-07-01

    Full Text Available Based on two fractional-order chaotic complex drive systems and one fractional-order chaotic complex response system with different dimensions, we propose generalized combination complex synchronization. In this new synchronization scheme, there are two complex scaling matrices that are non-square. On the basis of the stability theory of fractional-order linear systems, we design a general controller via active control. Additionally, by virtue of the two complex scaling matrices, generalized combination complex synchronization between fractional-order chaotic complex systems and real systems is investigated. Finally, three typical examples are given to demonstrate the effectiveness and feasibility of the scheme.

  2. Adaptive Beamforming Based on Complex Quaternion Processes

    Directory of Open Access Journals (Sweden)

    Jian-wu Tao

    2014-01-01

    Full Text Available Motivated by the benefits of array signal processing in the quaternion domain, we investigate the problem of adaptive beamforming based on complex quaternion processes in this paper. First, a complex quaternion least-mean-squares (CQLMS) algorithm is proposed and its performance is analyzed. The CQLMS algorithm is suitable for adaptive beamforming with a vector-sensor array. The weight-vector update of the CQLMS algorithm is derived based on the complex gradient, leading to lower computational complexity. Because the complex quaternion can exhibit the orthogonal structure of an electromagnetic vector-sensor in a natural way, a complex quaternion model in the time domain is provided for a 3-component vector-sensor array, and the normalized adaptive beamformer using CQLMS is presented. Finally, simulation results are given to validate the performance of the proposed adaptive beamformer.
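
The gradient-descent update behind this family of algorithms can be illustrated with ordinary complex-valued LMS. This is a simplification: plain complex LMS rather than the paper's quaternion-domain CQLMS, and the array size, step size, and noiseless desired signal are assumptions. The weights adapt as w <- w + mu * x * conj(e), with e = d - w^H x.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4                                  # number of sensors (assumed)
w_true = rng.normal(size=M) + 1j * rng.normal(size=M)  # unknown weights

mu = 0.01                              # step size (assumed)
w = np.zeros(M, dtype=complex)
for _ in range(5000):
    x = rng.normal(size=M) + 1j * rng.normal(size=M)   # array snapshot
    d = np.vdot(w_true, x)             # desired output: w_true^H x
    e = d - np.vdot(w, x)              # a-priori error
    w = w + mu * x * np.conj(e)        # complex LMS weight update

print(np.linalg.norm(w - w_true))      # weight error is driven near zero
```

`np.vdot` conjugates its first argument, so `np.vdot(w, x)` computes the Hermitian inner product w^H x directly. The quaternion-domain version replaces this complex arithmetic with complex quaternion algebra but keeps the same stochastic-gradient structure.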

  3. Complexity for survival of livings

    Energy Technology Data Exchange (ETDEWEB)

    Zak, Michail [Jet Propulsion Laboratory, California Institute of Technology, Advance Computing Algorithms and IVHM Group, Pasadena, CA 91109 (United States)]. E-mail: Michail.Zak@jpl.nasa.gov

    2007-05-15

    A connection between the survivability of living systems and the complexity of their behavior is established. New physical paradigms, exchange of information via reflections and chains of abstractions, explaining and describing the progressive evolution of complexity in living (active) systems, are introduced. A biological origin of these paradigms is associated with the recently discovered mirror neuron, which is able to learn by imitation. As a result, an active element possesses self/nonself images and interacts with them, creating the world of mental dynamics. Three fundamental types of complexity of mental dynamics that contribute to survivability are identified. A mathematical model of the corresponding active systems is described by coupled motor-mental dynamics represented by Langevin and Fokker-Planck equations, respectively, while the progressive evolution of complexity is provided by the nonlinear evolution of the probability density. Application of the proposed formalism to modeling a common-sense-based decision-making process is discussed.

  4. Complexity for survival of livings

    International Nuclear Information System (INIS)

    Zak, Michail

    2007-01-01

    A connection between the survivability of living systems and the complexity of their behavior is established. New physical paradigms, exchange of information via reflections and chains of abstractions, explaining and describing the progressive evolution of complexity in living (active) systems, are introduced. A biological origin of these paradigms is associated with the recently discovered mirror neuron, which is able to learn by imitation. As a result, an active element possesses self/nonself images and interacts with them, creating the world of mental dynamics. Three fundamental types of complexity of mental dynamics that contribute to survivability are identified. A mathematical model of the corresponding active systems is described by coupled motor-mental dynamics represented by Langevin and Fokker-Planck equations, respectively, while the progressive evolution of complexity is provided by the nonlinear evolution of the probability density. Application of the proposed formalism to modeling a common-sense-based decision-making process is discussed.

  5. Linear complexity for multidimensional arrays - a numerical invariant

    DEFF Research Database (Denmark)

    Gomez-Perez, Domingo; Høholdt, Tom; Moreno, Oscar

    2015-01-01

    Linear complexity is a measure of how complex a one-dimensional sequence can be. In this paper we extend the concept of linear complexity to multiple dimensions and present a definition that is invariant under well-orderings of the arrays. As a result we find that our new definition for the proce...
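
For the one-dimensional case that the paper generalizes, linear complexity is computable with the Berlekamp-Massey algorithm: it returns the length of the shortest LFSR over GF(2) that generates a given binary sequence. A minimal sketch (illustrative only, not the paper's multidimensional definition):

```python
def linear_complexity(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR for s."""
    n = len(s)
    c = [1] + [0] * n      # current connection polynomial
    b = [1] + [0] * n      # previous connection polynomial
    L, m = 0, -1
    for i in range(n):
        # discrepancy: does the current LFSR predict bit i?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, b, m = i + 1 - L, t, i
    return L

print(linear_complexity([0, 0, 0, 1]))        # -> 4 (maximal: the length)
print(linear_complexity([1, 0, 1, 0, 1, 0]))  # -> 2 (period-2 sequence)
```

A run of zeros followed by a single 1 is maximally complex (no shorter recurrence predicts the final bit), while a strictly periodic sequence has linear complexity at most its period.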

  6. Mobility of organic complexes of radionuclides in soils

    International Nuclear Information System (INIS)

    Swanson, J.L.

    1983-01-01

    Results are presented to illustrate another important aspect of kinetically inert complexes of Ni and Co with respect to radionuclide migration: such complexes can be sorbed by some soils, while only the uncomplexed species are sorbed by others. As shown earlier, when only uncomplexed species are sorbed, the kinetic inertness of the complexes can prevent significant sorption of the radionuclides by soil. Other data provide added evidence that the importance of kinetically inert complexes varies greatly among complexants, as well as among soils. 6 references, 8 figures

  7. Complexity explained

    CERN Document Server

    Erdi, Peter

    2008-01-01

    This book explains why complex systems research is important in understanding the structure, function and dynamics of complex natural and social phenomena. Readers will learn the basic concepts and methods of complex system research.

  8. Computational error and complexity in science and engineering computational error and complexity

    CERN Document Server

    Lakshmikantham, Vangipuram; Chui, Charles K; Chui, Charles K

    2005-01-01

    The book "Computational Error and Complexity in Science and Engineering" addresses all the science and engineering disciplines in which computation occurs. Scientific and engineering computation is the interface between the mathematical model/problem and the real-world application. One needs to obtain good-quality numerical values for any real-world implementation; mathematical symbols alone are of no use to engineers/technologists. The computational complexity of the numerical method used to solve the mathematical model, computed along with the solution, tells us how much computational effort has been spent to achieve that quality of result. Anyone who wants a specified physical problem solved has every right to know the quality of the solution as well as the resources spent on it. The computed error as well as the complexity provide convincing scientific answers to these questions. Specifically, some of the disciplines in which the book w...

  9. Detection of expression quantitative trait Loci in complex mouse crosses: impact and alleviation of data quality and complex population substructure.

    Science.gov (United States)

    Iancu, Ovidiu D; Darakjian, Priscila; Kawane, Sunita; Bottomly, Daniel; Hitzemann, Robert; McWeeney, Shannon

    2012-01-01

    Complex Mus musculus crosses, e.g., heterogeneous stock (HS), provide increased resolution for quantitative trait loci detection. However, increased genetic complexity challenges detection methods, with discordant results due to low data quality or complex genetic architecture. We quantified the impact of these factors across three mouse crosses and two different detection methods, identifying procedures that greatly improve detection quality. Importantly, HS populations have complex genetic architectures not fully captured by the whole-genome kinship matrix, calling for incorporating chromosome-specific relatedness information. We analyzed three increasingly complex crosses, using gene expression levels as quantitative traits. The three crosses were an F(2) intercross, an HS formed by crossing four inbred strains (HS4), and an HS (HS-CC) derived from the eight lines found in the collaborative cross. Brain (striatum) gene expression and genotype data were obtained using the Illumina platform. We found large disparities between methods, with concordance varying as genetic complexity increased; this problem was more acute for probes with distant regulatory elements (trans). A suite of data-filtering steps resulted in substantial increases in reproducibility. Genetic relatedness between samples generated an overabundance of detected eQTLs; an adjustment procedure that includes the kinship matrix attenuates this problem. However, we find that relatedness between individuals is not evenly distributed across the genome; information from distinct chromosomes results in a relatedness structure different from the whole-genome kinship matrix. Shared polymorphisms from distinct chromosomes collectively affect expression levels, confounding eQTL detection. We suggest that considering chromosome-specific relatedness can result in improved eQTL detection.
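
The chromosome-specific relatedness idea can be sketched with a leave-one-chromosome-out (LOCO) kinship computation. This is an illustration only: the data shapes, random genotypes, and the standardized-genotype kinship estimator are assumptions, not the study's pipeline. The point is that dropping one chromosome's markers yields a kinship matrix that differs from the whole-genome one.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ind, n_snp = 20, 500
chrom = rng.integers(1, 4, size=n_snp)          # chromosome label per SNP
geno = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)  # 0/1/2 dosages

def kinship(G):
    # standardize each SNP column, then average outer products across SNPs
    Z = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)
    return Z @ Z.T / Z.shape[1]

K_all = kinship(geno)                           # whole-genome kinship
K_loco = {c: kinship(geno[:, chrom != c]) for c in (1, 2, 3)}

# the LOCO matrices differ from the whole-genome matrix
print(max(np.abs(K_loco[c] - K_all).max() for c in (1, 2, 3)))
```

In mixed-model eQTL mapping, the LOCO matrix for chromosome c would be used when testing markers on c, so that the tested markers do not also enter the relatedness correction.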

  10. FPGA based, modular, configurable controller with fast synchronous optical network

    Energy Technology Data Exchange (ETDEWEB)

    Graczyk, R.; Pozniak, K.T.; Romaniuk, R.S. [Warsaw Univ. of Technology (Poland). Inst. of Electronic Systems

    2006-07-01

    The paper describes a configurable controller equipped with a programmable VLSI FPGA circuit, universal PMC expansion modules, synchronous multi-gigabit optical links, commonly used industrial and computer communication interfaces, Ethernet 100TB, a system of automatic initialization (ACE), etc. The basic functional characteristics of the device are described, and the possibilities of its use in various work modes are presented. The realization of particular blocks of the device is discussed. New hardware-layer solutions that resulted from the realization of this project are also characterized. (orig.)

  11. FPGA based, modular, configurable controller with fast synchronous optical network

    International Nuclear Information System (INIS)

    Graczyk, R.; Pozniak, K.T.; Romaniuk, R.S.

    2006-01-01

    The paper describes a configurable controller equipped with a programmable VLSI FPGA circuit, universal PMC expansion modules, synchronous multi-gigabit optical links, commonly used industrial and computer communication interfaces, Ethernet 100TB, a system of automatic initialization (ACE), etc. The basic functional characteristics of the device are described, and the possibilities of its use in various work modes are presented. The realization of particular blocks of the device is discussed. New hardware-layer solutions that resulted from the realization of this project are also characterized. (orig.)

  12. Quantum Computing

    Indian Academy of Sciences (India)

    performance driven optimization of VLSI ... start-up company at IIT Mumbai. ... 1 The best known algorithms for factorization ... make a measurement the quantum state continues to be ... cally in this way: if there is a source producing identical.

  13. Analogue and Mixed-Signal Integrated Circuits for Space Applications

    CERN Document Server

    2014-01-01

    The purpose of AMICSA 2014 (organised in collaboration between ESA and CERN) is to provide an international forum for the presentation and discussion of recent advances in analogue and mixed-signal VLSI design techniques and technologies for space applications.

  14. Preliminary results from an integrated, multi-parameter, experiment at the Santiaguito lava dome complex, Guatemala

    Science.gov (United States)

    De Angelis, S.; Rietbrock, A.; Lavallée, Y.; Lamb, O. D.; Lamur, A.; Kendrick, J. E.; Hornby, A. J.; von Aulock, F. W.; Chigna, G.

    2016-12-01

    Understanding the complex processes that drive volcanic unrest is crucial to effective risk mitigation. Characterization of these processes, and of the mechanisms of volcanic eruptions, is only possible when high-resolution geophysical and geological observations are available over comparatively long periods of time. In November 2014, the Liverpool Earth Observatory, UK, in collaboration with the Instituto Nacional de Sismologia, Meteorologia e Hidrologia (INSIVUMEH), Guatemala, established a multi-parameter geophysical network at Santiaguito, one of the most active volcanoes in Guatemala. Activity at Santiaguito throughout the past decade, until the summer of 2015, was characterized by nearly continuous lava dome extrusion accompanied by frequent and regular small-to-moderate gas or gas-and-ash explosions. Over the past two years our network has collected a wealth of seismic, acoustic and deformation data, complemented by campaign visual and thermal-infrared measurements and by rock and ash samples. Here we present preliminary results from the analysis of this unique dataset. Using acoustic and thermal data collected during 2014-2015, we were able to assess the volume fractions of ash and gas in the eruptive plumes. The small proportion of ash inferred in the plumes confirms estimates from previous, independent studies and suggests that these events did not involve significant magma fragmentation in the conduit. The results also agree with the suggestion that sacrificial fragmentation along fault zones in the conduit region, due to shear-induced thermal vesiculation, may be at the origin of such events. Finally, starting in the summer of 2015, our experiment captured the transition to a new phase of activity characterized by vigorous vulcanian-style explosions producing large, ash-rich plumes and frequent hazardous pyroclastic flows, as well as the formation of a large summit crater. We present evidence of this transition in the geophysical and geological data, and discuss its

  15. Complex terrain and wind lidars

    Energy Technology Data Exchange (ETDEWEB)

    Bingoel, F.

    2009-08-15

    This thesis presents the results of a PhD study on complex terrain and wind lidars, focusing mostly on hilly and forested areas. Lidars have been used in combination with cups, sonics and vanes to reach the desired vertical measurement heights. Several experiments were performed at complex terrain sites and the measurements were compared with two different flow models: the linearised flow model LINCOM and the specialised forest model SCADIS. With respect to lidar performance in complex terrain, the results showed that horizontal wind speed errors measured by a conically scanning lidar can be of the order of 3-4% in moderately complex terrain and up to 10% in complex terrain. The findings were based on experiments involving collocated lidars and meteorological masts, together with flow calculations over the same terrains. The lidar performance was also simulated with the commercial software WAsP Engineering 2.0 and was well predicted except for some sectors where the terrain is particularly steep. Subsequently, two experiments were performed in forested areas, with measurements recorded at a location deep in the forest and at the forest edge. Both sites were modelled with flow models, and comparison of the measurement data with the flow model outputs showed that the mean wind speed calculated by the LINCOM model was only reliable between 1 and 2 tree heights (h) above the canopy. The SCADIS model reported better correlation with the in-forest measurements up to approximately 6h. At the forest edge, the LINCOM model was used by allocating a slope half-in, half-out of the forest based on the suggestions of previous studies. The optimum slope angle was reported as 17 deg. Thus, a suggestion was made to use WAsP Engineering 2.0 for forest edge modelling with known limitations and the applied method. The SCADIS model worked better than the LINCOM model at the forest edge, but the model reported closer results to the measurements at upwind than at downwind and this should be

  16. Group 4 Metalloporphyrin diolato Complexes and Catalytic Application of Metalloporphyrins and Related Transition Metal Complexes

    Energy Technology Data Exchange (ETDEWEB)

    Du, Guodong [Iowa State Univ., Ames, IA (United States)

    2003-01-01

    In this work, the first examples of group 4 metalloporphyrin 1,2-diolato complexes were synthesized through a number of strategies. In general, treatment of imido metalloporphyrin complexes, (TTP)M=NR, (M = Ti, Zr, Hf), with vicinal diols led to the formation of a series of diolato complexes. Alternatively, the chelating pinacolate complexes could be prepared by metathesis of (TTP)MCl2 (M = Ti, Hf) with disodium pinacolate. These complexes were found to undergo C-C cleavage reactions to produce organic carbonyl compounds. For titanium porphyrins, treatment of a titanium(II) alkyne adduct, (TTP)Ti(η2-PhC≡CPh), with aromatic aldehydes or aryl ketones resulted in reductive coupling of the carbonyl groups to produce the corresponding diolato complexes. Aliphatic aldehydes or ketones were not reactive towards (TTP)Ti(η2-PhC≡CPh). However, these carbonyl compounds could be incorporated into a diolato complex on reaction with a reactive precursor, (TTP)Ti[O(Ph)2C(Ph)2O] to provide unsymmetrical diolato complexes via cross coupling reactions. In addition, an enediolato complex (TTP)Ti(OCPhCPhO) was obtained from the reaction of (TTP)Ti(η2-PhC≡CPh) with benzoin. Titanium porphyrin diolato complexes were found to be intermediates in the (TTP)Ti=O-catalyzed cleavage reactions of vicinal diols, in which atmospheric oxygen was the oxidant. Furthermore, (TTP)Ti=O was capable of catalyzing the oxidation of benzyl alcohol and α-hydroxy ketones to benzaldehyde and α-diketones, respectively. Other high valent metalloporphyrin complexes also can catalyze the oxidative diol cleavage and the benzyl alcohol oxidation reactions with dioxygen. A comparison of Ti(IV) and Sn(IV) porphyrin chemistry was undertaken. While chelated diolato complexes were invariably obtained for titanium porphyrins on treatment with 1,2-diols, the reaction of vicinal diols with tin porphyrins gave a number of products, including mono

  17. Decentralized control of complex systems

    CERN Document Server

    Siljak, Dragoslav D

    2011-01-01

    Complex systems require fast control action in response to local input, and perturbations dictate the use of decentralized information and control structures. This much-cited reference book explores the approaches to synthesizing control laws under decentralized information structure constraints. Starting with a graph-theoretic framework for structural modeling of complex systems, the text presents results related to robust stabilization via decentralized state feedback. Subsequent chapters explore optimization, output feedback, the manipulative power of graphs, overlapping decompositions and t

  18. Determinants of Hospital Casemix Complexity

    Science.gov (United States)

    Becker, Edmund R.; Steinwald, Bruce

    1981-01-01

    Using the Commission on Professional and Hospital Activities' Resource Need Index as a measure of casemix complexity, this paper examines the relative contributions of teaching commitment and other hospital characteristics, hospital service and insurer distributions, and area characteristics to variations in casemix complexity. The empirical estimates indicate that all three types of independent variables have a substantial influence. These results are discussed in light of recent casemix research as well as current policy implications. PMID:6799430

  19. A modeling study of contaminant transport resulting from flooding of Pit 9 at the Radioactive Waste Management Complex, Idaho National Engineering Laboratory

    International Nuclear Information System (INIS)

    Magnuson, S.O.; Sondrup, A.J.

    1992-09-01

    A simulation study was conducted to determine if dissolved-phase transport due to flooding is a viable mechanism for explaining the presence of radionuclides in sedimentary interbeds below the Radioactive Waste Management Complex. In particular, the study focused on 241Am migration due to flooding of Pit 9 in 1969. A kinetically controlled source term model was used to estimate the mass of 241Am that leached as a function of a variable surface infiltration rate. This mass release rate was then used in a numerical simulation of unsaturated flow and transport to estimate the advance, due to flooding, of the 241Am front down towards the 110 ft interbed. The simulation included the effect of fractures by superimposing them onto elements that represented the basalt matrix. For the base case, hydraulic and transport parameters were assigned using the best available data. The advance of the 241Am front due to flooding for this case was minimal, on the order of a few meters, owing to the strong tendency for 241Am to sorb onto both basalts and sediments. In addition to the base case simulation, a parametric sensitivity study was conducted which tested the effect of sorption in the fractures, in the kinetic source term, and in the basalt matrix. Of these, the only case which resulted in significant transport was when there was no sorption in the basalt matrix, indicating that other processes such as transport by radiocolloids or organic complexation may have contributed. However, caution is advised in interpreting these results due to approximations in the numerical method that was used to incorporate fractures into the simulation. The approximations are a result of fracture apertures being significantly smaller than the elements over which they are superimposed. The sensitivity of the 241Am advance to the assumed hydraulic conductivity for the fractures was also tested

  20. Hypoxia targeting copper complexes

    International Nuclear Information System (INIS)

    Dearling, J.L.

    1998-11-01

    selectivity increasing with decreasing reduction potential. A mechanism accounting for the observed results is suggested. A brief survey of the selectivities of other copper complexing ligands (dithiocarbamates, diphosphines and Schiff-bases) is presented, though neither normoxic nor hypoxic selectivity was found. In conclusion a structure-activity relationship exists within this series, and it is possible using these observations to design hypoxia selective copper complexes rationally. (author)

  1. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    Full Text Available In this paper we analyze the complexity of time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capturing of time limits only partially, which causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfactory results, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, the PSD process modeling language is presented, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.

  2. Complexity in electronic negotiation support systems.

    Science.gov (United States)

    Griessmair, Michele; Strunk, Guido; Vetschera, Rudolf; Koeszegi, Sabine T

    2011-10-01

    It is generally acknowledged that the medium influences the way we communicate, and negotiation research directs considerable attention to the impact of different electronic communication modes on the negotiation process and outcomes. Complexity theories offer models and methods that allow the investigation of how patterns and temporal sequences unfold over time in negotiation interactions. By focusing on the dynamic and interactive quality of negotiations as well as the information, choice, and uncertainty contained in the negotiation process, the complexity perspective addresses several issues of central interest in classical negotiation research. In the present study we compare the complexity of the negotiation communication process among synchronous and asynchronous negotiations (IM vs. e-mail) as well as an electronic negotiation support system including a decision support system (DSS). For this purpose, transcripts of 145 negotiations have been coded and analyzed with the Shannon entropy and the grammar complexity. Our results show that negotiating asynchronously via e-mail as well as including a DSS significantly reduces the complexity of the negotiation process. Furthermore, a reduction of the complexity increases the probability of reaching an agreement.
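
    The Shannon entropy measure applied above to coded transcripts can be sketched for a sequence of utterance codes; the category codes below are hypothetical illustrations, not the study's actual coding scheme.

```python
from collections import Counter
from math import log2

def shannon_entropy(sequence):
    """Shannon entropy, in bits per symbol, of a coded event sequence.

    Higher values indicate a less predictable (more complex) process.
    """
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Four equiprobable codes carry the maximum 2 bits/symbol:
print(shannon_entropy(["offer", "counter", "argue", "accept"]))  # 2.0
# A perfectly repetitive exchange carries no information:
print(shannon_entropy(["offer"] * 8))
```

    Comparing this value across e-mail, IM and DSS-supported transcripts is one simple way to quantify how the medium changes process complexity.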

  3. Complexity of the AdS soliton

    Science.gov (United States)

    Reynolds, Alan P.; Ross, Simon F.

    2018-05-01

    We consider the holographic complexity conjectures in the context of the AdS soliton, which is the holographic dual of the ground state of a field theory on a torus with antiperiodic boundary conditions for fermions on one cycle. The complexity is a non-trivial function of the size of the circle with antiperiodic boundary conditions, which sets an IR scale in the dual geometry. We find qualitative differences between the calculations of complexity from spatial volume and action (CV and CA). In the CV calculation, the complexity for antiperiodic boundary conditions is smaller than for periodic, and decreases monotonically with increasing IR scale. In the CA calculation, the complexity for antiperiodic boundary conditions is larger than for periodic, and initially increases with increasing IR scale, eventually decreasing to zero as the IR scale becomes of order the UV cutoff. We compare these results to a simple calculation for free fermions on a lattice, where we find the complexity for antiperiodic boundary conditions is larger than for periodic.

  4. Planning and complexity : Engaging with temporal dynamics, uncertainty and complex adaptive systems

    NARCIS (Netherlands)

    Sengupta, Ulysses; Rauws, Ward S.; de Roo, Gert

    2016-01-01

    The nature of complex systems as a transdisciplinary collection of concepts from physics and economics to sociology and ecology provides an evolving field of inquiry (Laszlo and Krippner, 1998) for urban planning and urban design. As a result, planning theory has assimilated multiple concepts from

  6. Complex dynamical invariants for two-dimensional complex potentials

    Indian Academy of Sciences (India)

    Abstract. Complex dynamical invariants are searched out for two-dimensional complex potentials using the rationalization method within the framework of an extended complex phase space characterized by x = x1 + ip3, y = x2 + ip4, px = p1 + ix3, py = p2 + ix4. It is found that the cubic oscillator and shifted harmonic oscillator ...

  7. Complex Fuzzy Set-Valued Complex Fuzzy Measures and Their Properties

    Science.gov (United States)

    Ma, Shengquan; Li, Shenggang

    2014-01-01

    Let F*(K) be the set of all fuzzy complex numbers. In this paper some classical and measure-theoretical notions are extended to the case of complex fuzzy sets. They are fuzzy complex number-valued distance on F*(K), fuzzy complex number-valued measure on F*(K), and some related notions, such as null-additivity, pseudo-null-additivity, null-subtraction, pseudo-null-subtraction, autocontinuity from above, autocontinuity from below, and autocontinuity of the defined fuzzy complex number-valued measures. Properties of fuzzy complex number-valued measures are studied in detail. PMID:25093202

  8. Neurosurgical implications of Carney complex.

    Science.gov (United States)

    Watson, J C; Stratakis, C A; Bryant-Greenwood, P K; Koch, C A; Kirschner, L S; Nguyen, T; Carney, J A; Oldfield, E H

    2000-03-01

    The authors present their neurosurgical experience with Carney complex. Carney complex, characterized by spotty skin pigmentation, cardiac myxomas, primary pigmented nodular adrenocortical disease, pituitary tumors, and nerve sheath tumors (NSTs), is a recently described, rare, autosomal-dominant familial syndrome that is relatively unknown to neurosurgeons. Neurosurgery is required to treat pituitary adenomas and a rare NST, the psammomatous melanotic schwannoma (PMS), in patients with Carney complex. Cushing's syndrome, a common component of the complex, is caused by primary pigmented nodular adrenocortical disease and is not secondary to an adrenocorticotropic hormone-secreting pituitary adenoma. The authors reviewed 14 cases of Carney complex, five from the literature and nine from their own experience. Of the 14 pituitary adenomas recognized in association with Carney complex, 12 developed growth hormone (GH) hypersecretion (producing gigantism in two patients and acromegaly in 10), and results of immunohistochemical studies in one of the other two were positive for GH. The association of PMSs with Carney complex was established in 1990. Of the reported tumors, 28% were associated with spinal nerve sheaths. The spinal tumors occurred in adults (mean age 32 years, range 18-49 years) who presented with pain and radiculopathy. These NSTs may be malignant (10%) and, as with the cardiac myxomas, are associated with significant rates of morbidity and mortality. Because of the surgical comorbidity associated with cardiac myxoma and/or Cushing's syndrome, recognition of Carney complex has important implications for perisurgical patient management and family screening. Study of the genetics of Carney complex and of the biological abnormalities associated with the tumors may provide insight into the general pathobiological features of pituitary adenomas and NSTs.

  9. Timing-Driven-Testable Convergent Tree Adders

    Directory of Open Access Journals (Sweden)

    Johnnie A. Huang

    2002-01-01

    Full Text Available Carry lookahead adders have, over the years, been implemented in complex arithmetic units due to their regular structure, which leads to efficient VLSI implementation of fast adders. In this paper, timing-driven testability synthesis is first performed on a tree adder. It is shown that the structure of the tree adder produces a high fanout with an imbalanced tree structure, which likely contributes to a racing effect and increases the delay of the circuit. Timing optimization is then realized by reducing the maximum fanout of the adder and by balancing the tree circuit. For a 56-bit testable tree adder, the optimization produces a 6.37% increase in speed of the critical path while contributing only a 2.16% area overhead. Full testability of the circuit is achieved in the optimized adder design.
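
    The carry-lookahead tree structure discussed above can be sketched in software as a parallel-prefix computation over generate/propagate signals. This is a generic Kogge-Stone formulation for illustration only, not the paper's optimized 56-bit design; the log2(width) prefix stages mirror the levels of the hardware tree.

```python
def kogge_stone_add(a: int, b: int, width: int = 56) -> int:
    """Sketch of a parallel-prefix (Kogge-Stone) tree adder.

    Bitwise generate and propagate signals are combined in
    log2(width) prefix stages, after which g holds the carry
    out of every bit position.
    """
    mask = (1 << width) - 1
    g = a & b & mask       # generate: both operand bits are 1
    p = (a ^ b) & mask     # propagate: exactly one operand bit is 1
    dist = 1
    while dist < width:
        # Combine prefix operators at distance `dist` (one tree level).
        g = (g | (p & (g << dist))) & mask
        p = (p & (p << dist)) & mask
        dist <<= 1
    carries = (g << 1) & mask          # carry into each bit position
    return ((a ^ b) ^ carries) & mask  # final sum bits

print(kogge_stone_add(3, 1, width=4))  # 4
```

    In hardware, balancing this tree and limiting the fanout of the prefix nodes is exactly the optimization target described in the abstract.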

  10. The effect of structural design parameters on FPGA-based feed-forward space-time trellis coding-orthogonal frequency division multiplexing channel encoders

    Science.gov (United States)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-08-01

    Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing best implementation under these conditions. The effect of number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.

  11. Smart vision chips: An overview

    Science.gov (United States)

    Koch, Christof

    1994-01-01

    This viewgraph presentation presents four working analog VLSI vision chips: (1) time-derivative retina, (2) zero-crossing chip, (3) resistive fuse, and (4) figure-ground chip; work in progress on computing motion and neuromorphic systems; and conceptual and practical lessons learned.

  12. Productivity and data processing: two essentials for a dynamic company. Proceedings of the spring convention

    Energy Technology Data Exchange (ETDEWEB)

    1983-01-01

    The following topics were dealt with: office automation markets; future developments; software package market; telecommunication for big organizations; DP security; robotics; teletel; EDP management; role of VLSI; peripheral equipment; systems architecture; artificial intelligence; expert systems; warehouse automation; and microcomputer techniques.

  13. Design study of a low-power, low-noise front-end for multianode silicon drift detectors

    International Nuclear Information System (INIS)

    Caponetto, L.; Presti, D. Lo; Randazzo, N.; Russo, G.V.; Leonora, E.; Lo Nigro, L.; Petta, C.; Reito, S.; Sipala, V.

    2005-01-01

    The read-out for Silicon Drift Detectors in the form of a VLSI chip is presented, with a view to applications in High Energy Physics and space experiments. It is characterised by extremely low power dissipation, small noise and size

  14. Switched-capacitor techniques for high-accuracy filter and ADC design

    NARCIS (Netherlands)

    Quinn, P.J.; Roermund, van A.H.M.

    2007-01-01

    Switched capacitor (SC) techniques are well proven to be excellent candidates for implementing critical analogue functions with high accuracy, surpassing other analogue techniques when embedded in mixed-signal CMOS VLSI. Conventional SC circuits are primarily limited in accuracy by a) capacitor

  15. Economic Value Creation in Metro Complexes: Case Study on Sadr Station Complex in Tehran

    Directory of Open Access Journals (Sweden)

    Nima Jafari

    2017-04-01

    Full Text Available The main objective of this research is to examine methods of economic value creation in metro station centers, with a case study of the Sadr Station complex in Tehran. The research implements a descriptive approach, benefiting from the data of a cross-sectional survey collected by the authors. The target population included all urban development scholars, transport academics, capitalists and directors of the station complex, 1,100 people in total. Using random sampling, 285 people were surveyed with a 25-item questionnaire developed by the researchers. The results suggest a priority ordering of value creation areas: collaborative, competitive, private, governmental, and personal. The test results also showed that among the components of economic value creation (corporate, individual, competitive, governmental and private), the observed correlation was significant. According to the obtained results, development of economic value creation in station centers seems necessary.

  16. The complex portal--an encyclopaedia of macromolecular complexes.

    Science.gov (United States)

    Meldal, Birgit H M; Forner-Martinez, Oscar; Costanzo, Maria C; Dana, Jose; Demeter, Janos; Dumousseau, Marine; Dwight, Selina S; Gaulton, Anna; Licata, Luana; Melidoni, Anna N; Ricard-Blum, Sylvie; Roechert, Bernd; Skyzypek, Marek S; Tiwari, Manu; Velankar, Sameer; Wong, Edith D; Hermjakob, Henning; Orchard, Sandra

    2015-01-01

    The IntAct molecular interaction database has created a new, free, open-source, manually curated resource, the Complex Portal (www.ebi.ac.uk/intact/complex), through which protein complexes from major model organisms are being collated and made available for search, viewing and download. It has been built in close collaboration with other bioinformatics services and populated with data from ChEMBL, MatrixDB, PDBe, Reactome and UniProtKB. Each entry contains information about the participating molecules (including small molecules and nucleic acids), their stoichiometry, topology and structural assembly. Complexes are annotated with details about their function, properties and complex-specific Gene Ontology (GO) terms. Consistent nomenclature is used throughout the resource with systematic names, recommended names and a list of synonyms all provided. The use of the Evidence Code Ontology allows us to indicate for which entries direct experimental evidence is available or if the complex has been inferred based on homology or orthology. The data are searchable using standard identifiers, such as UniProt, ChEBI and GO IDs, protein, gene and complex names or synonyms. This reference resource will be maintained and grow to encompass an increasing number of organisms. Input from groups and individuals with specific areas of expertise is welcome. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Cationic Amphiphilic Tris-Cyclometalated Iridium(III) Complexes Induce Cancer Cell Death via Interaction with Ca2+-Calmodulin Complex.

    Science.gov (United States)

    Hisamatsu, Yosuke; Suzuki, Nozomi; Masum, Abdullah-Al; Shibuya, Ai; Abe, Ryo; Sato, Akira; Tanuma, Sei-Ichi; Aoki, Shin

    2017-02-15

    In our previous paper, we reported on the preparation of some cationic amphiphilic Ir complexes (2c, 2d) containing KKGG peptides that induce and detect cell death of Jurkat cells. Mechanistic studies suggest that 2c interacts with anionic molecules and/or membrane receptors on the cell surface to trigger an intracellular Ca2+ response, resulting in the induction of cell death, accompanied by membrane disruption. We have continued the studies of cell death of Jurkat cells induced by 2c and found that xestospongin C, a selective inhibitor of an inositol 1,4,5-trisphosphate receptor located on the endoplasmic reticulum (ER), reduces the cytotoxicity of 2c, suggesting that 2c triggers the release of Ca2+ from the ER, leading to an increase in the concentration of cytosolic Ca2+, thus inducing cell death. Moreover, we synthesized a series of new amphiphilic cationic Ir complexes 5a-c containing photoreactive 3-trifluoromethyl-3-phenyldiazirine (TFPD) groups, in an attempt to identify the target molecules of 2c. Interestingly, it was discovered that a TFPD group functions as a triplet quencher of Ir complexes. It was also found that 5b is useful as a turn-on phosphorescent probe of acidic proteins such as bovine serum albumin (BSA) (pI = 4.7), and their complexation was confirmed by luminescence titrations and SDS-PAGE of photochemical products between them. These successful results allowed us to carry out photoaffinity labeling of the target biomolecules of 5b (2c and analogues thereof) in Jurkat cells. A proteomic analysis of the products obtained by the photoirradiation of 5b with Jurkat cells suggests that the Ca2+-binding protein "calmodulin (CaM)" is one of the target proteins of the Ir complexes. Indeed, 5b was found to interact with the Ca2+-CaM complex, as evidenced by luminescence titrations and the results of photochemical reactions of 5b with CaM in the presence of Ca2+ (SDS-PAGE).
A plausible mechanism for cell death induced by a cationic amphiphilic Ir

  18. CARS spectroscopy of the NaH2 collision complex: The nature of the Na(3 2P)H2 exciplex - ab initio calculations and experimental results

    International Nuclear Information System (INIS)

    Vivie-Riedle, R. de; Hering, P.; Kompa, K.L.

    1990-01-01

    CARS has been used to analyze the rovibronic state distribution of H2 after collision with Na(3 2P). New lines, which do not correspond to H2 lines, are observed in the CARS spectrum. The experiments point to the formation of a Na(3 2P)H2 complex in A 2B2 symmetry. Ab initio calculations of the A 2B2 potential were performed. On this surface the vibrational spectrum of the exciplex is evaluated. The observed lines can be attributed to vibrational transitions in the complex in which combinational modes are involved. The connection of experimental and theoretical results indicates that a collisionally stabilized exciplex molecule is formed during the quenching process. (orig.)

  19. Quantum Kolmogorov complexity and the quantum Turing machine

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, M.

    2007-08-31

    The purpose of this thesis is to give a formal definition of quantum Kolmogorov complexity and rigorous mathematical proofs of its basic properties. Classical Kolmogorov complexity is a well-known and useful measure of randomness for binary strings. In recent years, several different quantum generalizations of Kolmogorov complexity have been proposed. The most natural generalization is due to A. Berthiaume et al. (2001), defining the complexity of a quantum bit (qubit) string as the length of the shortest quantum input for a universal quantum computer that outputs the desired string. Except for slight modifications, it is this definition of quantum Kolmogorov complexity that we study in this thesis. We start by analyzing certain aspects of the underlying quantum Turing machine (QTM) model with more formal rigour than was done previously. Afterwards, we apply these results to quantum Kolmogorov complexity. Our first result is a proof of the existence of a universal QTM which simulates every other QTM for an arbitrary number of time steps and then halts with probability one. In addition, we show that every input that makes a QTM almost halt can be modified to make the universal QTM halt entirely, by adding at most a constant number of qubits. It follows that quantum Kolmogorov complexity has the invariance property, i.e. it depends on the choice of the universal QTM only up to an additive constant. Moreover, the quantum complexity of classical strings agrees with classical complexity, again up to an additive constant. The proofs are based on several analytic estimates. Furthermore, we prove several incompressibility theorems for quantum Kolmogorov complexity. Finally, we show that for ergodic quantum information sources, complexity rate and entropy rate coincide with probability one. The thesis is finished with an outlook on a possible application of quantum Kolmogorov complexity in statistical mechanics. (orig.)
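
    Kolmogorov complexity (classical or quantum) is uncomputable, but for classical strings the length of a lossless compression is a standard computable upper bound, up to an additive constant. The following is a purely illustrative sketch of that proxy, not a construction from the thesis; the example strings are hypothetical.

```python
import random
import zlib

def compressed_length(s: bytes) -> int:
    """Computable upper-bound proxy for the Kolmogorov complexity of s.

    A short compressed form is a short description of s, so highly
    regular strings score low while incompressible strings score
    close to their own length.
    """
    return len(zlib.compress(s, 9))

regular = b"ab" * 500                                   # highly regular, 1000 bytes
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(1000))  # pseudo-random, 1000 bytes
print(compressed_length(regular), compressed_length(noisy))
```

    The incompressibility theorems mentioned above formalize the second observation: most strings admit no description much shorter than themselves.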

  20. Complexity Plots

    KAUST Repository

    Thiyagalingam, Jeyarajan

    2013-06-01

    In this paper, we present a novel visualization technique for assisting the observation and analysis of algorithmic complexity. In comparison with conventional line graphs, this new technique is not sensitive to the units of measurement, allowing multivariate data series of different physical qualities (e.g., time, space and energy) to be juxtaposed together conveniently and consistently. It supports multivariate visualization as well as uncertainty visualization. It enables users to focus on algorithm categorization by complexity classes, while reducing visual impact caused by constants and algorithmic components that are insignificant to complexity analysis. It provides an effective means for observing the algorithmic complexity of programs with a mixture of algorithms and black-box software through visualization. Through two case studies, we demonstrate the effectiveness of complexity plots in complexity analysis in research, education and application. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.
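
    One way to make runtime measurements insensitive to units and constants, in the spirit of the categorization by complexity class described above, is to divide each measurement by a candidate cost model: a roughly flat profile supports that class. A minimal sketch under that assumption; the function names and timing data are hypothetical.

```python
def normalized_profile(ns, times, model):
    """Divide measured costs by a candidate cost model model(n).

    If the returned profile is roughly constant, the measurements are
    consistent with that complexity class regardless of units or
    constant factors.
    """
    return [t / model(n) for n, t in zip(ns, times)]

# Hypothetical timings that actually grow quadratically:
ns = [100, 200, 400, 800]
times = [n * n * 1e-8 for n in ns]

flat = normalized_profile(ns, times, model=lambda n: n * n)  # ~constant
rising = normalized_profile(ns, times, model=lambda n: n)    # still grows
```

    Plotting `flat` and `rising` side by side is the kind of comparison a complexity plot makes directly, without the constants dominating the picture.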

  1. Corrective Measures Study Modeling Results for the Southwest Plume - Burial Ground Complex/Mixed Waste Management Facility

    International Nuclear Information System (INIS)

    Harris, M.K.

    1999-01-01

    Groundwater modeling scenarios were performed to support the Corrective Measures Study and Interim Action Plan for the southwest plume of the Burial Ground Complex/Mixed Waste Management Facility. The modeling scenarios were designed to provide data for an economic analysis of alternatives, and subsequently evaluate the effectiveness of the selected remedial technologies for tritium reduction to Fourmile Branch. Modeling scenarios assessed include no action, vertical barriers, pump, treat, and reinject; and vertical recirculation wells

  2. Epidemic processes in complex networks

    Science.gov (United States)

    Pastor-Satorras, Romualdo; Castellano, Claudio; Van Mieghem, Piet; Vespignani, Alessandro

    2015-07-01

    In recent years the research community has accumulated overwhelming evidence for the emergence of complex and heterogeneous connectivity patterns in a wide range of biological and sociotechnical systems. The complex properties of real-world networks have a profound impact on the behavior of equilibrium and nonequilibrium phenomena occurring in various systems, and the study of epidemic spreading is central to our understanding of the unfolding of dynamical processes in complex networks. The theoretical analysis of epidemic spreading in heterogeneous networks requires the development of novel analytical frameworks, and it has produced results of conceptual and practical relevance. A coherent and comprehensive review of the vast research activity concerning epidemic processes is presented, detailing the successful theoretical approaches as well as making their limits and assumptions clear. Physicists, mathematicians, epidemiologists, computer, and social scientists share a common interest in studying epidemic spreading and rely on similar models for the description of the diffusion of pathogens, knowledge, and innovation. For this reason, while focusing on the main results and the paradigmatic models in infectious disease modeling, the major results concerning generalized social contagion processes are also presented. Finally, the research activity at the forefront in the study of epidemic spreading in coevolving, coupled, and time-varying networks is reported.
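
    A minimal sketch of one paradigmatic model from this literature, a discrete-time SIS (susceptible-infected-susceptible) process on a network; the toy ring topology and parameter values below are hypothetical illustrations, not results from the review.

```python
import random

def sis_step(adj, infected, beta, mu, rng):
    """One discrete-time step of an SIS epidemic on a network.

    Each infected node recovers with probability mu, and transmits to
    each susceptible neighbour with probability beta.
    """
    new_infected = set()
    for node in infected:
        if rng.random() >= mu:      # fails to recover: stays infected
            new_infected.add(node)
        for nbr in adj[node]:       # contagion along network edges
            if nbr not in infected and rng.random() < beta:
                new_infected.add(nbr)
    return new_infected

# 5-node ring network, one initial seed, fixed RNG seed for reproducibility.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
rng = random.Random(42)
state = {0}
for _ in range(10):
    state = sis_step(adj, state, beta=0.5, mu=0.2, rng=rng)
print(sorted(state))
```

    The heterogeneous-network results surveyed above come from running exactly this kind of dynamics on graphs with broad degree distributions instead of a regular ring.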

  3. Moessbauer spectroscopic studies of alkylammonium iron(III) complexes

    International Nuclear Information System (INIS)

    Katada, M.; Kozawa, S.; Nakajima, Y.

    2006-01-01

    Alkylammonium iron(III) complexes, [(n-CnH2n+1)mNH4-m]3[Fe(CN)6], were prepared and studied by Moessbauer spectroscopy, XRD, and DSC. In the complexes with m=2, the temperature dependence of the area intensity of the Moessbauer spectra is correlated with the motion of the alkyl chains. The temperature dependence of the complex with n=4 was linear and smaller than that of the other complexes. In the complex with n=6 in particular, the deviation from linearity was the largest among the complexes observed. This result is attributed to a structural difference of the complex. The complexes with n≥8 adopt a two-dimensional layer structure, and their temperature dependences of the area intensity were similar to each other, which means that the motion of the alkyl chains in these complexes is almost the same. The values of the quadrupole splitting for these complexes were larger than those of the complexes with m=1, indicating that the geometry of the [Fe(CN)6]3- ion is affected by the number of alkyl groups. (author)

  4. Complexity theory in the management of communicable diseases.

    Science.gov (United States)

    Simmons, Mike

    2003-06-01

    In nature, apparently complex behavioural patterns are the result of repetitive simple rules. Complexity science studies the application of these rules and looks for applications in society. Complexity management opportunities have developed from this science and are providing a revolutionary approach in the constantly changing workplace. This article discusses how complexity management techniques have already been applied to communicable disease management in Wales and suggests further developments. A similar approach is recommended to others in the field, while complexity management probably has wider applications in the NHS, not least in relation to the developing managed clinical networks.
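
    The opening claim, that apparently complex patterns arise from repetitive simple rules, can be sketched with an elementary cellular automaton; Rule 30 below is a standard textbook example and is not drawn from the article itself.

```python
def rule30_step(cells):
    """One update of elementary cellular automaton Rule 30.

    New cell = left XOR (centre OR right); boundaries wrap around.
    """
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# Start from a single live cell and iterate the simple local rule;
# the resulting triangle of cells is famously irregular.
row = [0] * 31
row[15] = 1
history = [row]
for _ in range(15):
    row = rule30_step(row)
    history.append(row)
```

    Each cell consults only its two neighbours, yet the aggregate pattern resists any simple description, which is the intuition complexity science carries over to organisational behaviour.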

  5. Superspace de Rham complex and relative cohomology

    Energy Technology Data Exchange (ETDEWEB)

    III, William D. Linch; Randall, Stephen [Center for String and Particle Theory,Department of Physics, University of Maryland at College Park,College Park, MD 20742-4111 (United States)

    2015-09-28

    We investigate the super-de Rham complex of five-dimensional superforms with N=1 supersymmetry. By introducing a free supercommutative algebra of auxiliary variables, we show that this complex is equivalent to the Chevalley-Eilenberg complex of the translation supergroup with values in superfields. Each cocycle of this complex is defined by a Lorentz- and iso-spin-irreducible superfield subject to a set of constraints. Restricting to constant coefficients results in a subcomplex in which components of the cocycles are coboundaries while the constraints on the defining superfields span the cohomology. This reduces the computation of all of the superspace Bianchi identities to a single linear algebra problem, the solution of which implies new features not present in the standard four-dimensional, N=1 complex. These include splitting/joining in the complex and the existence of cocycles that do not correspond to irreducible supermultiplets of closed differential forms. Interpreting the five-dimensional de Rham complex as arising from dimensional reduction from the six-dimensional complex, we find a second five-dimensional complex associated to the relative de Rham complex of the embedding of the latter in the former. This gives rise to a second source of closed differential forms previously attributed to the phenomenon called “Weyl triviality”.

  6. Complexation of buffer constituents with neutral complexation agents: part I. Impact on common buffer properties.

    Science.gov (United States)

    Riesová, Martina; Svobodová, Jana; Tošner, Zdeněk; Beneš, Martin; Tesařová, Eva; Gaš, Bohuslav

    2013-09-17

    The complexation of buffer constituents with a complexation agent present in the solution can very significantly influence the buffer properties, such as pH, ionic strength, or conductivity. These parameters are often crucial for selection of the separation conditions in capillary electrophoresis or high-pressure liquid chromatography (HPLC) and can significantly affect the results of separation, particularly for capillary electrophoresis, as shown in Part II of this paper series (Beneš, M.; Riesová, M.; Svobodová, J.; Tesařová, E.; Dubský, P.; Gaš, B. Anal. Chem. 2013, DOI: 10.1021/ac401381d). In this paper, the impact of complexation of buffer constituents with a neutral complexation agent is demonstrated theoretically as well as experimentally for the model buffer system composed of benzoic acid/LiOH and for common buffers (e.g., CHES/LiOH, TAPS/LiOH, Tricine/LiOH, MOPS/LiOH, MES/LiOH, and acetic acid/LiOH). Cyclodextrins, as common chiral selectors, were used as model complexation agents. We were able not only to demonstrate substantial changes in pH but also to predict the general complexation characteristics of selected compounds. Because of the zwitterionic character of the common buffer constituents, their charged forms complex more strongly with cyclodextrins than the neutral ones do. This was fully proven by NMR measurements. Additionally, the complexation constants of both forms of selected compounds were determined by NMR and by affinity capillary electrophoresis, with very good agreement between the obtained values. These data were advantageously used for the theoretical description of variations in pH depending on the composition and concentration of the buffer. Theoretical predictions were shown to be a useful tool for deriving some general rules and laws for complexing systems.
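
    The mechanism, stronger 1:1 binding of the charged buffer form shifting the acid-base equilibrium and hence the pH, can be sketched as an apparent-pKa calculation; the binding constants and ligand concentration below are made-up illustrative values, not those measured in the paper.

```python
import math

def apparent_pKa(pKa, K_HA, K_A, ligand_conc):
    """Apparent pKa of a weak acid HA in the presence of a neutral ligand L.

    Assumes standard 1:1 binding equilibria with constants K_HA (neutral
    form) and K_A (charged form), in M^-1, at free ligand concentration
    ligand_conc (M):  Ka_app = Ka * (1 + K_A*[L]) / (1 + K_HA*[L]).
    """
    return pKa - math.log10((1 + K_A * ligand_conc) / (1 + K_HA * ligand_conc))

# If the charged form binds more strongly (K_A > K_HA), as reported for
# the zwitterionic buffers with cyclodextrins, the apparent pKa drops
# and the buffer pH shifts accordingly.
shift = apparent_pKa(4.2, K_HA=50.0, K_A=500.0, ligand_conc=0.01) - 4.2
```

    With these illustrative constants the apparent pKa drops by about 0.6 units, the same direction of effect the paper demonstrates for cyclodextrin-containing buffers.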

  7. Vibrations and stability of complex beam systems

    CERN Document Server

    Stojanović, Vladimir

    2015-01-01

    This book reports on solved problems concerning vibrations and stability of complex beam systems. The complexity of a system is considered from two points of view: complexity originating from the nature of the structure, in the case of two or more elastically connected beams; and complexity derived from the dynamic behavior of the system, in the case of a single beam whose simple structure has been damaged. Furthermore, the book describes the analytical derivation of the equations of two or more elastically connected beams, using four different theories (Euler, Rayleigh, Timoshenko and Reddy-Bickford). It also reports on a new, improved p-version of the finite element method for geometrically nonlinear vibrations, which provides more accurate approximations of solutions while also allowing the analysis of geometrically nonlinear vibrations. The book describes the appearance of longitudinal vibrations of damaged clamped-clamped beams as a result of discontinuity (damage). It...

  8. VLSI implementation of a bio-inspired olfactory spiking neural network.

    Science.gov (United States)

    Hsieh, Hung-Yi; Tang, Kea-Tiong

    2012-07-01

    This paper presents a low-power, neuromorphic spiking neural network (SNN) chip that can be integrated in an electronic nose system to classify odor. The proposed SNN takes advantage of sub-threshold oscillation and onset-latency representation to reduce power consumption and chip area, providing a more distinct output for each odor input. The synaptic weights between the mitral and cortical cells are modified according to a spike-timing-dependent plasticity learning rule. During the experiment, the odor data are sampled by a commercial electronic nose (Cyranose 320) and are normalized before training and testing to ensure that the classification result is caused only by learning. Measurement results show that the circuit consumed an average power of approximately 3.6 μW with a 1-V power supply to discriminate odor data. The SNN has either a high or low output response for a given input odor, making it easy to determine whether the circuit has made the correct decision. The measurement results of the SNN chip are compared with some well-known algorithms (a support vector machine and a K-nearest-neighbor program) to demonstrate the classification performance of the proposed SNN chip. The mean testing accuracy is 87.59% for the data used in this paper.
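
    Onset-latency representation, encoding stimulus strength in how soon a neuron first fires, can be sketched with a leaky integrate-and-fire neuron; the parameters below are hypothetical and unrelated to the chip's actual circuit constants.

```python
def first_spike_latency(input_current, threshold=1.0, leak=0.1, dt=0.001, t_max=1.0):
    """First-spike latency of a leaky integrate-and-fire neuron.

    Stronger inputs cross threshold sooner, so stimulus intensity can be
    read off the onset latency. All parameters are illustrative.
    """
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (input_current - leak * v)  # simple Euler integration
        t += dt
        if v >= threshold:
            return t
    return None  # no spike within t_max

# A stronger odor-evoked input fires earlier than a weaker one.
lat_strong = first_spike_latency(5.0)
lat_weak = first_spike_latency(2.0)
```

    Because the code path is the membrane equation itself, the latency ordering follows directly: the stronger input charges the membrane faster and crosses threshold first.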

  9. Complex differential geometry

    CERN Document Server

    Zheng, Fangyang

    2002-01-01

    The theory of complex manifolds overlaps with several branches of mathematics, including differential geometry, algebraic geometry, several complex variables, global analysis, topology, algebraic number theory, and mathematical physics. Complex manifolds provide a rich class of geometric objects, for example the (common) zero locus of any generic set of complex polynomials is always a complex manifold. Yet complex manifolds behave differently than generic smooth manifolds; they are more coherent and fragile. The rich yet restrictive character of complex manifolds makes them a special and interesting object of study. This book is a self-contained graduate textbook that discusses the differential geometric aspects of complex manifolds. The first part contains standard materials from general topology, differentiable manifolds, and basic Riemannian geometry. The second part discusses complex manifolds and analytic varieties, sheaves and holomorphic vector bundles, and gives a brief account of the surface classifi...

  10. Analysis of nanoparticle biomolecule complexes.

    Science.gov (United States)

    Gunnarsson, Stefán B; Bernfur, Katja; Mikkelsen, Anders; Cedervall, Tommy

    2018-03-01

    Nanoparticles exposed to biological fluids adsorb biomolecules on their surface, forming a biomolecular corona. This corona determines, on a molecular level, the interactions and the impact the newly formed complex has on cells and organisms. The corona formation as well as its physiological and toxicological relevance are commonly investigated. However, an acknowledged but rarely addressed problem in many fields of nanobiotechnology is the aggregation and broadened size distribution of nanoparticles following their interactions with the molecules of biological fluids. In blood serum, TiO2 nanoparticles form complexes with a size distribution from 30 nm to more than 500 nm. In this study we have separated these complexes, with good resolution, using preparative centrifugation in a sucrose gradient. Two main apparent size populations were obtained: a fast-sedimenting population of complexes that formed a pellet in the preparative centrifugation tube, and a slow-sedimenting population still suspended in the gradient after centrifugation. Concentration- and surface-area-dependent differences are found in the biomolecular corona between the slow- and fast-sedimenting fractions: there are more immunoglobulins, lipid-binding proteins, and lipid-rich complexes at higher serum concentrations. Sedimentation rate and the biomolecular corona are important factors for evaluating any experiment involving nanoparticle exposure. Our results show that the traditional description of nanoparticles in biological fluids is an oversimplification and that more thorough characterisations are needed.

  11. Lie group model neuromorphic geometric engine for real-time terrain reconstruction from stereoscopic aerial photos

    Science.gov (United States)

    Tsao, Thomas R.; Tsao, Doris

    1997-04-01

    In the 1980s, neurobiologists suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object: the receptive fields of visual neurons undergo real-time transforms in response to motion. When the visual stimulus changes due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field, and this dual transform compensates for the geometric variation in the stimulus. This process can be modelled using a Lie group method. The massive array of affine-parameter-sensing circuits functions as a smart sensor tightly coupled to the passive imaging sensor (retina). The neural geometric engine is a neuromorphic computing device simulating our Lie group model of spatial perception in the primate primary visual cortex. We have developed a computer simulation, experimented on realistic and synthetic image data, and performed preliminary research on using analog VLSI technology to implement the neural geometric engine. We have benchmark-tested the engine on DMA terrain data against their results and have built an analog integrated circuit to verify the computational structure of the engine. When fully implemented on an analog VLSI chip, we will be able to accurately reconstruct a 3D terrain surface in real time from stereoscopic imagery.
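
    The compensation idea, a stimulus transform undone by a dual transform of the representation, can be sketched with the simplest one-parameter Lie group, planar rotation SO(2); this is a minimal illustration of the principle, not the authors' engine.

```python
import math

def rotate(points, theta):
    """Apply a rotation (an element of the one-parameter Lie group SO(2))
    to a list of 2-D points."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# If the stimulus rotates by theta, applying the inverse group element
# (the 'dual transform' of the receptive field) restores the original
# representation exactly, because group elements compose and invert.
stimulus = [(1.0, 0.0), (0.0, 1.0)]
moved = rotate(stimulus, 0.3)
restored = rotate(moved, -0.3)
```

    The same composition/inversion structure is what a Lie group model exploits for general affine motions, with rotation angle replaced by a vector of affine parameters.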

  12. Real-time qualitative reasoning for telerobotic systems

    Science.gov (United States)

    Pin, Francois G.

    1993-01-01

    This paper discusses the sensor-based telerobotic driving of a car in a priori unknown environments using 'human-like' reasoning schemes implemented on custom-designed VLSI fuzzy inferencing boards. These boards use the fuzzy-set-theoretic framework to allow very fast (30 kHz) processing of full sets of information that are expressed in qualitative form using membership functions. The sensor-based and fuzzy inferencing system was incorporated on an outdoor test-bed platform to investigate two control modes for driving a car on the basis of very sparse and imprecise range data. In the first mode, the car navigates fully autonomously to a goal specified by the operator, while in the second mode, the system acts as a telerobotic driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right, speed up, slow down, stop, or back up depending on the obstacles perceived by the sensors. Indoor and outdoor experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Sample results are presented that illustrate the feasibility of developing autonomous navigation modules and robust, safety-enhancing driver's aids for telerobotic systems using the new fuzzy inferencing VLSI hardware and 'human-like' reasoning schemes.
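
    A toy version of such a fuzzy driver's aid can be sketched with triangular membership functions and centroid defuzzification; the membership sets, ranges, and rule weights below are illustrative assumptions, not those of the VLSI boards.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def steering_command(left_range, right_range):
    """Toy fuzzy rule base: steer away from the nearer obstacle.

    Inputs are sonar ranges in metres; output is in [-1, +1]
    (-1 = steer left, +1 = steer right, 0 = go straight).
    """
    near_left = tri(left_range, 0.0, 0.0, 2.0)   # obstacle close on the left
    near_right = tri(right_range, 0.0, 0.0, 2.0)  # obstacle close on the right
    total = near_left + near_right
    if total == 0.0:
        return 0.0  # path clear: go straight
    # Centroid defuzzification over two singleton outputs (+1 right, -1 left).
    return (near_left * 1.0 + near_right * -1.0) / total

cmd = steering_command(left_range=0.5, right_range=3.0)
```

    With an obstacle 0.5 m away on the left and a clear right side, only the "near left" rule fires and the defuzzified command is to steer right, which is the qualitative behaviour the driver's-aid mode provides linguistically.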

  13. Control theory-based regulation of hippocampal CA1 nonlinear dynamics.

    Science.gov (United States)

    Hsiao, Min-Chi; Song, Dong; Berger, Theodore W

    2008-01-01

    We are developing a biomimetic electronic neural prosthesis to replace regions of the hippocampal brain area that have been damaged by disease or insult. Our previous study has shown that the VLSI implementation of a CA3 nonlinear dynamic model can functionally replace the CA3 subregion of the hippocampal slice. As a result, the propagation of temporal patterns of activity from DG-->VLSI-->CA1 reproduces the activity observed experimentally in the biological DG-->CA3-->CA1 circuit. In this project, we incorporate an open-loop controller to optimize the output (CA1) response. Specifically, we seek to optimize the stimulation signal to CA1 using a predictive dentate gyrus (DG)-CA1 nonlinear model (i.e., DG-CA1 trajectory model) and a CA1 input-output model (i.e., CA1 plant model), such that the ultimate CA1 response (i.e., desired output) can be first predicted by the DG-CA1 trajectory model and then transformed to the desired stimulation through the inverse CA1 plant model. Lastly, the desired CA1 output is evoked by the estimated optimal stimulation. This study will be the first stage of formulating an integrated modeling-control strategy for the hippocampal neural prosthetic system.
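
    The open-loop strategy, predict the desired output with a trajectory model and then push it through an inverse plant model to obtain the stimulation, can be sketched with a trivially linear stand-in for the nonlinear CA1 plant; the gain and offset below are hypothetical, not the authors' model.

```python
def plant(u, gain=2.0, offset=0.5):
    """Hypothetical linear stand-in for the CA1 input-output 'plant' model."""
    return gain * u + offset

def inverse_plant(y_desired, gain=2.0, offset=0.5):
    """Invert the plant to find the stimulation u that evokes y_desired."""
    return (y_desired - offset) / gain

# Step 1: the trajectory model predicts the desired CA1 output (here fixed).
y_target = 3.0
# Step 2: the inverse plant model converts it into a stimulation signal.
u = inverse_plant(y_target)
# Step 3: applying that stimulation evokes the desired output (open loop).
y_actual = plant(u)
```

    With a linear plant the inversion is exact; for the nonlinear CA1 model described above, the inverse would instead be estimated numerically, but the three-step open-loop structure is the same.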

  14. 1 million-Q optomechanical microdisk resonators for sensing with very large scale integration

    Science.gov (United States)

    Hermouet, M.; Sansa, M.; Banniard, L.; Fafin, A.; Gely, M.; Allain, P. E.; Santos, E. Gil; Favero, I.; Alava, T.; Jourdan, G.; Hentz, S.

    2018-02-01

    Cavity optomechanics has become a promising route towards the development of ultrasensitive sensors for a wide range of applications including mass, chemical and biological sensing. In this study, we demonstrate the potential of Very Large Scale Integration (VLSI) with state-of-the-art low-loss silicon optomechanical microdisks for sensing applications. We report microdisks exhibiting optical Whispering Gallery Modes (WGM) with quality factors of 1 million, yielding high displacement sensitivity and strong coupling between optical WGMs and in-plane mechanical Radial Breathing Modes (RBM). Such high-Q microdisks with mechanical resonance frequencies in the 10^2 MHz range were fabricated on 200 mm wafers with Variable Shape Electron Beam lithography. Benefiting from ultrasensitive readout, their Brownian motion could be resolved with a good signal-to-noise ratio at ambient pressure, as well as in liquid, despite high-frequency operation and large fluidic damping: the mechanical quality factor dropped from a few 10^3 in air to a few tens in liquid, and the mechanical resonance frequency shifted down by a few percent. Proceeding one step further, we performed all-optical operation of the resonators in air using a pump-probe scheme. Our results show our VLSI process is a viable approach for the next generation of sensors operating in vacuum, gas, or liquid phase.

  15. Modeling Complex Workflow in Molecular Diagnostics

    Science.gov (United States)

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  16. (II) complexes

    African Journals Online (AJOL)

    activities of Schiff base tin (II) complexes. Neelofar1 ... Conclusion: All synthesized Schiff bases and their Tin (II) complexes showed high antimicrobial and ...... Singh HL. Synthesis and characterization of tin (II) complexes of fluorinated Schiff bases derived from amino acids. Spectrochim Acta Part A: Molec Biomolec.

  17. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
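
    A minimal random Boolean network of the kind used here as an idealized complex system can be sketched as follows; the network size, connectivity, and update scheme are illustrative, not the specific models analyzed in the report.

```python
import random

def make_rbn(n, k, rng):
    """Random Boolean network: each node gets k distinct inputs and a
    random truth table over those inputs."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: each node reads its inputs and applies its table."""
    out = []
    for i in range(len(state)):
        idx = 0
        for inp in inputs[i]:
            idx = (idx << 1) | state[inp]
        out.append(tables[i][idx])
    return out

def attractor_length(state, inputs, tables, max_steps=1000):
    """Iterate until a state repeats; return the attractor cycle length,
    or None if no repeat is seen within max_steps."""
    seen = {}
    for t in range(max_steps):
        key = tuple(state)
        if key in seen:
            return t - seen[key]
        seen[key] = t
        state = step(state, inputs, tables)
    return None

rng = random.Random(1)
inputs, tables = make_rbn(n=12, k=2, rng=rng)
cycle = attractor_length([rng.randint(0, 1) for _ in range(12)], inputs, tables)
```

    Measuring how attractors and basin structure respond to perturbed tables or flipped states is one concrete way such reduced models are used to probe robustness before committing to a full entity-based simulation.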

  18. Subunit stoichiometry of the chloroplast photosystem I complex

    International Nuclear Information System (INIS)

    Bruce, B.D.; Malkin, R.

    1988-01-01

    A native photosystem I (PS I) complex and a PS I core complex depleted of antenna subunits have been isolated from the uniformly 14C-labeled aquatic higher plant Lemna. These complexes have been analyzed for their subunit stoichiometry by quantitative sodium dodecyl sulfate-polyacrylamide gel electrophoresis methods. The results for both preparations indicate that one copy of each high-molecular-mass subunit is present per PS I complex and that a single copy of most low-molecular-mass subunits is also present. These results suggest that iron-sulfur center X, an early PS I electron acceptor proposed to bind to the high-molecular-mass subunits, contains a single [4Fe-4S] cluster which is bound to a dimeric structure of high-molecular-mass subunits, each providing 2 cysteine residues to coordinate this cluster.

  19. Applied complex variables for scientists and engineers

    CERN Document Server

    Kwok, Yue Kuen

    2010-01-01

    This introduction to complex variable methods begins by carefully defining complex numbers and analytic functions, and proceeds to give accounts of complex integration, Taylor series, singularities, residues and mappings. Both algebraic and geometric tools are employed to provide the greatest understanding, with many diagrams illustrating the concepts introduced. The emphasis is laid on understanding the use of methods, rather than on rigorous proofs. Throughout the text, many of the important theoretical results in complex function theory are followed by relevant and vivid examples in physical sciences. This second edition now contains 350 stimulating exercises of high quality, with solutions given to many of them. Material has been updated and additional proofs on some of the important theorems in complex function theory are now included, e.g. the Weierstrass–Casorati theorem. The book is highly suitable for students wishing to learn the elements of complex analysis in an applied context.

  20. Workshop on Recommendation in Complex Scenarios (ComplexRec 2017)

    DEFF Research Database (Denmark)

    Bogers, Toine; Koolen, Marijn; Mobasher, Bamshad

    2017-01-01

    Recommendation algorithms for ratings prediction and item ranking have steadily matured during the past decade. However, these state-of-the-art algorithms are typically applied in relatively straightforward scenarios. In reality, recommendation is often a more complex problem: it is usually just a single step in the user's more complex background need. These background needs can often place a variety of constraints on which recommendations are interesting to the user and when they are appropriate. However, relatively little research has been done on these complex recommendation scenarios. The ComplexRec 2017 workshop addressed this by providing an interactive venue for discussing approaches to recommendation in complex scenarios that have no simple one-size-fits-all solution.