WorldWideScience

Sample records for vlsi complexity results

  1. Lithography requirements in complex VLSI device fabrication

    International Nuclear Information System (INIS)

    Wilson, A.D.

    1985-01-01

    Fabrication of complex very large scale integration (VLSI) circuits requires continual advances in lithography to satisfy: decreasing minimum linewidths, larger chip sizes, tighter linewidth and overlay control, increasing topography to linewidth ratios, higher yield demands, increased throughput, harsher device processing, lower lithography cost, and a larger part number set with quick turn-around time. Where optical, electron beam, x-ray, and ion beam lithography can be applied to judiciously satisfy the complex VLSI circuit fabrication requirements is discussed and those areas that are in need of major further advances are addressed. Emphasis will be placed on advanced electron beam and storage ring x-ray lithography

  2. VLSI design

    CERN Document Server

    Basu, D K

    2014-01-01

    Very Large Scale Integrated Circuits (VLSI) design has moved from costly curiosity to an everyday necessity, especially with the proliferated applications of embedded computing devices in communications, entertainment and household gadgets. As a result, more and more knowledge on various aspects of VLSI design technologies is becoming a necessity for the engineering/technology students of various disciplines. With this goal in mind the course material of this book has been designed to cover the various fundamental aspects of VLSI design, such as: categorization and comparison of the various technologies used for VLSI design; basic fabrication processes involved in VLSI design; design of MOS, CMOS and BiCMOS circuits used in VLSI; structured design of VLSI; introduction to VHDL for VLSI design; automated design for placement and routing of VLSI systems; and VLSI testing and testability. The various topics of the book have been discussed lucidly with analysis, when required, examples, figures and adequate analytical and the...

  3. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.

  4. Design Implementation and Testing of a VLSI High Performance ASIC for Extracting the Phase of a Complex Signal

    National Research Council Canada - National Science Library

    Altmeyer, Ronald

    2002-01-01

    This thesis documents the research, circuit design, and simulation testing of a VLSI ASIC which extracts phase angle information from a complex sampled signal using the arctangent relationship: φ = tan⁻¹(Q/I)...
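
    For reference, the phase extraction the abstract describes amounts to a four-quadrant arctangent of the quadrature (Q) component over the in-phase (I) component of each sample. The sketch below is a floating-point software model of that relationship only, not the thesis's fixed-point ASIC datapath:

```python
import math

def phase_of(i_sample: float, q_sample: float) -> float:
    """Four-quadrant phase of a complex sample I + jQ, in radians (-pi, pi]."""
    return math.atan2(q_sample, i_sample)

# Example: a sample at 45 degrees
print(phase_of(1.0, 1.0))  # ~0.7853981633974483 rad (pi/4)
```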

  5. VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2016-06-01

    Full Text Available Convolutional encoding and data decoding are fundamental processes in convolutional error correction. One of the most popular error correction methods in decoding is the Viterbi algorithm. It is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity and configurability. In this research, we specifically propose a VLSI architecture for a configurable and low-complexity design of a hard-decision Viterbi decoding algorithm. The configurable and low-complexity design is achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs, only by changing the trace-back parameter value. Its computational process needs a latency of only N + 2 clock cycles, where N is the number of trace-backs. Its configurability function has been proven for N = 8, N = 16, N = 32 and N = 64. Furthermore, the proposed design was synthesized and evaluated in Xilinx and Altera FPGA target boards for area consumption and speed performance.
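
    For context, a behavioral sketch of hard-decision Viterbi decoding (add-compare-select over a trellis with Hamming-distance branch metrics) is shown below. It assumes the common rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal, an illustrative choice rather than the configuration used in the paper; the proposed configurable trace-back architecture itself is not modeled:

```python
# Minimal hard-decision Viterbi decoder for the rate-1/2, constraint-length-3
# convolutional code with generators (7, 5) octal. Behavioral sketch of the
# algorithm only, not the paper's configurable VLSI architecture.

G = (0b111, 0b101)            # generator polynomials (7, 5) octal
K = 3                         # constraint length
N_STATES = 1 << (K - 1)       # 4 trellis states

def encode(bits):
    state = 0
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state
        out.extend(bin(reg & g).count("1") & 1 for g in G)
        state = reg >> 1
    return out

def viterbi_decode(received):
    INF = float("inf")
    metrics = [0.0] + [INF] * (N_STATES - 1)      # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metrics = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            for b in (0, 1):
                reg = (b << (K - 1)) | state
                expected = [bin(reg & g).count("1") & 1 for g in G]
                nxt = reg >> 1
                branch = sum(e != x for e, x in zip(expected, r))  # Hamming distance
                cand = metrics[state] + branch
                if cand < new_metrics[nxt]:                        # compare-select
                    new_metrics[nxt] = cand
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[metrics.index(min(metrics))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                         # inject one channel bit error
assert viterbi_decode(coded) == msg   # corrected by the decoder
```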

  6. Towards an Analogue Neuromorphic VLSI Instrument for the Sensing of Complex Odours

    Science.gov (United States)

    Ab Aziz, Muhammad Fazli; Harun, Fauzan Khairi Che; Covington, James A.; Gardner, Julian W.

    2011-09-01

    Almost all electronic nose instruments reported today employ pattern recognition algorithms written in software and run on digital processors, e.g. micro-processors, microcontrollers or FPGAs. Conversely, in this paper we describe the analogue VLSI implementation of an electronic nose through the design of a neuromorphic olfactory chip. The modelling, design and fabrication of the chip have already been reported. Here a smart interface has been designed and characterised for this neuromorphic chip. Thus we can demonstrate the functionality of the analogue VLSI neuromorphic chip, producing differing principal neuron firing patterns in response to real sensor response data. Further work is directed towards integrating 9 separate neuromorphic chips to create a large neuronal network to solve more complex olfactory problems.

  7. VLSI design

    CERN Document Server

    Einspruch, Norman G

    1986-01-01

    VLSI Electronics Microstructure Science, Volume 14: VLSI Design presents a comprehensive exposition and assessment of the developments and trends in VLSI (Very Large Scale Integration) electronics. This volume covers topics that range from microscopic aspects of materials behavior and device performance to the comprehension of VLSI in systems applications. Each article is prepared by a recognized authority. The subjects discussed in this book include VLSI processor design methodology; the RISC (Reduced Instruction Set Computer); the VLSI testing program; silicon compilers for VLSI; and special

  8. A new VLSI complex integer multiplier which uses a quadratic-polynomial residue system with Fermat numbers

    Science.gov (United States)

    Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.

    1987-01-01

    A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.
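
    The claim that a complex multiplication costs only two integer multiplications can be illustrated with the quadratic residue mapping over the Fermat prime F4 = 2^16 + 1, where 2^8 is a square root of -1. The sketch below shows the mapping and the component-wise product; it is a behavioral illustration under these assumptions, not the paper's QFNS multiplier or its initialization-free architecture:

```python
# Quadratic residue number system (QRNS) over the Fermat prime F4 = 2**16 + 1:
# because JHAT = 2**8 satisfies JHAT**2 = -1 (mod F4), a Gaussian integer a + jb
# maps to the pair (a + JHAT*b, a - JHAT*b), and complex multiplication becomes
# two independent modular multiplications.

P = 2**16 + 1          # Fermat prime F4
JHAT = 2**8            # square root of -1 mod P (256**2 = 65536 = -1 mod P)

def to_qrns(a, b):
    return ((a + JHAT * b) % P, (a - JHAT * b) % P)

def from_qrns(z1, z2):
    inv2 = pow(2, -1, P)
    inv2j = pow(2 * JHAT, -1, P)
    return ((z1 + z2) * inv2 % P, (z1 - z2) * inv2j % P)

def qrns_complex_mul(x, y):
    # Component-wise product: only TWO modular multiplications.
    return (x[0] * y[0] % P, x[1] * y[1] % P)

# Check against ordinary complex arithmetic (results valid mod P).
a, b, c, d = 123, 456, 789, 321
z1, z2 = qrns_complex_mul(to_qrns(a, b), to_qrns(c, d))
re, im = from_qrns(z1, z2)
assert re == (a * c - b * d) % P and im == (a * d + b * c) % P
```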

  9. VLSI Implementation of a Fixed-Complexity Soft-Output MIMO Detector for High-Speed Wireless

    Directory of Open Access Journals (Sweden)

    Di Wu

    2010-01-01

    Full Text Available This paper presents a low-complexity MIMO symbol detector with close-to-maximum a posteriori performance for the emerging multiantenna enhanced high-speed wireless communications. The VLSI implementation is based on a novel MIMO detection algorithm called Modified Fixed-Complexity Soft-Output (MFCSO) detection, which achieves a good trade-off between performance and implementation cost compared to the referenced prior art. By including a microcode-controlled channel preprocessing unit and a pipelined detection unit, it is flexible enough to cover several different standards and transmission schemes. The flexibility allows adaptive detection to minimize power consumption without degradation in throughput. The VLSI implementation of the detector is presented to show that real-time MIMO symbol detection of 20 MHz bandwidth 3GPP LTE and 10 MHz WiMAX downlink physical channels is achievable at reasonable silicon cost.

  10. VLSI design

    CERN Document Server

    Chandrasetty, Vikram Arkalgud

    2011-01-01

    This book provides insight into the practical design of VLSI circuits. It is aimed at novice VLSI designers and other enthusiasts who would like to understand VLSI design flows. Coverage includes key concepts in CMOS digital design, design of DSP and communication blocks on FPGAs, ASIC front end and physical design, and analog and mixed signal design. The approach is designed to focus on practical implementation of key elements of the VLSI design process, in order to make the topic accessible to novices. The design concepts are demonstrated using software from Mathworks, Xilinx, Mentor Graphic

  11. First results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Anzivino, G.; Horisberger, R.; Hubbeling, L.; Hyams, B.; Parker, S.; Breakstone, A.; Litke, A.M.; Walker, J.T.; Bingefors, N.

    1986-01-01

    A 256-strip silicon detector with 25 μm strip pitch, connected to two 128-channel NMOS VLSI chips (Microplex), has been tested using straight-through tracks from a ruthenium beta source. The readout channels have a pitch of 47.5 μm. A single multiplexed output provides voltages proportional to the integrated charge from each strip. The most probable signal height from the beta traversals is approximately 14 times the rms noise in any single channel. (orig.)

  12. Initial beam test results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Adolphsen, C.; Litke, A.; Schwarz, A.

    1986-01-01

    Silicon detectors with 256 strips, having a pitch of 25 μm, and connected to two 128 channel NMOS VLSI chips each (Microplex), have been tested in relativistic charged particle beams at CERN and at the Stanford Linear Accelerator Center. The readout chips have an input channel pitch of 47.5 μm and a single multiplexed output which provides voltages proportional to the integrated charge from each strip. The most probable signal height from minimum ionizing tracks was 15 times the rms noise in any single channel. Two-track traversals with a separation of 100 μm were cleanly resolved

  13. VLSI electronics microstructure science

    CERN Document Server

    1982-01-01

    VLSI Electronics: Microstructure Science, Volume 4 reviews trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development.This book discusses the silicon-on-insulator for VLSI and VHSIC, X-ray lithography, and transient response of electron transport in GaAs using the Monte Carlo method. The technology and manufacturing of high-density magnetic-bubble memories, metallic superlattices, challenge of education for VLSI, and impact of VLSI on medical signal processing are also elaborated. This text likewise covers the impact of VLSI t

  14. VLSI in medicine

    CERN Document Server

    Einspruch, Norman G

    1989-01-01

    VLSI Electronics Microstructure Science, Volume 17: VLSI in Medicine deals with the more important applications of VLSI in medical devices and instruments.This volume is comprised of 11 chapters. It begins with an article about medical electronics. The following three chapters cover diagnostic imaging, focusing on such medical devices as magnetic resonance imaging, neurometric analyzer, and ultrasound. Chapters 5, 6, and 7 present the impact of VLSI in cardiology. The electrocardiograph, implantable cardiac pacemaker, and the use of VLSI in Holter monitoring are detailed in these chapters. The

  15. VLSI electronics microstructure science

    CERN Document Server

    1981-01-01

    VLSI Electronics: Microstructure Science, Volume 3 evaluates trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development.This book discusses the impact of VLSI on computer architectures; VLSI design and design aid requirements; and design, fabrication, and performance of CCD imagers. The approaches, potential, and progress of ultra-high-speed GaAs VLSI; computer modeling of MOSFETs; and numerical physics of micron-length and submicron-length semiconductor devices are also elaborated. This text likewise covers the optical linewi

  16. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K-ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on the 8-pixel interpolation unit is proposed. It can save 19.7% of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused data path of interpolation, an efficient memory organization, and a reconfigurable pipeline interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation only requires 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel interpolation or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of our proposed VLSI architecture can support the real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
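
    As a point of reference for the interpolation arithmetic, the sketch below applies the 8-tap HEVC half-sample luma filter (coefficients -1, 4, -11, 40, 40, -11, 4, -1 with a 6-bit normalization shift) to one row of pixels. It models the filtering equation only, under the stated coefficient assumption, and none of the paper's 8-pixel-unit datapath, memory organization or pipelining:

```python
# Behavioral sketch of HEVC half-sample luma interpolation (the 8-tap DCT-based
# filter with coefficients [-1, 4, -11, 40, 40, -11, 4, -1] and a 6-bit
# normalization shift), applied to a single pixel row.

HALF_PEL_TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)

def interp_half_pel(row, x):
    """Half-sample value between integer positions x and x+1 of a pixel row.

    Requires pixels row[x-3] .. row[x+4]; edge handling (padding/clamping)
    that a real codec performs is omitted here.
    """
    acc = sum(c * row[x - 3 + k] for k, c in enumerate(HALF_PEL_TAPS))
    val = (acc + 32) >> 6                 # round and normalize by 64
    return max(0, min(255, val))          # clip to 8-bit range

row = [10, 12, 14, 16, 200, 202, 204, 206, 208, 210]
print(interp_half_pel(row, 4))            # -> 224 (ringing near the sharp edge)
```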

  17. Plasma processing for VLSI

    CERN Document Server

    Einspruch, Norman G

    1984-01-01

    VLSI Electronics: Microstructure Science, Volume 8: Plasma Processing for VLSI (Very Large Scale Integration) discusses the utilization of plasmas for general semiconductor processing. It also includes expositions on advanced deposition of materials for metallization, lithographic methods that use plasmas as exposure sources and for multiple resist patterning, and device structures made possible by anisotropic etching.This volume is divided into four sections. It begins with the history of plasma processing, a discussion of some of the early developments and trends for VLSI. The second section

  18. Lithography for VLSI

    CERN Document Server

    Einspruch, Norman G

    1987-01-01

    VLSI Electronics Microstructure Science, Volume 16: Lithography for VLSI treats special topics from each branch of lithography, and also contains general discussion of some lithographic methods.This volume contains 8 chapters that discuss the various aspects of lithography. Chapters 1 and 2 are devoted to optical lithography. Chapter 3 covers electron lithography in general, and Chapter 4 discusses electron resist exposure modeling. Chapter 5 presents the fundamentals of ion-beam lithography. Mask/wafer alignment for x-ray proximity printing and for optical lithography is tackled in Chapter 6.

  19. Low-Complexity Hierarchical Mode Decision Algorithms Targeting VLSI Architecture Design for the H.264/AVC Video Encoder

    Directory of Open Access Journals (Sweden)

    Guilherme Corrêa

    2012-01-01

    Full Text Available In H.264/AVC, the encoding process can occur according to one of the 13 intraframe coding modes or according to one of the 8 available interframe block sizes, besides the SKIP mode. In the Joint Model reference software, the choice of the best mode is performed through exhaustive executions of the entire encoding process, which significantly increases the encoder's computational complexity and sometimes even forbids its use in real-time applications. Considering this context, this work proposes a set of heuristic algorithms targeting hardware architectures that lead to earlier selection of one encoding mode. The number of repetitions of the encoding process is reduced by a factor of 47, at the cost of a relatively small loss in compression performance. When compared to other works, the fast hierarchical mode decision results are considerably more satisfactory in terms of computational complexity reduction, quality, and bit rate. The low-complexity mode decision architecture proposed is thus a very good option for real-time coding of high-resolution videos. The solution is especially interesting for embedded and mobile applications with support for multimedia systems, since it yields good compression rates and image quality with a very high reduction in the encoder complexity.
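
    The general idea of a hierarchical, early-terminating mode decision can be sketched as follows. The mode ordering, thresholds and the rd_cost callback are illustrative placeholders, not the heuristics proposed in the paper:

```python
# Illustrative hierarchical H.264/AVC mode decision: cheaper modes are tried
# first and the search stops early when a mode's rate-distortion (RD) cost
# falls below a threshold, instead of exhaustively encoding with every mode.

def decide_mode(mb, rd_cost, skip_threshold, inter_threshold):
    """Return (mode, cost) for one macroblock `mb` using early termination."""
    cost = rd_cost(mb, "SKIP")
    if cost < skip_threshold:                       # 1) try SKIP first
        return "SKIP", cost
    best = ("SKIP", cost)
    for mode in ("INTER_16x16", "INTER_16x8", "INTER_8x16", "INTER_8x8"):
        cost = rd_cost(mb, mode)                    # 2) large to small partitions
        if cost < best[1]:
            best = (mode, cost)
        if cost < inter_threshold:                  # good enough: stop descending
            return best
    for mode in ("INTRA_16x16", "INTRA_4x4"):       # 3) intra as a fallback
        cost = rd_cost(mb, mode)
        if cost < best[1]:
            best = (mode, cost)
    return best
```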

  20. Parallel VLSI Architecture

    Science.gov (United States)

    Truong, T. K.; Reed, I.; Yeh, C.; Shao, H.

    1985-01-01

    The Fermat number transform convolves two digital data sequences. In very-large-scale integration (VLSI) applications, such as image and radar signal processing, X-ray reconstruction, and spectrum shaping, linear convolution of two digital data sequences of arbitrary lengths is accomplished using the Fermat number transform (FNT).
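
    A minimal software illustration of convolution by a Fermat number transform is given below: it computes a length-32 cyclic convolution modulo F4 = 2^16 + 1 using 2 as the root of unity (so hardware twiddle multiplications reduce to shifts). Direct O(N^2) transforms are used for clarity; this is a sketch of the transform idea, not the parallel VLSI architecture of the abstract:

```python
# Cyclic convolution via a Fermat number transform (FNT) modulo F4 = 2**16 + 1,
# where 2 is a root of unity of order 32. A fast/VLSI implementation would use
# a butterfly (FFT-like) structure instead of these direct transforms.

P = 2**16 + 1                      # Fermat prime F4
N = 32                             # transform length (order of 2 mod P)
OMEGA = 2                          # root of unity: 2**32 = 1 (mod P)

def fnt(x, root):
    return [sum(v * pow(root, n * k, P) for n, v in enumerate(x)) % P
            for k in range(N)]

def cyclic_convolution(a, b):
    A = fnt(a, OMEGA)
    B = fnt(b, OMEGA)
    C = [(u * v) % P for u, v in zip(A, B)]          # pointwise product
    inv_root = pow(OMEGA, -1, P)
    inv_n = pow(N, -1, P)
    return [(c * inv_n) % P for c in fnt(C, inv_root)]

# Check against a direct cyclic convolution (exact while sums stay below P).
import random
a = [random.randrange(16) for _ in range(N)]
b = [random.randrange(16) for _ in range(N)]
direct = [sum(a[m] * b[(n - m) % N] for m in range(N)) % P for n in range(N)]
assert cyclic_convolution(a, b) == direct
```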

  1. Las Vegas is better than determinism in VLSI and distributed computing

    DEFF Research Database (Denmark)

    Mehlhorn, Kurt; Schmidt, Erik Meineche

    1982-01-01

    In this paper we describe a new method for proving lower bounds on the complexity of VLSI - computations and more generally distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (ac...

  2. The VLSI handbook

    CERN Document Server

    Chen, Wai-Kai

    2007-01-01

    Written by a stellar international panel of expert contributors, this handbook remains the most up-to-date, reliable, and comprehensive source for real answers to practical problems. In addition to updated information in most chapters, this edition features several heavily revised and completely rewritten chapters, new chapters on such topics as CMOS fabrication and high-speed circuit design, heavily revised sections on testing of digital systems and design languages, and two entirely new sections on low-power electronics and VLSI signal processing. An updated compendium of references and othe

  3. UW VLSI chip tester

    Science.gov (United States)

    McKenzie, Neil

    1989-12-01

    We present a design for a low-cost, functional VLSI chip tester. It is based on the Apple Macintosh II personal computer. It tests chips that have up to 128 pins. All pin drivers of the tester are bidirectional; each pin is programmed independently as an input or an output. The tester can test both static and dynamic chips. Rudimentary speed testing is provided. Chips are tested by executing C programs written by the user. A software library is provided for program development. Tests run under both the Mac Operating System and A/UX. The design is implemented using Xilinx Logic Cell Arrays. Price/performance tradeoffs are discussed.

  4. VLSI Technology for Cognitive Radio

    Science.gov (United States)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks of cognitive radio is the efficiency of the spectrum sensing scheme to overcome the spectrum scarcity problem. The popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and doesn't require any previous information related to the signal. We propose one such approach, which is an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensing scheme. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method produces a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
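
    As background, the energy detection scheme mentioned in the abstract reduces to comparing the average sample energy against a noise-dependent threshold. The sketch below is a generic floating-point model with a placeholder threshold rule, not the optimised reduced-filter VLSI structure proposed here:

```python
# Generic energy-detection spectrum sensing sketch: average |x[n]|^2 over a
# sensing window and declare the channel occupied when the statistic exceeds
# a threshold derived from the noise power.

import numpy as np

def energy_detect(samples, noise_power, threshold_factor=2.0):
    """Return (occupied, statistic): True if a primary signal is declared."""
    statistic = np.mean(np.abs(samples) ** 2)
    threshold = threshold_factor * noise_power   # placeholder threshold rule
    return statistic > threshold, statistic

rng = np.random.default_rng(0)
noise = (rng.standard_normal(1024) + 1j * rng.standard_normal(1024)) / np.sqrt(2)
tone = 1.5 * np.exp(2j * np.pi * 0.05 * np.arange(1024))
print(energy_detect(noise, noise_power=1.0))          # expected: (False, ~1.0)
print(energy_detect(noise + tone, noise_power=1.0))   # expected: (True, ~3.25)
```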

  5. VLSI signal processing technology

    CERN Document Server

    Swartzlander, Earl

    1994-01-01

    This book is the first in a set of forthcoming books focussed on state-of-the-art development in the VLSI Signal Processing area. It is a response to the tremendous research activities taking place in that field. These activities have been driven by two factors: the dramatic increase in demand for high speed signal processing, especially in consumer electronics, and the evolving microelectronic technologies. The available technology has always been one of the main factors in determining algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on: • Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in...

  6. Nano lasers in photonic VLSI

    NARCIS (Netherlands)

    Hill, M.T.; Oei, Y.S.; Smit, M.K.

    2007-01-01

    We examine the use of micro and nano lasers to form digital photonic VLSI building blocks. Problems such as isolation and cascading of building blocks are addressed, and the potential of future nano lasers explored.

  7. Multi-valued LSI/VLSI logic design

    Science.gov (United States)

    Santrakul, K.

    A procedure for synthesizing any large complex logic system, such as LSI and VLSI integrated circuits, is described. This scheme uses Multi-Valued Multiplexers (MVMUX) as the basic building blocks and the tree as the structure of the circuit realization. Simple built-in test circuits included in the network (the main circuit) provide thorough functional checking of the network at any time. In brief, four major contributions are made: (1) a multi-valued Algorithmic State Machine (ASM) chart for describing LSI/VLSI behavior; (2) a tree-structured multi-valued multiplexer network which can be obtained directly from an ASM chart; (3) a heuristic tree-structured synthesis method for realizing any combinational logic with minimal or nearly-minimal MVMUX; and (4) a hierarchical design of LSI/VLSI with built-in parallel testing capability.

  8. Complexity Results in Epistemic Planning

    DEFF Research Database (Denmark)

    Bolander, Thomas; Jensen, Martin Holm; Schwarzentruber, Francois

    2015-01-01

    Epistemic planning is a very expressive framework that extends automated planning by the incorporation of dynamic epistemic logic (DEL). We provide complexity results on the plan existence problem for multi-agent planning tasks, focusing on purely epistemic actions with propositional preconditions...

  9. VLSI implementations for image communications

    CERN Document Server

    Pirsch, P

    1993-01-01

    The past few years have seen a rapid growth in image processing and image communication technologies. New video services and multimedia applications are continuously being designed. Essential for all these applications are image and video compression techniques. The purpose of this book is to report on recent advances in VLSI architectures and their implementation for video signal processing applications with emphasis on video coding for bit rate reduction. Efficient VLSI implementation for video signal processing spans a broad range of disciplines involving algorithms, architectures, circuits

  10. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  11. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  12. High performance VLSI telemetry data systems

    Science.gov (United States)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground based telemetry acquisition systems well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements, over the last five years, has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASIC's) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence, and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board level functional component, to integrated telemetry data system.

  13. VLSI Architecture and Design

    OpenAIRE

    Johnsson, Lennart

    1980-01-01

    Integrated circuit technology is rapidly approaching a state where feature sizes of one micron or less are tractable. Chip sizes are increasing slowly. These two developments result in considerably increased complexity in chip design. The physical characteristics of integrated circuit technology are also changing. The cost of communication will be dominant, making new architectures and algorithms both feasible and desirable. A large number of processors on a single chip will be possible....

  14. Application of evolutionary algorithms for multi-objective optimization in VLSI and embedded systems

    CERN Document Server

    2015-01-01

    This book describes how evolutionary algorithms (EA), including genetic algorithms (GA) and particle swarm optimization (PSO) can be utilized for solving multi-objective optimization problems in the area of embedded and VLSI system design. Many complex engineering optimization problems can be modelled as multi-objective formulations. This book provides an introduction to multi-objective optimization using meta-heuristic algorithms, GA and PSO, and how they can be applied to problems like hardware/software partitioning in embedded systems, circuit partitioning in VLSI, design of operational amplifiers in analog VLSI, design space exploration in high-level synthesis, delay fault testing in VLSI testing, and scheduling in heterogeneous distributed systems. It is shown how, in each case, the various aspects of the EA, namely its representation, and operators like crossover, mutation, etc. can be separately formulated to solve these problems. This book is intended for design engineers and researchers in the field ...

  15. Artificial immune system algorithm in VLSI circuit configuration

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving constraint optimization, anomaly detection, and pattern recognition problems. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compared the artificial immune system (AIS) algorithm (HNN-3SATAIS) with the brute force algorithm incorporated with a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as a platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.
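
    To illustrate the clonal selection principle the abstract emphasizes, the sketch below runs a generic clonal-selection loop on a toy 3-SAT instance (affinity = number of satisfied clauses, hypermutation rate shrinking with affinity). The instance, population sizes and mutation rule are made up for illustration; the paper's HNN-3SATAIS hybrid with a Hopfield network is not modeled:

```python
# Generic clonal-selection (artificial immune system) sketch for a toy 3-SAT
# instance: the best antibodies are cloned and hypermutated, with mutation
# rate decreasing as affinity (satisfied-clause count) grows.

import random

CLAUSES = [(1, -2, 3), (-1, 2, 4), (2, -3, -4), (-2, 3, 4)]   # toy instance
N_VARS = 4

def affinity(assign):
    return sum(any(assign[abs(l) - 1] == (l > 0) for l in cl) for cl in CLAUSES)

def mutate(assign, rate):
    return [not v if random.random() < rate else v for v in assign]

def clonal_selection(pop_size=20, clones=5, generations=100):
    pop = [[random.random() < 0.5 for _ in range(N_VARS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=affinity, reverse=True)
        if affinity(pop[0]) == len(CLAUSES):
            return pop[0]                               # all clauses satisfied
        best = pop[: pop_size // 2]
        offspring = []
        for ab in best:
            rate = 1.0 - affinity(ab) / len(CLAUSES)    # hypermutation
            offspring += [mutate(ab, max(rate, 0.05)) for _ in range(clones)]
        pop = sorted(best + offspring, key=affinity, reverse=True)[:pop_size]
    return max(pop, key=affinity)

solution = clonal_selection()
print(solution, affinity(solution), "of", len(CLAUSES), "clauses satisfied")
```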

  16. DPL/Daedalus design environment (for VLSI)

    Energy Technology Data Exchange (ETDEWEB)

    Batali, J; Mayle, N; Shrobe, H; Sussman, G; Weise, D

    1981-01-01

    The DPL/Daedalus design environment is an interactive VLSI design system implemented at the MIT Artificial Intelligence Laboratory. The system consists of several components: a layout language called DPL (for design procedure language); an interactive graphics facility (Daedalus); and several special purpose design procedures for constructing complex artifacts such as PLAs and microprocessor data paths. Coordinating all of these is a generalized property list data base which contains both the data representing circuits and the procedures for constructing them. The authors first review the nature of the data base and then turn to DPL and Daedalus, the two most common ways of entering information into the data base. The next two sections review the specialized procedures for constructing PLAs and data paths; the final section describes a tool for hierarchical node extraction. 5 references.

  17. Using Software Technology to Specify Abstract Interfaces in VLSI Design.

    Science.gov (United States)

    1985-01-01

    with the complexity levels inherent in VLSI design, in that they can capitalize on their foundations in discrete mathematics and the theory of...basis, rather than globally. Such a partitioning of module semantics makes the specification easier to construct and verify intellectually; it also...access function definitions. A standard language improves executability characteristics by capitalizing on portable, optimized system software developed

  18. VLSI structures for track finding

    International Nuclear Information System (INIS)

    Dell'Orso, M.

    1989-01-01

    We discuss the architecture of a device based on the concept of associative memory designed to solve the track finding problem, typical of high energy physics experiments, in a time span of a few microseconds even for very high multiplicity events. This "machine" is implemented as a large array of custom VLSI chips. All the chips are identical and each of them stores a number of "patterns". All the patterns in all the chips are compared in parallel to the data coming from the detector while the detector is being read out. (orig.)
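
    The associative-memory matching idea can be summarized behaviorally as follows; the patterns and hit format are invented for illustration, and the massive hardware parallelism of the chip array is of course replaced here by a plain loop:

```python
# Behavioral sketch of the associative-memory idea: every stored pattern is
# compared against the incoming detector hits, and patterns whose required
# hits are all present fire. In the hardware this comparison happens in
# parallel while the detector is being read out.

# Each pattern lists one expected hit (layer, strip) per detector layer.
PATTERNS = {
    0: [(0, 12), (1, 13), (2, 14), (3, 15)],   # a straight-ish track
    1: [(0, 12), (1, 14), (2, 16), (3, 18)],   # a more inclined track
}

def matched_patterns(hits):
    hit_set = set(hits)
    return [pid for pid, pattern in PATTERNS.items()
            if all(h in hit_set for h in pattern)]

event_hits = [(0, 12), (1, 13), (2, 14), (3, 15), (2, 40)]   # noise hit included
print(matched_patterns(event_hits))    # -> [0]
```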

  19. The test of VLSI circuits

    Science.gov (United States)

    Baviere, Ph.

    Tests which have proven effective for evaluating VLSI circuits for space applications are described. It is recommended that circuits be examined after each manufacturing step to gain fast feedback on inadequacies in the production system. Data from failure modes which occur during operational lifetimes of circuits also permit redefinition of the manufacturing and quality control process to eliminate the defects identified. Other tests include determination of the operational envelope of the circuits, examination of the circuit response to controlled inputs, and the performance and functional speeds of ROM and RAM memories. Finally, it is desirable that all new circuits be designed with testing in mind.

  20. Multi-net optimization of VLSI interconnect

    CERN Document Server

    Moiseev, Konstantin; Wimer, Shmuel

    2015-01-01

    This book covers layout design and layout migration methodologies for optimizing multi-net wire structures in advanced VLSI interconnects. Scaling-dependent models for interconnect power, interconnect delay and crosstalk noise are covered in depth, and several design optimization problems are addressed, such as minimization of interconnect power under delay constraints, or design for minimal delay in wire bundles within a given routing area. A handy reference or a guide for design methodologies and layout automation techniques, this book provides a foundation for physical design challenges of interconnect in advanced integrated circuits.  • Describes the evolution of interconnect scaling and provides new techniques for layout migration and optimization, focusing on multi-net optimization; • Presents research results that provide a level of design optimization which does not exist in commercially-available design automation software tools; • Includes mathematical properties and conditions for optimal...

  1. Development methods for VLSI-processors

    International Nuclear Information System (INIS)

    Horninger, K.; Sandweg, G.

    1982-01-01

    The aim of this project, which was originally planned for 3 years, was the development of modern system and circuit concepts for VLSI processors having a 32 bit wide data path. The result of this first year's work is the concept of a general purpose processor. This processor is not only logically but also physically (on the chip) divided into four functional units: a microprogrammable instruction unit, an execution unit in slice technique, a fully associative cache memory and an I/O unit. For the ALU of the execution unit, circuits in PLA and slice techniques have been realized. On the basis of regularity, area consumption and achievable performance, the slice technique has been preferred. The designs utilize self-testing circuitry. (orig.) [de

  2. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  3. VLSI Design of Trusted Virtual Sensors

    Directory of Open Access Journals (Sweden)

    Macarena C. Martínez-Rodríguez

    2018-01-01

    Full Text Available This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).
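
    A minimal sketch of how a PieceWise-Affine hyper-Rectangular (PWAR) model produces a virtual measurement is shown below: the input domain is partitioned into hyper-rectangles, each holding its own affine model. The regions and coefficients are placeholders, and the AEGIS encryption and SRAM PUF layers of the trusted sensor are not modeled:

```python
# Minimal PWAR virtual-sensor sketch: select the active hyper-rectangle for
# the input vector and evaluate that region's affine model y = w.x + b.

REGIONS = [
    # (lower corner, upper corner, weights, bias) for a 2-input sensor
    ((0.0, 0.0), (1.0, 1.0), (0.5, 1.5), 0.1),
    ((1.0, 0.0), (2.0, 1.0), (0.8, 1.2), -0.2),
]

def pwar_estimate(x):
    for lo, hi, w, b in REGIONS:
        if all(l <= xi <= h for xi, l, h in zip(x, lo, hi)):
            return sum(wi * xi for wi, xi in zip(w, x)) + b
    raise ValueError("input outside the modeled domain")

print(pwar_estimate((0.4, 0.7)))   # -> ~1.35
print(pwar_estimate((1.5, 0.2)))   # -> ~1.24
```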

  4. VLSI Design of Trusted Virtual Sensors.

    Science.gov (United States)

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).

  5. Surface and interface effects in VLSI

    CERN Document Server

    Einspruch, Norman G

    1985-01-01

    VLSI Electronics Microstructure Science, Volume 10: Surface and Interface Effects in VLSI provides the advances made in the science of semiconductor surface and interface as they relate to electronics. This volume aims to provide a better understanding and control of surface and interface related properties. The book begins with an introductory chapter on the intimate link between interfaces and devices. The book is then divided into two parts. The first part covers the chemical and geometric structures of prototypical VLSI interfaces. Subjects detailed include, the technologically most import

  6. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the constructed codes' performance is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  7. Drift chamber tracking with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-10-01

    We have tested a commercial analog VLSI neural network chip for finding in real time the intercept and slope of charged particles traversing a drift chamber. Voltages proportional to the drift times were input to the Intel ETANN chip and the outputs were recorded and later compared off line to conventional track fits. We will discuss the chamber and test setup, the chip specifications, and results of recent tests. We'll briefly discuss possible applications in high energy physics detector triggers

  8. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    International Nuclear Information System (INIS)

    Jian Haifang; Shi Yin

    2009-01-01

    The K-best detector is considered as a promising technique in the MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can approximately reduce fundamental operations by 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, lower the demand on the hardware resource significantly. Simulation results prove that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.
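
    For background, the sketch below shows a generic K-best breadth-first detector for a real-valued triangularized system: each surviving path is expanded over the constellation and only the K lowest-metric paths are kept per layer. The constellation and channel are illustrative, and the paper's partial-branch expansion scheme and pipelined distributed sorter are not modeled:

```python
# Generic K-best breadth-first detection sketch for y = R*s + n with an
# upper-triangular R (post-QR): layers are detected from the last row upward,
# keeping only the K partial paths with the smallest accumulated distance.

import numpy as np

CONSTELLATION = np.array([-3, -1, 1, 3])     # real PAM alphabet per dimension

def k_best_detect(R, y, K=4):
    n = R.shape[0]
    paths = [([], 0.0)]                       # (symbols from layer n-1 down, metric)
    for i in range(n - 1, -1, -1):
        candidates = []
        for syms, metric in paths:
            prev = np.array(syms[::-1])       # symbols for layers i+1 .. n-1
            interference = R[i, i + 1:] @ prev if len(prev) else 0.0
            for s in CONSTELLATION:
                d = abs(y[i] - interference - R[i, i] * s) ** 2
                candidates.append((syms + [s], metric + d))
        paths = sorted(candidates, key=lambda c: c[1])[:K]   # keep K best
    best = paths[0][0][::-1]                  # reorder as s[0..n-1]
    return np.array(best)

rng = np.random.default_rng(1)
n = 4
R = np.triu(rng.standard_normal((n, n)) + np.eye(n) * 3)
s = rng.choice(CONSTELLATION, n)
y = R @ s + 0.05 * rng.standard_normal(n)
print("sent:", s, "detected:", k_best_detect(R, y, K=4))
```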

  9. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    Energy Technology Data Exchange (ETDEWEB)

    Jian Haifang; Shi Yin, E-mail: jhf@semi.ac.c [Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China)

    2009-07-15

    The K-best detector is considered as a promising technique in the MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can approximately reduce fundamental operations by 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, lower the demand on the hardware resource significantly. Simulation results prove that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.

  10. VLSI Design with Alliance Free CAD Tools: an Implementation Example

    Directory of Open Access Journals (Sweden)

    Chávez-Bracamontes Ramón

    2015-07-01

    Full Text Available This paper presents the methodology used for a digital integrated circuit design that implements the communication protocol known as Serial Peripheral Interface, using the Alliance CAD System. The aim of this paper is to show how the work of VLSI design can be done by graduate and undergraduate students with minimal resources and experience. The physical design was sent to be fabricated using the CMOS AMI C5 process, which features a 0.5 micrometer transistor size, sponsored by the MOSIS Educational Program. Tests were made on a platform that transfers data from inertial sensor measurements to the designed SPI chip, which in turn sends the data back on a parallel bus to a common microcontroller. The results show the efficiency of the employed methodology in VLSI design, as well as the feasibility of IC manufacturing from school projects that have insufficient or no source of funding.

  11. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  12. VLSI scaling methods and low power CMOS buffer circuit

    International Nuclear Information System (INIS)

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

    Device scaling is an important part of very large scale integration (VLSI) design, sustaining the success of the VLSI industry by yielding denser and faster integration of devices. As the technology node moves towards the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must keep a balance between power dissipation and circuit performance while scaling the devices. In this paper, different scaling methods are studied first. These scaling methods are used to identify their effects on the power dissipation and propagation delay of the CMOS buffer circuit. For mitigating the power dissipation in scaled devices, we have proposed a reliable leakage reduction low power transmission gate (LPTG) approach and tested it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are taken with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to prove the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which improves circuit reliability. (semiconductor integrated circuits)

  13. Ant System-Corner Insertion Sequence: An Efficient VLSI Hard Module Placer

    Directory of Open Access Journals (Sweden)

    HOO, C.-S.

    2013-02-01

    Full Text Available Placement is important in VLSI physical design as it determines the time-to-market and the chip's reliability. In this paper, a new floorplan representation which couples with Ant System, namely Corner Insertion Sequence (CIS), is proposed. Though CIS's search complexity is smaller than that of the state-of-the-art representation Corner Sequence (CS), CIS adopts a preset boundary on the placement and hence achieves a search bound similar to CS. This enables previously unutilized corner edges to become viable. Also, the elimination of the redundancy of the CS representation leads to a lower search complexity for CIS. Experimental results on Microelectronics Center of North Carolina (MCNC) hard block benchmark circuits show that the proposed algorithm performs comparably in terms of area yet is at least two times faster than CS.

  14. Fast-prototyping of VLSI

    International Nuclear Information System (INIS)

    Saucier, G.; Read, E.

    1987-01-01

    Fast-prototyping will be a reality in the very near future if both straightforward design methods and fast manufacturing facilities are available. This book focuses, first, on the motivation for fast-prototyping. Economic aspects and market considerations are analysed by European and Japanese companies. In the second chapter, new design methods are identified, mainly for full custom circuits. Of course, silicon compilers play a key role and the introduction of artificial intelligence techniques sheds new light on the subject. At present, fast-prototyping on gate arrays or on standard cells is the most conventional technique and the third chapter updates the state of the art in this area. The fourth chapter concentrates specifically on e-beam direct-writing for submicron IC technologies. In the fifth chapter, a strategic point in fast-prototyping, namely the test problem, is addressed. The design for testability and the interface to the test equipment are mandatory to fulfill the test requirements for fast-prototyping. Finally, the last chapter deals with the subject of education, where many people complain about the lack of use of fast-prototyping in higher education for VLSI

  15. Compact MOSFET models for VLSI design

    CERN Document Server

    Bhattacharyya, A B

    2009-01-01

    Practicing designers, students, and educators in the semiconductor field face an ever expanding portfolio of MOSFET models. In Compact MOSFET Models for VLSI Design , A.B. Bhattacharyya presents a unified perspective on the topic, allowing the practitioner to view and interpret device phenomena concurrently using different modeling strategies. Readers will learn to link device physics with model parameters, helping to close the gap between device understanding and its use for optimal circuit performance. Bhattacharyya also lays bare the core physical concepts that will drive the future of VLSI.

  16. Design of a Low-Power VLSI Macrocell for Nonlinear Adaptive Video Noise Reduction

    Directory of Open Access Journals (Sweden)

    Sergio Saponara

    2004-09-01

    Full Text Available A VLSI macrocell for edge-preserving video noise reduction is proposed in the paper. It is based on a nonlinear rational filter enhanced by a noise estimator for blind and dynamic adaptation of the filtering parameters to the input signal statistics. The VLSI filter features a modular architecture allowing the extension of both mask size and filtering directions. Both spatial and spatiotemporal algorithms are supported. Simulation results with monochrome test videos prove its efficiency for many noise distributions, with PSNR improvements up to 3.8 dB with respect to a nonadaptive solution. The VLSI macrocell has been realized in a 0.18 μm CMOS technology using a standard-cells library; it allows for real-time processing of main video formats, up to 30 frames per second (fps) 4CIF, with a power consumption in the order of a few mW.
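
    As a rough illustration of rational filtering, the sketch below implements one simple 1-D edge-preserving form in which the smoothing correction is divided by a rational function of the local gradient. The exact operator, its 2-D masks and the noise estimator used by the macrocell are not reproduced, and the constants are placeholders:

```python
# Illustrative 1-D edge-preserving rational filter (a generic form, not the
# macrocell's operator): the correction toward the local average is divided
# by a rational function of the local gradient, so flat areas are smoothed
# while sharp edges are left mostly intact.

def rational_filter(x, k=0.05, a=4.0):
    y = list(x)
    for n in range(1, len(x) - 1):
        lap = x[n - 1] - 2 * x[n] + x[n + 1]          # local Laplacian
        edge = (x[n - 1] - x[n + 1]) ** 2             # edge sensor
        y[n] = x[n] + lap / (k * edge + a)            # damped near edges
    return y

noisy_flat_then_edge = [10, 12, 9, 11, 10, 11, 100, 101, 99, 100]
print([round(v, 1) for v in rational_filter(noisy_flat_then_edge)])
```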

  17. Characterizations and computational complexity of systolic trellis automata

    Energy Technology Data Exchange (ETDEWEB)

    Ibarra, O H; Kim, S M

    1984-03-01

    Systolic trellis automata are simple models for VLSI. The authors characterize the computing power of these models in terms of Turing machines. The characterizations are useful in proving new results as well as giving simpler proofs of known results. They also derive lower and upper bounds on the computational complexity of the models. 18 references.

  18. The GLUEchip: A custom VLSI chip for detectors readout and associative memories circuits

    International Nuclear Information System (INIS)

    Amendolia, S.R.; Galeotti, S.; Morsani, F.; Passuello, D.; Ristori, L.; Turini, N.

    1993-01-01

    An associative memory full-custom VLSI chip for pattern recognition has been designed and tested in the past years. It is the AMchip, which contains 128 patterns of 60 bits each. To expand the pattern capacity of an Associative Memory bank, the custom VLSI GLUEchip has been developed. The GLUEchip allows the interconnection of up to 16 AMchips or up to 16 GLUEchips: the resulting tree-like structure works like a single AMchip with an output pipelined structure and a pattern capacity increased by a factor of 16 for each GLUEchip used

  19. A Knowledge Based Approach to VLSI CAD

    Science.gov (United States)

    1983-09-01

    A Knowledge Based Approach to VLSI CAD, Louis Steinberg and ... major issues lies in building up and managing the knowledge base of design expertise. We expect that, as with many recent expert systems, in order to

  20. Electro-optic techniques for VLSI interconnect

    Science.gov (United States)

    Neff, J. A.

    1985-03-01

    A major limitation to achieving significant speed increases in very large scale integration (VLSI) lies in the metallic interconnects. They are costly not only from the charge transport standpoint but also from capacitive loading effects. The Defense Advanced Research Projects Agency, in pursuit of the fifth generation supercomputer, is investigating alternatives to the VLSI metallic interconnects, especially the use of optical techniques to transport the information either inter or intrachip. As the on chip performance of VLSI continues to improve via the scale down of the logic elements, the problems associated with transferring data off and onto the chip become more severe. The use of optical carriers to transfer the information within the computer is very appealing from several viewpoints. Besides the potential for gigabit propagation rates, the conversion from electronics to optics conveniently provides a decoupling of the various circuits from one another. Significant gains will also be realized in reducing cross talk between the metallic routings, and the interconnects need no longer be constrained to the plane of a thin film on the VLSI chip. In addition, optics can offer an increased programming flexibility for restructuring the interconnect network.

  1. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available The Advanced Encryption Standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low-power and high-throughput implementation of the AES algorithm using a key expansion approach. We minimize the power consumption and critical path delay using the proposed high-performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design, and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance to the existing VLSI architectures in terms of power, throughput and critical path delay.
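    One common way to validate such a hardware AES datapath is to compare its block-by-block output against a software golden model. The sketch below is an assumed verification aid using the Python `cryptography` package; it is not part of the cited architecture, and the key and test block are placeholders.

```python
# Hedged sketch: a software "golden model" for checking an AES-256 core one
# 16-byte block at a time. Assumes the `cryptography` package is installed.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes256_encrypt_block(key: bytes, block: bytes) -> bytes:
    """Encrypt a single 16-byte block with AES-256 (ECB, one block only)."""
    assert len(key) == 32 and len(block) == 16
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(block) + encryptor.finalize()

def aes256_decrypt_block(key: bytes, block: bytes) -> bytes:
    assert len(key) == 32 and len(block) == 16
    decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    return decryptor.update(block) + decryptor.finalize()

if __name__ == "__main__":
    key = bytes(range(32))    # placeholder 256-bit key
    plain = bytes(16)         # all-zero test block
    cipher_block = aes256_encrypt_block(key, plain)
    assert aes256_decrypt_block(key, cipher_block) == plain
    print(cipher_block.hex())
```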

  2. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam; Ghoneim, Mohamed T.; El Boghdady, Nawal; Halawa, Sarah; Iskander, Sophinese M.; Anis, Mohab H.

    2011-01-01

    This paper proposes using a custom-designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result, a standby leakage power reduction of 99% and energy savings of 33.3% are achieved.

  3. Results of complex treatment of Hodgkin's disease

    International Nuclear Information System (INIS)

    Kolygin, B.A.; Lebedev, S.V.; Borodina, A.F.; Kochurova, N.V.; Malinin, A.P.; Safonova, S.A.; Punanov, Yu.A.

    2000-01-01

    The long-term results of complex treatment (polychemotherapy plus radiotherapy) are evaluated to identify prognostic factors which may be applied for stratification into risk groups. A group of 334 children up to 15 years of age with lymphogranulomatosis, subjected to not less than 2 cycles of inductive polychemotherapy and consolidating radiotherapy, is analyzed. The irradiation was conducted with the radiotherapeutic devices ROCUS, LUE-25 and LUEV-15 M1. Complete remission after the treatment program was achieved in 95.1% of the patients and partial remission in 6.3%; no effect was noted in 0.6% of the patients. Actuarial 10-year survival was 85.9%, and the frequency of relapse-free course was 74.3% [ru

  4. Harnessing VLSI System Design with EDA Tools

    CERN Document Server

    Kamat, Rajanish K; Gaikwad, Pawan K; Guhilot, Hansraj

    2012-01-01

    This book explores various dimensions of EDA technologies for achieving different goals in VLSI system design. Although the scope of EDA is very broad and comprises diversified hardware and software tools to accomplish different phases of VLSI system design, such as design, layout, simulation, testability, prototyping and implementation, this book focuses only on demystifying the code, a.k.a. firmware development and its implementation with FPGAs. Since there are a variety of languages for system design, this book covers various issues related to VHDL, Verilog and System C synergized with EDA tools, using a variety of case studies such as testability, verification and power consumption. * Covers aspects of VHDL, Verilog and Handel C in one text; * Enables designers to judge the appropriateness of each EDA tool for relevant applications; * Omits discussion of design platforms and focuses on design case studies; * Uses design case studies from diversified application domains such as network on chip, hospital on...

  5. VLSI 'smart' I/O module development

    Science.gov (United States)

    Kirk, Dan

    The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.

  6. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. An extensive study of the properties of these new pseudorandom number generators is made using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown that these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design for testability circuitry.

  7. Heavy ion tests on programmable VLSI

    International Nuclear Information System (INIS)

    Provost-Grellier, A.

    1989-11-01

    Radiation in the space environment induces operational damage in onboard computer systems. A strategy for the qualification and selection of Very Large Scale Integration (VLSI) circuits therefore needs to be defined. The 'upset' phenomenon is known to be the most critical radiation effect on integrated circuits. The strategies for testing integrated circuits are reviewed. A method and a test device were developed and applied to circuits that are candidates for space applications. Cyclotron, synchrotron and Californium source experiments were carried out [fr

  8. Applications of VLSI circuits to medical imaging

    International Nuclear Information System (INIS)

    O'Donnell, M.

    1988-01-01

    In this paper the application of advanced VLSI circuits to medical imaging is explored. The relationship of both general purpose signal processing chips and custom devices to medical imaging is discussed using examples of fabricated chips. In addition, advanced CAD tools for silicon compilation are presented. Devices built with these tools represent a possible alternative to custom devices and general purpose signal processors for the next generation of medical imaging systems

  9. VLSI architecture and design for the Fermat Number Transform implementation

    Energy Technology Data Exchange (ETDEWEB)

    Pajayakrit, A.

    1987-01-01

    A new technique of sectioning a pipelined transformer, using the Fermat Number Transform (FNT), is introduced. Also, a novel VLSI design which overcomes the problems of implementing FNTs, for use in fast convolution/correlation, is described. The design comprises one complete section of a pipelined transformer and may be programmed to function at any point in a forward or inverse pipeline, so allowing the construction of a pipelined convolver or correlator using identical chips; thus the favorable properties of the transform can be exploited. This overcomes the difficulty of fitting a complete pipeline onto one chip without resorting to the use of several different designs. The implementation of a high-speed convolver/correlator using the VLSI chips has been successfully developed and tested. For impulse response lengths of up to 16 points, sampling rates of 0.5 MHz can be achieved. Finally, the filter speed performance using the FNT chips is compared to other designs and conclusions are drawn on the merits of the FNT for this application. Also, the advantages and limitations of the FNT are analyzed with respect to the more conventional FFT, and the results are provided.

  10. Design of two easily-testable VLSI array multipliers

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, J.; Shen, J.P.

    1983-01-01

    Array multipliers are well-suited to VLSI implementation because of the regularity in their iterative structure. However, most VLSI circuits are very difficult to test. This paper shows that, with appropriate cell design, array multipliers can be designed to be very easily testable. An array multiplier is called c-testable if all its adder cells can be exhaustively tested while requiring only a constant number of test patterns. The testability of two well-known array multiplier structures is studied. The conventional design of the carry-save array multiplier is shown to be not c-testable. However, a modified design, using a modified adder cell, is generated and shown to be c-testable, requiring only 16 test patterns. Similar results are obtained for the Baugh-Wooley two's complement array multiplier. A modified design of the Baugh-Wooley array multiplier is shown to be c-testable and requires 55 test patterns. The implementation of a practical c-testable 16x16 array multiplier is also presented. 10 references.

  11. An analog VLSI real time optical character recognition system based on a neural architecture

    International Nuclear Information System (INIS)

    Bo, G.; Caviglia, D.; Valle, M.

    1999-01-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system

  12. An analog VLSI real time optical character recognition system based on a neural architecture

    Energy Technology Data Exchange (ETDEWEB)

    Bo, G.; Caviglia, D.; Valle, M. [Genoa Univ. (Italy). Dip. of Biophysical and Electronic Engineering

    1999-03-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  13. PERFORMANCE OF LEAKAGE POWER MINIMIZATION TECHNIQUE FOR CMOS VLSI TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    T. Tharaneeswaran

    2012-06-01

    Full Text Available Leakage power of CMOS VLSI technology is a great concern. To reduce leakage power in CMOS circuits, a Leakage Power Minimization Technique (LPMT) is implemented in this paper. Leakage currents are monitored and compared. The comparator kicks the charge pump to give the body voltage (Vbody). Simulations of these circuits are done using TSMC 0.35 µm technology at various operating temperatures. A current steering Digital-to-Analog Converter (CSDAC) is used as the test core to validate the idea. The test core (e.g., an 8-bit CSDAC) had a power consumption of 347.63 mW. The LPMT circuit alone consumes 6.3405 mW. This technique reduces the leakage power of the 8-bit CSDAC by 5.51 mW and increases the reliability of the test core. Mentor Graphics ELDO and EZ-wave are used for simulations.

  14. Technology computer aided design simulation for VLSI MOSFET

    CERN Document Server

    Sarkar, Chandan Kumar

    2013-01-01

    Responding to recent developments and a growing VLSI circuit manufacturing market, Technology Computer Aided Design: Simulation for VLSI MOSFET examines advanced MOSFET processes and devices through TCAD numerical simulations. The book provides a balanced summary of TCAD and MOSFET basic concepts, equations, physics, and new technologies related to TCAD and MOSFET. A firm grasp of these concepts allows for the design of better models, thus streamlining the design process, saving time and money. This book places emphasis on the importance of modeling and simulations of VLSI MOS transistors and

  15. Wavelength-encoded OCDMA system using opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

    We propose and experimentally demonstrate a 2.5 Gbit/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  16. Wavelength-encoded OCDMA system using opto-VLSI processors

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

    We propose and experimentally demonstrate a 2.5 Gbit/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  17. VLSI Design of SVM-Based Seizure Detection System With On-Chip Learning Capability.

    Science.gov (United States)

    Feng, Lichen; Li, Zunchao; Wang, Yuanfa

    2018-02-01

    A portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time-frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with the table-driven-based Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.
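    As a hedged software analogue of this pipeline, the sketch below extracts per-subband energies from a three-level Daubechies wavelet decomposition and trains an RBF-kernel SVM on toy data. It assumes the PyWavelets and scikit-learn packages and an invented synthetic dataset; the on-chip modified SMO solver and table-driven Gaussian kernel are not reproduced.

```python
# Hedged software analogue of the DWT + SVM seizure-detection pipeline.
# Feature choice and the toy EEG data are illustrative assumptions.
import numpy as np
import pywt
from sklearn.svm import SVC

def eeg_features(epoch, wavelet="db4", level=3):
    """Three-level Daubechies DWT, then simple per-subband energy features."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([np.sum(c ** 2) / len(c) for c in coeffs])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: "seizure" epochs carry an extra low-frequency oscillation.
    t = np.arange(256) / 256.0
    normal = [rng.normal(size=256) for _ in range(40)]
    seizure = [rng.normal(size=256) + 3 * np.sin(2 * np.pi * 3 * t)
               for _ in range(40)]
    X = np.array([eeg_features(e) for e in normal + seizure])
    y = np.array([0] * 40 + [1] * 40)
    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    print("training accuracy:", clf.score(X, y))
```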

  18. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam

    2011-12-01

    Power dissipation poses a great challenge for VLSI designers. With the intense down-scaling of technology, the total power consumption of the chip is made up primarily of leakage power dissipation. This paper proposes using a custom-designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result of implementing this novel power gating technique, a standby leakage power reduction of 99% and energy savings of 33.3% are achieved. Finally, the possible effects of surge currents and ground bounce noise are studied. These findings allow longer operation times for battery-operated systems characterized by long standby periods. © 2011 IEEE.

  19. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  20. ORGANIZATION OF GRAPHIC INFORMATION FOR VIEWING THE MULTILAYER VLSI TOPOLOGY

    Directory of Open Access Journals (Sweden)

    V. I. Romanov

    2016-01-01

    Full Text Available One possible way to reorganize the graphical information describing the set of topology layers of a modern VLSI is considered. The method is aimed at use under the constraint of a limited video card memory size. An additional effect, high performance in forming the multi-image layout of a multi-layer topology of a modern VLSI, is achieved by preloading the required textures by means of an auxiliary background process.

  1. Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1985-01-01

    The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  2. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of a code-based cryptography (cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An error control algorithm is employed both for generation of round keys and for diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. The probability of retrieving the original key from any other round key is reduced by diffusing nonlinear selective bit inversions on the round keys. Randomized selective bit inversions are done on equal lengths of key bits by a Round Constant Feedback Shift Register within the error correction limits of the chosen code. The complexity of retrieving the original key from any other round key is increased by optimal hardware usage. The proposed design is simulated and synthesized using VHDL coding for a Spartan3E FPGA, and results are shown. A comparative analysis is made between 128-bit Advanced Encryption Standard round keys and the proposed round keys to show the security strength of the proposed algorithm. This paper concludes that the chip-based multi-layer key distribution of the proposed algorithm is an enhanced solution to the existing threats on cryptography algorithms.

  3. Synthesis algorithm of VLSI multipliers for ASIC

    Science.gov (United States)

    Chua, O. H.; Eldin, A. G.

    1993-01-01

    Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.

  4. PLA realizations for VLSI state machines

    Science.gov (United States)

    Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.

    1990-01-01

    A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.

  5. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    National Research Council Canada - National Science Library

    Horiuchi, Timothy K; Krishnaprasad, P. S

    2007-01-01

    .... This includes multiple efforts related to a VLSI-based echolocation system being developed in one of our laboratories from algorithm development, bat flight data analysis, to VLSI circuit design...

  6. CAPCAL, 3-D Capacitance Calculator for VLSI Purposes

    International Nuclear Information System (INIS)

    Seidl, Albert; Klose, Helmut; Svoboda, Mildos

    2004-01-01

    1 - Description of program or function: CAPCAL is devoted to the calculation of capacitances of three-dimensional wiring configurations as typically used in VLSI circuits. Due to analogies in the mathematical description, conductance and heat transport problems can also be treated by CAPCAL. To handle a problem using CAPCAL, some approximations have to be applied to the structure under investigation: - the overall geometry has to be confined to a finite domain by using symmetry properties of the problem; - non-rectangular structures have to be simplified into an artwork of multiple boxes. 2 - Method of solution: The electrical field is described by the Laplace equation. The differential equation is discretized by using the finite difference method. NEA-1327/01: The linear equation system is solved by using a combined ADI-multigrid method. NEA-1327/04: The linear equation system is solved by using a conjugate gradient method for CAPCAL V1.3. NEA-1327/05: The linear equation system is solved by using a conjugate gradient method for CAPCAL V1.3. 3 - Restrictions on the complexity of the problem: NEA-1327/01: Certain restrictions of use may arise from the dimensioning of arrays. Field lengths are defined via PARAMETER statements which can easily be modified. If the geometry of the problem is defined such that Neumann boundaries are dominating, the convergence of the iterative equation system solver is affected.
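    To make the finite-difference approach concrete, the sketch below solves a 2-D miniature of the problem: the Laplace equation around a conductor held at 1 V inside a grounded box, followed by a Gauss-law charge estimate. The geometry, grid size, and plain Jacobi iteration are illustrative assumptions; CAPCAL itself is 3-D and uses ADI-multigrid or conjugate gradient solvers.

```python
# Hedged 2-D miniature of the finite-difference capacitance calculation.
import numpy as np

EPS0 = 8.854e-12  # F/m

def solve_laplace(n=61, iters=5000):
    """Potential of a square conductor at 1 V centered in a grounded box."""
    phi = np.zeros((n, n))
    conductor = np.zeros((n, n), dtype=bool)
    conductor[n // 2 - 3:n // 2 + 4, n // 2 - 3:n // 2 + 4] = True
    phi[conductor] = 1.0
    for _ in range(iters):
        # Plain Jacobi update of the grid (CAPCAL uses much faster solvers).
        new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        new[0, :] = new[-1, :] = new[:, 0] = new[:, -1] = 0.0  # grounded box
        new[conductor] = 1.0                                   # fixed potential
        phi = new
    return phi, conductor

def capacitance_per_length(phi, conductor):
    """Estimate C' = Q'/V from Gauss's law on the cells around the conductor.
    The grid spacing cancels in this 2-D per-unit-length estimate (V = 1 V)."""
    flux = 0.0
    n = phi.shape[0]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if conductor[i, j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if not conductor[i + di, j + dj]:
                        flux += phi[i, j] - phi[i + di, j + dj]
    return EPS0 * flux

if __name__ == "__main__":
    phi, cond = solve_laplace()
    print("C' ≈", capacitance_per_length(phi, cond), "F/m")
```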

  7. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    Full Text Available The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit, A-, B-, Γ-, and LLR output calculation modules. Measurements of the complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures and a proposed parameter set lead to defining equations and a comparison between these architectures.
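    The arithmetic at the heart of an ACSO unit is the max* (Jacobian logarithm) operation of log-MAP decoding: add branch metrics, compare, select the maximum, and add a correction offset. The sketch below shows only this arithmetic; the fixed-point word lengths, offset lookup table, and hardware scheduling of the paper's units are not modeled.

```python
# Hedged sketch of the arithmetic behind an add-compare-select-offset (ACSO)
# unit: the max* (Jacobian logarithm) operation used in log-MAP decoding.
import math

def max_star(a: float, b: float) -> float:
    """max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a - b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def forward_metric(alpha_prev, gamma):
    """One trellis step: add branch metrics, then combine with max*.

    alpha_prev: metrics of the two predecessor states
    gamma:      corresponding branch metrics
    """
    return max_star(alpha_prev[0] + gamma[0], alpha_prev[1] + gamma[1])

if __name__ == "__main__":
    print(max_star(1.0, 1.0))              # 1 + ln(2) ≈ 1.693
    print(forward_metric((0.0, -2.5), (1.2, 0.3)))
```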

  8. An electron undulating ring for VLSI lithography

    International Nuclear Information System (INIS)

    Tomimasu, T.; Mikado, T.; Noguchi, T.; Sugiyama, S.; Yamazaki, T.

    1985-01-01

    The development of the ETL storage ring ''TERAS'' as an undulating ring has been continued to achieve wide-area exposure of synchrotron radiation (SR) in VLSI lithography. Stable vertical and horizontal undulating motions of stored beams are demonstrated around a horizontal design orbit of TERAS, using two small steering magnets, one for vertical undulation and the other for horizontal undulation. Each steering magnet is inserted into one of the periodic configurations of guide field elements. As one useful application of undulating electron beams, a vertically wide exposure of SR has been demonstrated in SR lithography. The maximum vertical deviation from the design orbit occurs near the steering magnet. The maximum vertical tilt angle of the undulating beam near the nodes is about ±2 mrad for a steering magnetic field of 50 gauss. Another proposal is for high-intensity, uniform and wide exposure of SR from a wiggler installed in TERAS, using vertical and horizontal undulating motions of stored beams. A 1.4 m long permanent magnet wiggler has been installed for this purpose this April

  9. Convolving optically addressed VLSI liquid crystal SLM

    Science.gov (United States)

    Jared, David A.; Stirk, Charles W.

    1994-03-01

    We designed, fabricated, and tested an optically addressed spatial light modulator (SLM) that performs a 3 X 3 kernel image convolution using ferroelectric liquid crystal on VLSI technology. The chip contains a 16 X 16 array of current-mirror-based convolvers with a fixed kernel for finding edges. The pixels are located on 75 micron centers, and the modulators are 20 microns on a side. The array successfully enhanced edges in illumination patterns. We developed a high-level simulation tool (CON) for analyzing the performance of convolving SLM designs. CON has a graphical interface and simulates SLM functions using SPICE-like device models. The user specifies the pixel function along with the device parameters and nonuniformities. We discovered through analysis, simulation and experiment that the operation of current-mirror-based convolver pixels is degraded at low light levels by the variation of transistor threshold voltages inherent to CMOS chips. To function acceptably, the test SLM required the input image to have a minimum irradiance of 10 μW/cm2. The minimum required irradiance can be further reduced by adding a photodarlington near the photodetector or by increasing the size of the transistors used to calculate the convolution.
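    For reference, the digital equivalent of the chip's operation is an ordinary 3 x 3 convolution over a 16 x 16 frame. The Laplacian kernel below is an assumption made for illustration; the paper only states that the fixed kernel finds edges, and the actual coefficients are set by the current-mirror ratios.

```python
# Hedged digital equivalent of the SLM's fixed 3x3 edge-finding convolution.
import numpy as np

KERNEL = np.array([[0, -1,  0],
                   [-1, 4, -1],
                   [0, -1,  0]], dtype=float)   # assumed Laplacian kernel

def convolve3x3(image, kernel=KERNEL):
    """Valid-region 3x3 convolution (no border handling, like a small tile)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

if __name__ == "__main__":
    frame = np.zeros((16, 16))
    frame[:, 8:] = 1.0                       # a vertical edge
    edges = convolve3x3(frame)
    print(np.abs(edges).max(axis=0))         # response peaks at the edge
```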

  10. Handbook of VLSI chip design and expert systems

    CERN Document Server

    Schwarz, A F

    1993-01-01

    Handbook of VLSI Chip Design and Expert Systems provides information pertinent to the fundamental aspects of expert systems, which provides a knowledge-based approach to problem solving. This book discusses the use of expert systems in every possible subtask of VLSI chip design as well as in the interrelations between the subtasks.Organized into nine chapters, this book begins with an overview of design automation, which can be identified as Computer-Aided Design of Circuits and Systems (CADCAS). This text then presents the progress in artificial intelligence, with emphasis on expert systems.

  11. VLSI micro- and nanophotonics science, technology, and applications

    CERN Document Server

    Lee, El-Hang; Razeghi, Manijeh; Jagadish, Chennupati

    2011-01-01

    Addressing the growing demand for larger capacity in information technology, VLSI Micro- and Nanophotonics: Science, Technology, and Applications explores issues of science and technology of micro/nano-scale photonics and integration for broad-scale and chip-scale Very Large Scale Integration photonics. This book is a game-changer in the sense that it is quite possibly the first to focus on "VLSI Photonics". Very little effort has been made to develop integration technologies for micro/nanoscale photonic devices and applications, so this reference is an important and necessary early-stage pe

  12. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    Science.gov (United States)

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
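    In the rate-based limit described above, the synaptic changes depend only on average pre- and postsynaptic firing rates. The sketch below uses Oja's normalized form of rate-based Hebbian learning as a stand-in, with assumed learning rate, input rates, and a linear rate neuron; the chip's spike-based rule differs in detail and is only tuned to show this behaviour.

```python
# Hedged sketch of rate-based Hebbian learning (Oja's normalized form), used
# here as a software stand-in for the rate-based limit of the chip's rule.
import numpy as np

def oja_update(w, pre, post, eta=0.001):
    """dw = eta * post * (pre - post * w): Hebbian growth plus normalization."""
    return w + eta * post * (pre - post * w)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rates = np.array([2.0, 8.0, 2.0, 8.0])         # assumed mean input rates
    w = rng.uniform(0.0, 0.1, size=4)
    for _ in range(3000):
        pre = rng.poisson(lam=rates).astype(float)  # rate-coded inputs
        post = float(np.dot(w, pre))                # linear rate neuron
        w = oja_update(w, pre, post)
    print(np.round(w, 2))   # weights of the faster-firing inputs end up larger
```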

  13. Physico-topological methods of increasing stability of the VLSI circuit components to irradiation. Fiziko-topologhicheskie sposoby uluchsheniya radiatsionnoj stojkosti komponentov BIS

    Energy Technology Data Exchange (ETDEWEB)

    Pereshenkov, V S [MIFI, Moscow, (Russian Federation); Shishianu, F S; Rusanovskij, V I [S. Lazo KPI, Chisinau, (Moldova, Republic of)

    1992-01-01

    The paper presents the method used and the experimental results obtained for an 8-bit microprocessor irradiated with gamma rays and neutrons. The correlation of the electrical and technological parameters with the irradiation parameters is revealed. The influence of leakage currents between devices incorporated in VLSI circuits was studied. The results obtained make it possible to determine the technological parameters necessary for designing circuits able to work at predetermined doses. The substrate doping concentration necessary for the isolation which eliminates the leakage current between devices and prevents VLSI circuit breakdown was determined. (Author).

  14. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Full Text Available Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in the use of computational resources on the selected FPGA development platform. This is because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform is presented in this paper. This is accomplished by proposing a new memory efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory efficient architecture along with proper input/output interfaces is implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering based motion detection architecture. The new memory efficient system robustly and automatically detects motion in real-world scenarios (both for static backgrounds and pseudo-stationary backgrounds) in real-time for standard PAL (720 × 576) size color video.
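    To illustrate the pixel-level task being accelerated, the sketch below performs motion detection by simple background subtraction on a PAL-sized gray frame. A single running-average background and a fixed threshold are assumptions for illustration; the paper's memory-efficient clustering scheme keeps a compressed per-pixel background model instead.

```python
# Hedged sketch of pixel-level motion detection by background subtraction.
import numpy as np

def detect_motion(frame, background, alpha=0.05, threshold=25):
    """Return (binary motion mask, updated background) for one gray frame."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold
    # Update the background only where no motion was detected.
    updated = background.astype(np.float32)
    updated[~mask] = (1 - alpha) * updated[~mask] + alpha * frame[~mask]
    return mask, updated.astype(np.uint8)

if __name__ == "__main__":
    bg = np.full((576, 720), 100, dtype=np.uint8)   # PAL-sized frame
    frame = bg.copy()
    frame[200:240, 300:360] = 200                   # a bright moving object
    mask, bg = detect_motion(frame, bg)
    print("motion pixels:", int(mask.sum()))
```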

  15. Preliminary results for complexation of Pu with humic acid

    Energy Technology Data Exchange (ETDEWEB)

    Guczi, J.; Szabo, G. [National Research Inst. for Radiobiology and Radiohygiene, Budapest, H-1775 (Hungary)]. e-mail: guczi@hp.osski.hu; Reiller, P. [CEA, CE Saclay, Nuclear Energy Division/DPC/SERC, Laboratoire de Speciation des Radionucléides et des Molecules, F-91191 Gif-sur-Yvette (France); Bulman, R.A. [Radiation Protection Division, Health Protection Agency, Chilton, Didcot (United Kingdom); Geckeis, H. [FZK - Inst. fuer Nukleare Entsorgung, Karlsruhe (Germany)

    2007-06-15

    The interaction of plutonium with humic substances has been investigated by a batch method using surface-bound humic acid in perchlorate solutions at pH 4-6. By using these novel solid phases, complexing capacities and interaction constants are obtained, and the complexing behavior of plutonium is analyzed. Pu(IV)-humate conditional stability constants have been evaluated from the data obtained in these experiments by using non-linear regression of binding isotherms. The results have been interpreted in terms of complexes of 1:1 stoichiometry.

  16. Numerical analysis of electromigration in thin film VLSI interconnections

    NARCIS (Netherlands)

    Petrescu, V.; Mouthaan, A.J.; Schoenmaker, W.; Angelescu, S.; Vissarion, R.; Dima, G.; Wallinga, Hans; Profirescu, M.D.

    1995-01-01

    Due to the continuing downscaling of the dimensions in VLSI circuits, electromigration is becoming a serious reliability hazard. A software tool based on finite element analysis has been developed to solve the two partial differential equations of the two particle vacancy/imperfection model.

  17. Driving a car with custom-designed fuzzy inferencing VLSI chips and boards

    Science.gov (United States)

    Pin, Francois G.; Watanabe, Yutaka

    1993-01-01

    Vehicle control in a-priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties can not be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds, and therefore, making control of 'reflex-type' of motions envisionable. The use of these boards and the approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation in a-priori unknown environments are first discussed. Then how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a-priori unknown environments on the basis of sparse and imprecise sensor data is described. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoors and outdoors experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or safety enhancing driver's aid using the new fuzzy inferencing hardware system and some human
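    The sketch below shows one fuzzy inference step of the kind the boards execute in parallel over their rule base: fuzzify a range reading with triangular membership functions, fire two rules, and defuzzify into a speed command. The membership functions, rule set, and single-sensor input are invented for illustration; the real system fuses three sonar channels and a much larger rule base in hardware.

```python
# Hedged sketch of one fuzzy inference step (fuzzify -> fire rules -> defuzzify)
# for a driver's-aid speed command. Rules and membership shapes are assumptions.
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def driver_aid(front_range_m):
    near = tri(front_range_m, -1.0, 0.0, 3.0)   # obstacle is near
    far = tri(front_range_m, 2.0, 6.0, 6.0)     # way ahead is clear
    # Rule 1: IF near THEN slow down (-1); Rule 2: IF far THEN speed up (+1).
    # Defuzzify by the weighted average of the rule outputs.
    weight = near + far
    return (near * -1.0 + far * 1.0) / weight if weight else 0.0

if __name__ == "__main__":
    for r in (0.5, 2.5, 5.5):
        print(f"range {r} m -> speed command {driver_aid(r):+.2f}")
```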

  18. Communication Complexity A treasure house of lower bounds

    Indian Academy of Sciences (India)

    Prahladh Harsha TIFR

    Applications: data structures, VLSI design, time-space tradeoffs, circuit complexity, streaming, auctions, combinatorial optimization, and more. The randomized communication complexity of INTER is Ω(n); consequently, there is no parallelizable monotone circuit that computes a matching in a given graph ...

  19. FILTRES: a 128 channels VLSI mixed front-end readout electronic development for microstrip detectors

    International Nuclear Information System (INIS)

    Anstotz, F.; Hu, Y.; Michel, J.; Sohler, J.L.; Lachartre, D.

    1998-01-01

    We present a VLSI digital-analog readout electronic chain for silicon microstrip detectors. The characteristics of this circuit have been optimized for the high-resolution tracker of the CERN CMS experiment. The chip consists of 128 channels at 50 μm pitch. Each channel is composed of a charge amplifier, a CR-RC shaper, an analog memory, an analog processor, and an output FIFO read out serially by a multiplexer. The chip has been processed in the radiation-hard DMILL technology. This paper describes the architecture of the circuit and presents test results of the 128-channel full-chain chip. (orig.)

  20. A multichip aVLSI system emulating orientation selectivity of primary visual cortical cells.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2005-07-01

    In this paper, we designed and fabricated a multichip neuromorphic analog very large scale integrated (aVLSI) system, which emulates the orientation selective response of the simple cell in the primary visual cortex. The system consists of a silicon retina and an orientation chip. An image, which is filtered by a concentric center-surround (CS) antagonistic receptive field of the silicon retina, is transferred to the orientation chip. The image transfer from the silicon retina to the orientation chip is carried out with analog signals. The orientation chip selectively aggregates multiple pixels of the silicon retina, mimicking the feedforward model proposed by Hubel and Wiesel. The chip provides the orientation-selective (OS) outputs which are tuned to 0 degrees, 60 degrees, and 120 degrees. The feed-forward aggregation reduces the fixed pattern noise that is due to the mismatch of the transistors in the orientation chip. The spatial properties of the orientation selective response were examined in terms of the adjustable parameters of the chip, i.e., the number of aggregated pixels and size of the receptive field of the silicon retina. The multichip aVLSI architecture used in the present study can be applied to implement higher order cells such as the complex cell of the primary visual cortex.

  1. A multi coding technique to reduce transition activity in VLSI circuits

    International Nuclear Information System (INIS)

    Vithyalakshmi, N.; Rajaram, M.

    2014-01-01

    Advances in VLSI technology have enabled the implementation of complex digital circuits in a single chip, reducing system size and power consumption. In deep submicron low power CMOS VLSI design, the main cause of energy dissipation is charging and discharging of internal node capacitances due to transition activity. Transition activity is one of the major factors that also affect the dynamic power dissipation. This paper proposes power reduction analyzed through algorithm and logic circuit levels. In algorithm level the key aspect of reducing power dissipation is by minimizing transition activity and is achieved by introducing a data coding technique. So a novel multi coding technique is introduced to improve the efficiency of transition activity up to 52.3% on the bus lines, which will automatically reduce the dynamic power dissipation. In addition, 1 bit full adders are introduced in the Hamming distance estimator block, which reduces the device count. This coding method is implemented using Verilog HDL. The overall performance is analyzed by using Modelsim and Xilinx Tools. In total 38.2% power saving capability is achieved compared to other existing methods. (semiconductor technology)

  2. Complex decision-making: initial results of an empirical study

    Directory of Open Access Journals (Sweden)

    Pier Luigi Baldi

    2011-09-01

    Full Text Available A brief survey of key literature on emotions and decision-making introduces an empirical study of a group of university students exploring the effects of decision-making complexity on error risk. The results clearly show that decision-making under stress in the experimental group produces significantly more errors than in the stress-free control group.

  3. Complex decision-making: initial results of an empirical study

    OpenAIRE

    Pier Luigi Baldi

    2011-01-01

    A brief survey of key literature on emotions and decision-making introduces an empirical study of a group of university students exploring the effects of decision-making complexity on error risk. The results clearly show that decision-making under stress in the experimental group produces significantly more errors than in the stress-free control group.

  4. Best Proximity Point Results in Complex Valued Metric Spaces

    Directory of Open Access Journals (Sweden)

    Binayak S. Choudhury

    2014-01-01

    complex valued metric spaces. We treat the problem as that of finding the global optimal solution of a fixed point equation although the exact solution does not in general exist. We also define and use the concept of P-property in such spaces. Our results are illustrated with examples.

  5. Use of complex electronic equipment within radiative areas of PWR power plants: feasibility study

    International Nuclear Information System (INIS)

    Fremont, P.; Carquet, M.

    1988-01-01

    EDF has undertaken a study to evaluate the technical and economic feasibility of using complex electronic equipment within radiative areas of PWR power plants. The study relies on tests of VLSI components (random access memories) under gamma-ray irradiation, the aims of which are to evaluate the radiation dose that the components can withstand and to develop a selection method. Results of tests at 125 rad/h and 16 rad/h are given [fr

  6. Pursuit, Avoidance, and Cohesion in Flight: Multi-Purpose Control Laws and Neuromorphic VLSI

    Science.gov (United States)

    2010-10-01

    spatial navigation in mammals. We have designed, fabricated, and are now testing a neuromorphic VLSI chip that implements a spike-based, attractor...Control Laws and Neuromorphic VLSI 5a. CONTRACT NUMBER 070402-7705 5b. GRANT NUMBER FA9550-07-1-0446 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S...implementations (custom Neuromorphic VLSI and robotics) we will apply important practical constraints that can lead to deeper insight into how and why efficient

  7. VLSI System Implementation of 200 MHz, 8-bit, 90nm CMOS Arithmetic and Logic Unit (ALU Processor Controller

    Directory of Open Access Journals (Sweden)

    Fazal NOORBASHA

    2012-08-01

    Full Text Available This study presents the Very Large Scale Integration (VLSI) system implementation of a 200 MHz, 8-bit, 90 nm Complementary Metal Oxide Semiconductor (CMOS) Arithmetic and Logic Unit (ALU) processor controller with a logic-gate design style and 0.12 µm six-metal 90 nm CMOS fabrication technology. The system blocks and their behaviour are defined and the logical design is implemented at gate level in the design phase. Then, the logic circuits are simulated and the subunits are converted into a 90 nm CMOS layout. Finally, in order to construct the VLSI system, these units are placed in the floor plan and simulated with analog and digital, logic and switch level simulators. The results of the simulations indicate that the VLSI system can execute different instructions, which can be divided into subgroups: transfer instructions, arithmetic and logic instructions, rotate and shift instructions, branch instructions, input/output instructions, and control instructions. The data bus of the system is 16-bit. It runs at 200 MHz and operates at 1.2 V. In this paper, the parametric analysis of the system, the design steps and the obtained results are explained.

  8. Complex VLSI Feature Comparison for Commercial Microelectronics Verification

    Science.gov (United States)

    2014-03-27

    ...corruption, tampering and counterfeiting due to these technologies' extremely sensitive purposes. Adversarial intervention in the IC design and... counterfeiting in its motive: whereas counterfeiting is usually motivated by greed, tampering is an act of espionage or sabotage [26]. Finally, poor

  9. Advanced symbolic analysis for VLSI systems methods and applications

    CERN Document Server

    Shi, Guoyong; Tlelo Cuautle, Esteban

    2014-01-01

    This book provides comprehensive coverage of the recent advances in symbolic analysis techniques for design automation of nanometer VLSI systems. The presentation is organized in parts on fundamentals, basic implementation methods and applications for VLSI design. Topics emphasized include statistical timing and crosstalk analysis, statistical and parallel analysis, performance bound analysis and behavioral modeling for analog integrated circuits. Among the recent advances, the Binary Decision Diagram (BDD) based approaches are studied in depth. The BDD-based hierarchical symbolic analysis approaches have essentially broken the analog circuit size barrier. In particular, this book: • Provides an overview of classical symbolic analysis methods and a comprehensive presentation on the modern BDD-based symbolic analysis techniques; • Describes detailed implementation strategies for BDD-based algorithms, including the principles of zero-suppression, variable ordering and canonical reduction; • Int...

  10. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  11. Emerging Applications for High K Materials in VLSI Technology

    Science.gov (United States)

    Clark, Robert D.

    2014-01-01

    The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing. PMID:28788599

  12. Emerging Applications for High K Materials in VLSI Technology

    Directory of Open Access Journals (Sweden)

    Robert D. Clark

    2014-04-01

    Full Text Available The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing.

  13. A VLSI image processor via pseudo-mersenne transforms

    International Nuclear Information System (INIS)

    Sei, W.J.; Jagadeesh, J.M.

    1986-01-01

    The computational burden of image processing in medical fields, where a large amount of information must be processed quickly and accurately, has led to consideration of special-purpose image processor chip design for some time. The very large scale integration (VLSI) revolution has made it cost-effective and feasible to consider the design of special-purpose chips for medical imaging fields. This paper describes a VLSI CMOS chip suitable for parallel implementation of image processing algorithms and cyclic convolutions by using the Pseudo-Mersenne Number Transform (PMNT). The main advantages of the PMNT over the Fast Fourier Transform (FFT) are: (1) no multiplications are required; (2) integer arithmetic is used. The design and development of this processor, which operates on 32-point convolutions or 5 x 5 image windows, are described.

  14. Embedded Processor Based Automatic Temperature Control of VLSI Chips

    Directory of Open Access Journals (Sweden)

    Narasimha Murthy Yayavaram

    2009-01-01

    Full Text Available This paper presents embedded processor based automatic temperature control of VLSI chips, using the temperature sensor LM35 and the ARM processor LPC2378. Due to the very high packing density, VLSI chips heat up very quickly and, if not cooled properly, their performance is much affected. In the present work, the sensor, which is kept in close proximity to the IC, senses the temperature, and the speed of the fan arranged near the IC is controlled based on the PWM signal generated by the ARM processor. A buzzer is also provided with the hardware to indicate either the failure of the fan or overheating of the IC. The entire process is achieved by developing a suitable embedded C program.
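    The control law itself is simple: convert the LM35 reading (10 mV per °C) to a temperature, map it to a PWM duty cycle, and raise an alarm above an over-temperature threshold. The sketch below shows only this mapping with assumed threshold values; the article implements the equivalent logic in embedded C on the LPC2378.

```python
# Hedged sketch of the temperature-to-PWM control law only; thresholds are
# assumptions, and the real firmware runs in embedded C on the ARM LPC2378.
def lm35_to_celsius(millivolts: float) -> float:
    """The LM35 outputs 10 mV per degree Celsius."""
    return millivolts / 10.0

def fan_duty_cycle(temp_c: float, t_min=35.0, t_max=70.0) -> float:
    """0 % below t_min, 100 % above t_max, linear in between."""
    if temp_c <= t_min:
        return 0.0
    if temp_c >= t_max:
        return 100.0
    return 100.0 * (temp_c - t_min) / (t_max - t_min)

def buzzer_on(temp_c: float, alarm_c=85.0) -> bool:
    return temp_c >= alarm_c

if __name__ == "__main__":
    for mv in (300.0, 550.0, 900.0):
        t = lm35_to_celsius(mv)
        print(f"{t:5.1f} C -> duty {fan_duty_cycle(t):5.1f} %, "
              f"buzzer {'ON' if buzzer_on(t) else 'off'}")
```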

  15. The AMchip: A VLSI associative memory for track finding

    International Nuclear Information System (INIS)

    Morsani, F.; Galeotti, S.; Passuello, D.; Amendolia, S.R.; Ristori, L.; Turini, N.

    1992-01-01

    An associative memory to be used for super-fast track finding in future high energy physics experiments has been implemented on silicon as a full-custom CMOS VLSI chip (the AMchip). The first prototype has been designed and successfully tested at INFN in Pisa. It is implemented in 1.6 μm, double metal, silicon gate CMOS technology and contains about 140 000 MOS transistors on a 1x1 cm² silicon chip. (orig.)

  16. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    Science.gov (United States)

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  17. An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits

    Science.gov (United States)

    Corliss, Walter F., II

    1989-03-01

    The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, which was previously designed, was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented and the chip was fabricated by MOSIS. This fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full-custom VLSI chip designed at NPS to be tested with the NPS digital analysis system, the Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipment, is included as an appendix.

  18. Analog VLSI Models of Range-Tuned Neurons in the Bat Echolocation System

    Directory of Open Access Journals (Sweden)

    Horiuchi Timothy

    2003-01-01

    Full Text Available Bat echolocation is a fascinating topic of research for both neuroscientists and engineers, due to the complex and extremely time-constrained nature of the problem and its potential for application to engineered systems. In the bat's brainstem and midbrain exist neural circuits that are sensitive to the specific difference in time between the outgoing sonar vocalization and the returning echo. While some of the details of the neural mechanisms are known to be species-specific, a basic model of reafference-triggered, postinhibitory rebound timing is reasonably well supported by available data. We have designed low-power, analog VLSI circuits to mimic this mechanism and have demonstrated range-dependent outputs for use in a real-time sonar system. These circuits are being used to implement range-dependent vocalization amplitude, vocalization rate, and closest target isolation.
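    The timing arithmetic behind range-tuned responses is straightforward: each pulse-echo delay maps to a target range, and the earliest echo corresponds to the closest target. The sketch below shows only this delay-to-range conversion and closest-target selection; the postinhibitory-rebound neuron dynamics realized by the analog circuits are not modeled, and the echo delays are invented values.

```python
# Hedged sketch of the delay-to-range arithmetic behind range-tuned responses.
SPEED_OF_SOUND = 343.0  # m/s in air

def delay_to_range(delay_s: float) -> float:
    """Round-trip pulse-echo delay -> one-way target range."""
    return SPEED_OF_SOUND * delay_s / 2.0

def closest_target(echo_delays_s):
    """Isolate the closest target: the smallest pulse-echo delay."""
    return delay_to_range(min(echo_delays_s))

if __name__ == "__main__":
    echoes = [5.8e-3, 2.3e-3, 11.0e-3]                      # assumed delays (s)
    print([round(delay_to_range(d), 2) for d in echoes])    # ranges in metres
    print("closest target at", round(closest_target(echoes), 2), "m")
```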

  19. vPELS: An E-Learning Social Environment for VLSI Design with Content Security Using DRM

    Science.gov (United States)

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2014-01-01

    This article provides a proposal for a personal e-learning system (vPELS, where "v" stands for VLSI: very large scale integrated circuit) architecture in the context of a social network environment for VLSI design. The main objective of vPELS is to develop individual skills on a specific subject--say, VLSI--and share resources with peers.…

  20. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    Science.gov (United States)

    Mokrova, Nataliya V.

    2018-03-01

    The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product is confirmed. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve an optimal hardening mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.

  1. The effect of query complexity on Web searching results

    Directory of Open Access Journals (Sweden)

    B.J. Jansen

    2000-01-01

    Full Text Available This paper presents findings from a study of the effects of query structure on retrieval by Web search services. Fifteen queries were selected from the transaction log of a major Web search service in simple query form with no advanced operators (e.g., Boolean operators, phrase operators, etc.) and submitted to 5 major search engines - Alta Vista, Excite, FAST Search, Infoseek, and Northern Light. The results from these queries became the baseline data. The original 15 queries were then modified using the various search operators supported by each of the 5 search engines for a total of 210 queries. Each of these 210 queries was also submitted to the applicable search service. The results obtained were then compared to the baseline results. A total of 2,768 search results were returned by the set of all queries. In general, increasing the complexity of the queries had little effect on the results, with a greater than 70% overlap in results on average. Implications for the design of Web search services and directions for future research are discussed.
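
    For concreteness, the kind of result-overlap comparison described above can be expressed in a few lines of Python; the result identifiers and the figure printed below are illustrative placeholders, not the study's data.

        # Hedged sketch: measuring overlap between the results of a baseline query
        # and a rewritten query, in the spirit of the comparison described above.

        def result_overlap(baseline, modified):
            """Fraction of baseline results that also appear in the modified results."""
            base, mod = set(baseline), set(modified)
            return len(base & mod) / len(base) if base else 0.0

        baseline_results = ["url-a", "url-b", "url-c", "url-d", "url-e"]   # hypothetical
        modified_results = ["url-a", "url-b", "url-c", "url-f", "url-g"]   # hypothetical
        print(f"overlap: {result_overlap(baseline_results, modified_results):.0%}")  # 60%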

  2. A novel configurable VLSI architecture design of window-based image processing method

    Science.gov (United States)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image processing architectures can implement only one specific kind of algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume extra logic resources. To address these problems, this paper proposes a new VLSI architecture for window-based image processing operations that is configurable and explicitly accounts for the image boundary. An efficient technique manages the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no additional delay and reduces overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. Different scalar-function and reduction-function operations can be performed in a pipeline, supporting a variety of window-based image processing applications. Compared with other reported structures, the performance of the new structure is similar to some and superior to others; in particular, compared with the systolic array processor CWP at the same frequency, it achieves a speed increase of approximately 12.9%. The proposed parallel VLSI architecture was implemented with SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively, and the processing time is independent of the particular window-based algorithm mapped onto the structure.
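
    As a hedged software sketch of the scalar-function / reduction-function split described above, a configurable window operator might look like the following; border handling here uses simple edge replication, which is a simplification of the overlap-and-flush scheme in the paper, and the function names are illustrative.

        # Hedged sketch: a configurable window operator, illustrating how one
        # datapath can cover convolution-like and morphology-like operations by
        # swapping the scalar and reduction functions.
        import numpy as np

        def window_op(img, kernel, scalar_fn, reduce_fn):
            """Apply scalar_fn(window, kernel) over each kxk window, then reduce_fn."""
            k = kernel.shape[0]
            pad = k // 2
            padded = np.pad(img, pad, mode="edge")       # replicate borders (simplification)
            out = np.empty_like(img, dtype=float)
            H, W = img.shape
            for r in range(H):
                for c in range(W):
                    win = padded[r:r + k, c:c + k]
                    out[r, c] = reduce_fn(scalar_fn(win, kernel))
            return out

        img = np.arange(25, dtype=float).reshape(5, 5)
        box = np.ones((3, 3)) / 9.0
        blur = window_op(img, box, scalar_fn=np.multiply, reduce_fn=np.sum)            # 2-D filtering
        erode = window_op(img, np.ones((3, 3)), scalar_fn=np.multiply, reduce_fn=np.min)  # grayscale erosion
        print(blur[2, 2], erode[2, 2])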

  3. N Point DCT VLSI Architecture for Emerging HEVC Standard

    OpenAIRE

    Ahmed, Ashfaq; Shahid, Muhammad Usman; Rehman, Ata ur

    2012-01-01

    This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4 × 4 up to 32 × 32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to gain speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into ...

  4. Formal verification an essential toolkit for modern VLSI design

    CERN Document Server

    Seligman, Erik; Kumar, M V Achutha Kiran

    2015-01-01

    Formal Verification: An Essential Toolkit for Modern VLSI Design presents practical approaches for design and validation, with hands-on advice for working engineers integrating these techniques into their work. Building on a basic knowledge of System Verilog, this book demystifies FV and presents the practical applications that are bringing it into mainstream design and validation processes at Intel and other companies. The text prepares readers to effectively introduce FV in their organization and deploy FV techniques to increase design and validation productivity. Presents formal verific

  5. Percutaneous debridement of complex pyogenic liver abscesses: technique and results

    International Nuclear Information System (INIS)

    Morettin, L.B.

    1992-01-01

    The author's approach and technique in the treatment of complex liver abscesses that persisted or recurred following percutaneous drainage are described. Six patients were treated by percutaneous debridement utilizing an instrument specifically constructed for that purpose. Four patients were chronically ill but stable. Two patients were septic, hypotensive and considered life threatened. All patients had primary pyogenic abscesses. Four had demonstrated mixed bacterial flora consisting of E. coli, Klebsiella, Proteus and gram-positive cocci, and two were caused by E. coli only. In all cases a contrast-enhanced CT of the abdomen revealed multiloculated or septated abscesses containing large central debris and a peripheral shell or halo of compromised hepatic parenchyma. Debridement was successful in all cases, resulting in complete healing in 4-12 days. Follow-up for periods of between 1 and 4.5 years revealed no recurrences. Three cases of infected tumors of the liver were referred for treatment. CT findings in these cases demonstrated a well-developed external capsule and internal septations; the absence of a surrounding halo of compromised parenchyma distinguishes them from primary abscesses. This preliminary experience allows the conclusion that percutaneous debridement of pyogenic liver abscesses can be safely performed, can be curative in selected patients with chronic abscesses and may be life-saving in critically ill and life-threatened patients. (orig.)

  6. Operation of a Fast-RICH Prototype with VLSI readout electronics

    Energy Technology Data Exchange (ETDEWEB)

    Guyonnet, J.L. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Arnold, R. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Jobez, J.P. (Coll. de France, 75 - Paris (France)); Seguinot, J. (Coll. de France, 75 - Paris (France)); Ypsilantis, T. (Coll. de France, 75 - Paris (France)); Chesi, E. (CERN / ECP Div., Geneve (Switzerland)); Racz, A. (CERN / ECP Div., Geneve (Switzerland)); Egger, J. (Paul Scherrer Inst., Villigen (Switzerland)); Gabathuler, K. (Paul Scherrer Inst., Villigen (Switzerland)); Joram, C. (Karlsruhe Univ. (Germany)); Adachi, I. (KEK, Tsukuba (Japan)); Enomoto, R. (KEK, Tsukuba (Japan)); Sumiyoshi, T. (KEK, Tsukuba (Japan))

    1994-04-01

    We discuss the first test results, obtained with cosmic rays, of a full-scale Fast-RICH Prototype with proximity-focused 10 mm thick LiF (CaF₂) solid radiators, TEA as photosensor in CH₄, and readout of 12 × 10³ cathode pads (5.334 × 6.604 mm²) using dedicated VLSI electronics we have developed. The number of detected photoelectrons is 7.7 (6.9) for the CaF₂ (LiF) radiator, very near to the expected values 6.4 (7.5) from Monte Carlo simulations. The single-photon Cherenkov angle resolution σ_θ

  7. Opto-VLSI-based reconfigurable free-space optical interconnects architecture

    DEFF Research Database (Denmark)

    Aljada, Muhsen; Alameh, Kamal; Chung, Il-Sug

    2007-01-01

    is the Opto-VLSI processor, which can be driven by digital phase steering and multicasting holograms that reconfigure the optical interconnects between the input and output ports. The optical interconnects architecture is experimentally demonstrated at 2.5 Gbps using a high-speed 1×3 VCSEL array and a 1×3 photoreceiver array in conjunction with two 1×4096 pixel Opto-VLSI processors. The minimisation of the crosstalk between the output ports is achieved by appropriately aligning the VCSEL and PD elements with respect to the Opto-VLSI processors and driving the latter with optimal steering phase holograms.

  8. Assimilation of Biophysical Neuronal Dynamics in Neuromorphic VLSI.

    Science.gov (United States)

    Wang, Jun; Breen, Daniel; Akinin, Abraham; Broccard, Frederic; Abarbanel, Henry D I; Cauwenberghs, Gert

    2017-12-01

    Representing the biophysics of neuronal dynamics and behavior offers a principled analysis-by-synthesis approach toward understanding mechanisms of nervous system functions. We report on a set of procedures assimilating and emulating neurobiological data on a neuromorphic very large scale integrated (VLSI) circuit. The analog VLSI chip, NeuroDyn, features 384 digitally programmable parameters specifying 4 generalized Hodgkin-Huxley neurons coupled through 12 conductance-based chemical synapses. The parameters also describe reversal potentials, maximal conductances, and spline-regressed kinetic functions for ion channel gating variables. In one set of experiments, we assimilated membrane potential recorded from one of the neurons on the chip to the model structure upon which NeuroDyn was designed using the known current input sequence. We arrived at the programmed parameters except for model errors due to analog imperfections in the chip fabrication. In a related set of experiments, we replicated songbird individual neuron dynamics on NeuroDyn by estimating and configuring parameters extracted using data assimilation from intracellular neural recordings. Faithful emulation of detailed biophysical neural dynamics will enable the use of NeuroDyn as a tool to probe electrical and molecular properties of functional neural circuits. Neuroscience applications include studying the relationship between molecular properties of neurons and the emergence of different spike patterns or different brain behaviors. Clinical applications include studying and predicting effects of neuromodulators or neurodegenerative diseases on ion channel kinetics.

  9. Development of Radhard VLSI electronics for SSC calorimeters

    International Nuclear Information System (INIS)

    Dawson, J.W.; Nodulman, L.J.

    1989-01-01

    A new program of development of integrated electronics for liquid argon calorimeters in the SSC detector environment is being started at Argonne National Laboratory. Scientists from Brookhaven National Laboratory and Vanderbilt University together with an industrial participant are expected to collaborate in this work. Interaction rates, segmentation, and the radiation environment dictate that front-end electronics of SSC calorimeters must be implemented in the form of highly integrated, radhard, analog, low noise, VLSI custom monolithic devices. Important considerations are power dissipation, choice of functions integrated on the front-end chips, and cabling requirements. An extensive level of expertise in radhard electronics exists within the industrial community, and a primary objective of this work is to bring that expertise to bear on the problems of SSC detector design. Radiation hardness measurements and requirements as well as calorimeter design will be primarily the responsibility of Argonne scientists and our Brookhaven and Vanderbilt colleagues. Radhard VLSI design and fabrication will be primarily the industrial participant's responsibility. The rapid-cycling synchrotron at Argonne will be used for radiation damage studies involving response to neutrons and charged particles, while damage from gammas will be investigated at Brookhaven. 10 refs., 6 figs., 2 tabs

  10. RESULTS OF APPLYING POLYVITAMIN COMPLEX FOR CHILDREN WITH ATOPIC DERMATITIS

    Directory of Open Access Journals (Sweden)

    N.A. Ivanova

    2007-01-01

    Full Text Available The article presents findings of applying a vitamin-and-mineral complex (VMC) for children frequently suffering from diseases and children with atopic dermatitis. It shows that usage of the VMC within a complex therapy promotes regression of subnormal vitamin provision symptoms, as well as symptoms of the core disease. This occurs against a background of increased vitamin content in the child's organism, which was confirmed by testing the content of vitamins A and E in blood. The research has demonstrated quite good tolerance of the VMC by children suffering from atopic dermatitis. Key words: children frequently suffering from diseases, atopic dermatitis, vitamins, treatment.

  11. Frequency-dependent complex modulus of the uterus: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Kiss, Miklos Z [Department of Medical Physics, University of Wisconsin, Madison, WI 53706 (United States); Hobson, Maritza A [Department of Medical Physics, University of Wisconsin, Madison, WI 53706 (United States); Varghese, Tomy [Department of Medical Physics, University of Wisconsin, Madison, WI 53706 (United States); Harter, Josephine [Department of Surgical Pathology, University of Wisconsin, Madison, WI 53706 (United States); Kliewer, Mark A [Department of Radiology, University of Wisconsin, Madison, WI 53706 (United States); Hartenbach, Ellen M [Department of Obstetrics and Gynecology, University of Wisconsin, Madison, WI 53706 (United States); Zagzebski, James A [Department of Medical Physics, University of Wisconsin, Madison, WI 53706 (United States)

    2006-08-07

    The frequency-dependent complex moduli of human uterine tissue have been characterized. Quantification of the modulus is required for developing uterine ultrasound elastography as a viable imaging modality for diagnosing and monitoring causes of abnormal uterine bleeding and enlargement, as well as assessing the integrity of uterine and cervical tissue. The complex modulus was measured in samples from hysterectomies of 24 patients ranging in age from 31 to 79 years. Measurements were done under small compressions of either 1 or 2%, at low pre-compression values (either 1 or 2%), and over a frequency range of 0.1-100 Hz. Modulus values of cervical tissue monotonically increased from approximately 30 to 90 kPa over the frequency range. Normal uterine tissue possessed modulus values over the same range, while leiomyomas, or uterine fibroids, exhibited values ranging from approximately 60 to 220 kPa.

  12. Frequency-dependent complex modulus of the uterus: preliminary results

    International Nuclear Information System (INIS)

    Kiss, Miklos Z; Hobson, Maritza A; Varghese, Tomy; Harter, Josephine; Kliewer, Mark A; Hartenbach, Ellen M; Zagzebski, James A

    2006-01-01

    The frequency-dependent complex moduli of human uterine tissue have been characterized. Quantification of the modulus is required for developing uterine ultrasound elastography as a viable imaging modality for diagnosing and monitoring causes of abnormal uterine bleeding and enlargement, as well as assessing the integrity of uterine and cervical tissue. The complex modulus was measured in samples from hysterectomies of 24 patients ranging in age from 31 to 79 years. Measurements were done under small compressions of either 1 or 2%, at low pre-compression values (either 1 or 2%), and over a frequency range of 0.1-100 Hz. Modulus values of cervical tissue monotonically increased from approximately 30 to 90 kPa over the frequency range. Normal uterine tissue possessed modulus values over the same range, while leiomyomas, or uterine fibroids, exhibited values ranging from approximately 60 to 220 kPa.

  13. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph model for the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks" that are presented in great detail and includes the source code of several of the techniques p...

  14. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    Science.gov (United States)

    2007-03-31

    Final report, 03/01/04 - 02/28/07. ... uncovered interesting new issues in our choice for representing the intensity of signals. We have just finished testing the first chip version of an echo... timing-based algorithm ('openspace') for sonar-guided navigation amidst multiple obstacles. Subject terms: neuromorphic VLSI, bat echolocation.

  15. VLSI Architectures for the Multiplication of Integers Modulo a Fermat Number

    Science.gov (United States)

    Chang, J. J.; Truong, T. K.; Reed, I. S.; Hsu, I. S.

    1984-01-01

    Multiplication is central in the implementation of Fermat number transforms and other residue number algorithms. There is a need for a good multiplication algorithm that can be realized easily on a very large scale integration (VLSI) chip. The Leibowitz multiplier is modified to realize multiplication in the ring of integers modulo a Fermat number. This new algorithm requires only a sequence of cyclic shifts and additions. The designs developed for this new multiplier are regular, simple, expandable, and, therefore, suitable for VLSI implementation.
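
    As a software illustration of the underlying arithmetic (not of the Leibowitz multiplier circuit itself): because 2^b ≡ -1 modulo the Fermat number F = 2^b + 1, both reduction and multiplication can be carried out with shifts, additions and subtractions only, as the following hedged Python sketch shows.

        # Hedged sketch: shift-and-add multiplication modulo a Fermat number
        # F = 2**b + 1, using the identity 2**b ≡ -1 (mod F) for reduction.

        def reduce_mod_fermat(x, b):
            """Reduce x modulo F = 2**b + 1 by folding high bits with a sign flip."""
            F = (1 << b) + 1
            while x >= F:
                lo = x & ((1 << b) - 1)   # x mod 2**b
                hi = x >> b               # x div 2**b
                x = lo - hi               # since hi * 2**b ≡ -hi (mod F)
                if x < 0:
                    x += F
            return x

        def mul_mod_fermat(a, c, b):
            """Double-and-add multiplication of a*c modulo F = 2**b + 1."""
            acc = 0
            while c:
                if c & 1:
                    acc = reduce_mod_fermat(acc + a, b)
                a = reduce_mod_fermat(a << 1, b)   # doubling = one left shift + fold
                c >>= 1
            return acc

        b = 16                              # F_4 = 2**16 + 1 = 65537
        F = (1 << b) + 1
        assert mul_mod_fermat(40000, 50000, b) == (40000 * 50000) % F
        print(mul_mod_fermat(40000, 50000, b))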

  16. Some Results on the Graph Theory for Complex Neutrosophic Sets

    Directory of Open Access Journals (Sweden)

    Shio Gai Quek

    2018-05-01

    Full Text Available Fuzzy graph theory plays an important role in the study of the symmetry and asymmetry properties of fuzzy graphs. With this in mind, in this paper, we introduce new neutrosophic graphs called complex neutrosophic graphs of type 1 (abbr. CNG1). We then present a matrix representation for it and study some properties of this new concept. The concept of CNG1 is an extension of the generalized fuzzy graphs of type 1 (GFG1) and generalized single-valued neutrosophic graphs of type 1 (GSVNG1). The utility of the CNG1 introduced here is demonstrated by applying it to a multi-attribute decision-making problem related to Internet server selection.

  17. VLSI and system architecture-the new development of system 5G

    Energy Technology Data Exchange (ETDEWEB)

    Sakamura, K.; Sekino, A.; Kodaka, T.; Uehara, T.; Aiso, H.

    1982-01-01

    A research and development proposal is presented for VLSI CAD systems and for a hardware environment called system 5G on which the VLSI CAD systems run. The proposed CAD systems use a hierarchically organized design language to enable design of anything from basic architectures of VLSI to VLSI mask patterns in a uniform manner. The CAD systems will eventually become intelligent CAD systems that acquire design knowledge and perform automatic design of VLSI chips when the characteristic requirements of a VLSI chip are given. System 5G will consist of superinference machines and the 5G communication network. The superinference machine will be built based on a functionally distributed architecture connecting inference machines and relational data base machines via a high-speed local network. The transfer rate of the local network will be 100 Mbps at the first stage of the project and will be improved to 1 Gbps. Remote access to the superinference machine will be possible through the 5G communication network. Access to system 5G will use the 5G network architecture protocol. Users will access system 5G using standardized 5G personal computers and 5G personal logic programming stations: very high intelligent terminals providing an instruction set that supports predicate logic and input/output facilities for audio and graphical information.

  18. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    Science.gov (United States)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm is introduced. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.

  19. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  20. N Point DCT VLSI Architecture for Emerging HEVC Standard

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmed

    2012-01-01

    Full Text Available This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4×4 up to 32×32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to gain speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into sparse submatrices in order to reduce the multiplications. Finally, multiplications are completely eliminated using the lifting scheme. The proposed architecture sustains real-time processing of a 1080p HD video codec running at 150 MHz.
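
    As a hedged illustration of the kind of even/odd (sum/difference) decomposition referred to above, the following Python sketch splits an N-point DCT-II into two half-size matrix products and verifies the result against the direct transform; the integer HEVC matrices and the lifting step that removes the remaining multiplications are not modeled here.

        # Hedged sketch: even/odd decomposition of an N-point DCT-II into two
        # half-size matrix products, verified against the direct computation.
        import numpy as np

        def dct_matrix(N):
            n = np.arange(N)
            k = np.arange(N)[:, None]
            return np.cos(np.pi * (2 * n + 1) * k / (2 * N))   # unnormalized DCT-II

        def dct_even_odd(x):
            N = len(x)
            s = x[:N // 2] + x[::-1][:N // 2]    # sums  -> even-index outputs
            d = x[:N // 2] - x[::-1][:N // 2]    # diffs -> odd-index outputs
            X = np.empty(N)
            X[0::2] = dct_matrix(N // 2) @ s      # half-size DCT of the sums
            k_odd = np.arange(1, N, 2)[:, None]
            n = np.arange(N // 2)
            X[1::2] = np.cos(np.pi * (2 * n + 1) * k_odd / (2 * N)) @ d
            return X

        x = np.random.default_rng(0).standard_normal(8)
        assert np.allclose(dct_even_odd(x), dct_matrix(8) @ x)
        print(dct_even_odd(x))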

  1. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  2. Carbon nanotube based VLSI interconnects analysis and design

    CERN Document Server

    Kaushik, Brajesh Kumar

    2015-01-01

    The brief primarily focuses on the performance analysis of CNT-based interconnects in the current research scenario. Different CNT structures are modeled on the basis of transmission line theory. Performance comparison for different CNT structures illustrates that CNTs are more promising than Cu or other materials used in global VLSI interconnects. The brief is organized into five chapters which mainly discuss: (1) an overview of the current research scenario and basics of interconnects; (2) unique crystal structures and the basics of physical properties of CNTs, and the production, purification and applications of CNTs; (3) a brief technical review, the geometry and equivalent RLC parameters for different single and bundled CNT structures; (4) a comparative analysis of crosstalk and delay for different single and bundled CNT structures; and (5) various unique mixed CNT bundle structures and their equivalent electrical models.

  3. VLSI PARTITIONING ALGORITHM WITH ADAPTIVE CONTROL PARAMETER

    Directory of Open Access Journals (Sweden)

    P. N. Filippenko

    2013-03-01

    Full Text Available The article deals with the problem of very large-scale integration circuit partitioning. A graph is selected as a mathematical model describing the integrated circuit. A modification of the ant colony optimization algorithm is presented, which is used to solve the graph partitioning problem. Ant colony optimization is an optimization method based on the principles of self-organization and other useful features of the ants' behavior. The proposed search system is based on the ant colony optimization algorithm with an improved method of initial distribution and dynamic adjustment of the search control parameters. The experimental results and performance comparison show that the proposed method of very large-scale integration circuit partitioning provides better search performance than other well-known algorithms.

  4. Global floor planning approach for VLSI design

    International Nuclear Information System (INIS)

    LaPotin, D.P.

    1986-01-01

    Within a hierarchical design environment, initial decisions regarding the partitioning and choice of module attributes greatly impact the quality of the resulting IC in terms of area and electrical performance. This dissertation presents a global floor-planning approach which allows designers to quickly explore layout issues during the initial stages of the IC design process. In contrast to previous efforts, which address the floor-planning problem from a strict module placement point of view, this approach considers floor-planning from an area planning point of view. The approach is based upon a combined min-cut and slicing paradigm, which ensures routability. To provide flexibility, modules may be specified as having a number of possible dimensions and orientations, and I/O pads as well as layout constraints are considered. A slicing-tree representation is employed, upon which a sequence of traversal operations are applied in order to obtain an area efficient layout. An in-place partitioning technique, which provides an improvement over previous min-cut and slicing-based efforts, is discussed. Global routing and module I/O pin assignment are provided for floor-plan evaluation purposes. A computer program, called Mason, has been developed which efficiently implements the approach and provides an interactive environment for designers to perform floor-planning. Performance of this program is illustrated via several industrial examples

  5. An Asynchronous Low Power and High Performance VLSI Architecture for Viterbi Decoder Implemented with Quasi Delay Insensitive Templates

    Directory of Open Access Journals (Sweden)

    T. Kalavathi Devi

    2015-01-01

    Full Text Available Convolutional codes are extensively used as Forward Error Correction (FEC) codes in digital communication systems. For decoding of convolutional codes at the receiver end, the Viterbi decoder is often the preferred choice. This decoder meets the demand for high speed and low power. At present, the design of a competent system in Very Large Scale Integration (VLSI) technology requires these VLSI parameters to be finely defined. The proposed asynchronous method focuses on reducing the power consumption of the Viterbi decoder for various constraint lengths using asynchronous modules. The asynchronous designs are based on commonly used Quasi Delay Insensitive (QDI) templates, namely, Precharge Half Buffer (PCHB) and Weak Conditioned Half Buffer (WCHB). The functionality of the proposed asynchronous design is simulated and verified using Tanner Spice (TSPICE) in 0.25 µm, 65 nm, and 180 nm technologies of the Taiwan Semiconductor Manufacturing Company (TSMC). The simulation results illustrate that the asynchronous design techniques achieve a 25.21% power reduction compared to the synchronous design and operate at a speed of 475 MHz.
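
    As a minimal software model of the hard-decision Viterbi decoding that such an architecture implements (the generators, message, and single injected error below are illustrative, and the asynchronous QDI circuit techniques are not modeled), the add-compare-select and trace-back steps might be sketched as follows.

        # Hedged sketch: hard-decision Viterbi decoding for the classic rate-1/2,
        # constraint-length-3 convolutional code with generators 7 and 5 (octal).
        G = (0b111, 0b101)           # generator polynomials
        K = 3                        # constraint length -> 4 states

        def encode(bits):
            state, out = 0, []
            for u in bits:
                reg = (u << (K - 1)) | state                 # [u, u(-1), u(-2)]
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1                             # shift in the new bit
            return out

        def viterbi_decode(symbols, n_bits):
            n_states = 1 << (K - 1)
            INF = float("inf")
            metrics = [0.0] + [INF] * (n_states - 1)         # encoder starts in state 0
            history = []
            for t in range(n_bits):
                rx = symbols[2 * t: 2 * t + 2]
                new_metrics = [INF] * n_states
                prev_choice = [None] * n_states
                for state in range(n_states):                # predecessor state
                    for u in (0, 1):                         # hypothesized input bit
                        reg = (u << (K - 1)) | state
                        expect = [bin(reg & g).count("1") & 1 for g in G]
                        branch = sum(a != b for a, b in zip(rx, expect))
                        nxt = reg >> 1
                        m = metrics[state] + branch          # add
                        if m < new_metrics[nxt]:             # compare-select
                            new_metrics[nxt] = m
                            prev_choice[nxt] = (state, u)
                metrics = new_metrics
                history.append(prev_choice)
            state = metrics.index(min(metrics))              # best final state
            decoded = []
            for t in range(n_bits - 1, -1, -1):              # trace back
                state, u = history[t][state]
                decoded.append(u)
            return decoded[::-1]

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        rx = encode(msg)
        rx[3] ^= 1                                           # inject one channel error
        assert viterbi_decode(rx, len(msg)) == msg
        print(viterbi_decode(rx, len(msg)))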

  6. Algorithms and Complexity Results for Genome Mapping Problems.

    Science.gov (United States)

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  7. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
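
    One plausible software reading of the peak-search feature extraction described above is sketched below: split the spike at its peak sample and use the area of each portion as a two-dimensional feature. The use of absolute amplitudes and the synthetic waveforms are assumptions for illustration; the segment-wise merging used for hardware acceleration is not shown.

        # Hedged sketch: peak-split area features for spike sorting.
        import numpy as np

        def peak_split_features(spike):
            peak = int(np.argmax(np.abs(spike)))                 # peak search
            left = float(np.sum(np.abs(spike[:peak + 1])))       # area before the peak
            right = float(np.sum(np.abs(spike[peak:])))          # area after the peak
            return left, right

        # Two synthetic spike shapes (illustrative, not recorded data)
        t = np.arange(32)
        spike_a = np.exp(-((t - 10) / 3.0) ** 2) - 0.3 * np.exp(-((t - 18) / 5.0) ** 2)
        spike_b = np.exp(-((t - 20) / 3.0) ** 2) - 0.3 * np.exp(-((t - 8) / 5.0) ** 2)
        print(peak_split_features(spike_a), peak_split_features(spike_b))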

  8. VLSI design of an RSA encryption/decryption chip using systolic array based architecture

    Science.gov (United States)

    Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi

    2016-09-01

    This article presents the VLSI design of a configurable RSA public-key cryptosystem supporting 512-bit, 1024-bit and 2048-bit keys based on the Montgomery algorithm, achieving clock-cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify computational complexity, which, together with the systolic array concept for electric circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely input/output modules, a registers module, an arithmetic module and a control module. We applied the concept of the systolic array to design the RSA encryption/decryption chip using the VHDL hardware language and verified it using the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm² (4.58 × 4.58 mm² with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
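
    As a hedged software model of the combination described above, binary (square-and-multiply) exponentiation built on Montgomery multiplication can be sketched as follows; this is word-level Python only, and the systolic-array mapping and the chip's key sizes are not represented.

        # Hedged sketch: modular exponentiation via Montgomery multiplication (REDC).
        # Requires Python 3.8+ for pow(n, -1, R).

        def montgomery_setup(n, k):
            R = 1 << k                        # R = 2**k > n, gcd(R, n) = 1 (n odd)
            n_prime = (-pow(n, -1, R)) % R    # n' = -n^(-1) mod R
            return R, n_prime

        def mont_mul(a, b, n, R, n_prime, k):
            """REDC(a*b): returns a*b*R^-1 mod n."""
            t = a * b
            m = (t * n_prime) & (R - 1)       # m = t*n' mod R (mask instead of divide)
            u = (t + m * n) >> k              # exact division by R
            return u - n if u >= n else u

        def mont_pow(base, exp, n, k=None):
            k = k or n.bit_length()
            R, n_prime = montgomery_setup(n, k)
            x = R % n                         # 1 in Montgomery form
            a = (base * R) % n                # base in Montgomery form
            for bit in bin(exp)[2:]:          # MSB-first square-and-multiply
                x = mont_mul(x, x, n, R, n_prime, k)
                if bit == "1":
                    x = mont_mul(x, a, n, R, n_prime, k)
            return mont_mul(x, 1, n, R, n_prime, k)   # leave Montgomery form

        n = 0xC96B643F                        # small odd modulus for illustration only
        assert mont_pow(12345, 65537, n) == pow(12345, 65537, n)
        print(hex(mont_pow(12345, 65537, n)))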

  9. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  10. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Full Text Available Data compression plays a vital role in multimedia devices to present information in a succinct frame. Initially, the DCT structure was used for image compression, as it has lower complexity and is area efficient. The 2D DCT also provides reasonable data compression, but from an implementation standpoint it requires more multipliers and adders, thus leading to larger area and higher power consumption. Taking all of this into account, this paper deals with a VLSI architecture for image compression using a ROM-free DA-based DCT (Discrete Cosine Transform) structure. This technique provides high throughput and is most suitable for real-time implementation. To achieve this, the image matrix is subdivided into odd and even terms, and the multiplication functions are replaced by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-wise image quality, which determines new trade-off levels compared to previous techniques. Overall, the proposed architecture yields reduced memory, low power consumption and high throughput. MATLAB is used for reading the input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II to synthesize and obtain details about power and area.
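
    As a hedged bit-level model of the Kogge-Stone parallel-prefix addition mentioned above (logic only, with an illustrative 16-bit width; the DA-based DCT datapath is not modeled), the generate/propagate prefix stages might be sketched as follows.

        # Hedged sketch: Kogge-Stone parallel-prefix adder, modeled at the bit level
        # and checked against ordinary integer addition.

        def kogge_stone_add(a, b, width=16):
            g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]   # generate
            p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]   # propagate
            G, P = g[:], p[:]
            d = 1
            while d < width:                        # log2(width) prefix stages
                G_new, P_new = G[:], P[:]
                for i in range(d, width):
                    G_new[i] = G[i] | (P[i] & G[i - d])
                    P_new[i] = P[i] & P[i - d]
                G, P = G_new, P_new
                d <<= 1
            # carry into bit i is the group generate of bits 0..i-1 (carry-in = 0)
            carry = [0] + G[:width - 1] + [G[width - 1]]
            s = sum((p[i] ^ carry[i]) << i for i in range(width))
            return s | (carry[width] << width)      # include carry-out

        for x, y in [(1234, 4321), (65535, 1), (40000, 40000)]:
            assert kogge_stone_add(x, y) == x + y
        print(kogge_stone_add(40000, 40000))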

  11. Varfarin in the complex treatment of antiphospholipid syndrome: preliminary results

    Directory of Open Access Journals (Sweden)

    T M Reshetnyak

    2003-01-01

    Full Text Available Objective. To assess the efficacy and tolerance of varfarin in the prophylaxis and therapy of thrombotic complications in patients with antiphospholipid syndrome (APS). Methods. 20 pts with APS (5 male and 15 female) received varfarin during a year. 8 of them had primary APS (PAPS) and 12 had systemic lupus erythematosus with APS (SLE+APS). 2 other pts (1 with SLE+APS and 1 with PAPS) received varfarin during the last 4 years. None of the 9 pts with PAPS received corticosteroids (CS). In SLE+APS pts the CS dose varied from 4 to 20 mg/day and was not increased during follow up. During the study prothrombin time (PT) was examined with thromboplastin (manufactured by Renam, international sensitivity index 1,2) and the international normalized ratio (INR) was determined. Depending on the treatment scheme, APS pts were divided into 3 groups. Group 1 included 8 pts with INR<2,0, group 2 - 7 pts with INR >3,0, and group 3 - 7 pts with INR<2,0 receiving as additional treatment thrombo ASS 100 mg/day and vasonit from 600 to 1200 mg/day. Results. Two pts with INR = 1,8 had thrombosis recurrence (due to leg thrombophlebitis). There were no recurrences in the other groups. 2 of 22 pts had "large" bleedings. "Small" bleeding episodes were noted in 7 of 22 pts; largely these were subcutaneous bleedings (in 4 pts no more than 5 cm in size). Two pts receiving varfarin with INR 1,8 and 2,4 had renal colic. Conclusion. Our preliminary results support the inclusion of varfarin in the treatment of pts with APS and thrombosis, but an intensive anticoagulant effect is not always desired.

  12. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy modern computer graphics demands, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independence characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  13. Monolithic active pixel sensors (MAPS) in a VLSI CMOS technology

    CERN Document Server

    Turchetta, R; Manolopoulos, S; Tyndel, M; Allport, P P; Bates, R; O'Shea, V; Hall, G; Raymond, M

    2003-01-01

    Monolithic Active Pixel Sensors (MAPS) designed in a standard VLSI CMOS technology have recently been proposed as a compact pixel detector for the detection of high-energy charged particles in vertex/tracking applications. MAPS, also named CMOS sensors, are already extensively used in visible light applications. With respect to other competing imaging technologies, CMOS sensors have several potential advantages in terms of low cost, low power, lower noise at higher speed, random access of pixels which allows windowing of regions of interest, and the ability to integrate several functions on the same chip. This leads altogether to the concept of the 'camera-on-a-chip'. In this paper, we review the use of CMOS sensors for particle physics and we analyse their performance in terms of efficiency (fill factor), signal generation, noise, readout speed and sensor area. In most high-energy physics applications, data reduction is needed in the sensor at an early stage of the data processing before transfer of the data to ta...

  14. A second generation 50 Mbps VLSI level zero processing system prototype

    Science.gov (United States)

    Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby

    1994-01-01

    Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 Megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-specific Integrated Circuits (ASIC) and a Mass Storage System (MSS) based on the High-performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second generation LZPS prototype based upon VLSI technologies.

  15. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    Science.gov (United States)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45-times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the conventional best result, it still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.

  16. Synthesis of on-chip control circuits for mVLSI biochips

    DEFF Research Database (Denmark)

    Potluri, Seetal; Schneider, Alexander Rüdiger; Hørslev-Petersen, Martin

    2017-01-01

    them to laboratory environments. To address this issue, researchers have proposed methods to reduce the number of off-chip pressure sources, through integration of on-chip pneumatic control logic circuits fabricated using three-layer monolithic membrane valve technology. Traditionally, mVLSI biochip ... -chip control circuit design and (iii) the integration of on-chip control in the placement and routing design tasks. In this paper we present a design methodology for logic synthesis and physical synthesis of mVLSI biochips that use on-chip control. We show how the proposed methodology can be successfully applied to generate biochip layouts with integrated on-chip pneumatic control.

  17. Digital VLSI design with Verilog a textbook from Silicon Valley Technical Institute

    CERN Document Server

    Williams, John

    2008-01-01

    This unique textbook is structured as a step-by-step course of study along the lines of a VLSI IC design project. In a nominal schedule of 12 weeks, two days and about 10 hours per week, the entire Verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000-transistor, full-duplex serializer-deserializer, including synthesizable PLLs. Digital VLSI Design With Verilog is all an engineer needs for in-depth understanding of the Verilog language: syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided on the

  18. Emergent auditory feature tuning in a real-time neuromorphic VLSI system

    Directory of Open Access Journals (Sweden)

    Sadique eSheik

    2012-02-01

    Full Text Available Many sounds of ecological importance, such as communication calls, are characterised by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamocortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectrotemporal patterns. The availability of hardware on which the model can be implemented makes this a significant step towards the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  19. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    Science.gov (United States)

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3 mm × 3 mm microchip consumes 1.29 mW of power, making it promising for applications including neuromorphic modeling and neural prostheses.
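
    A hedged software sketch of a rate-based first-order kinetic synapse of the kind described above is given below, integrated with forward Euler; the parameter values are illustrative placeholders, not the chip's programmed conductances or the exact NMDA/non-NMDA kinetics.

        # Hedged sketch: first-order kinetic synapse, ds/dt = alpha*[T]*(1-s) - beta*s,
        # with synaptic current I_syn = g_max * s * (V_post - E_rev).
        import numpy as np

        def simulate_synapse(spike_times, t_end=0.2, dt=1e-4,
                             alpha=5e3, beta=100.0, T_pulse=1e-3, Tmax=1.0,
                             g_max=1e-9, E_rev=0.0, V_post=-65e-3):
            n = int(t_end / dt)
            s = np.zeros(n)
            I = np.zeros(n)
            for i in range(1, n):
                t = i * dt
                # neurotransmitter [T] is a brief pulse after each presynaptic spike
                T = Tmax if any(ts <= t < ts + T_pulse for ts in spike_times) else 0.0
                ds = alpha * T * (1.0 - s[i - 1]) - beta * s[i - 1]
                s[i] = s[i - 1] + dt * ds
                I[i] = g_max * s[i] * (V_post - E_rev)
            return s, I

        s, I = simulate_synapse(spike_times=[0.02, 0.05, 0.08])
        print(f"peak gating variable s = {s.max():.3f}, peak |I_syn| = {abs(I).max():.2e} A")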

  20. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.

    Science.gov (United States)

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta

    2012-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  1. CASTOR a VLSI CMOS mixed analog-digital circuit for low noise multichannel counting applications

    International Nuclear Information System (INIS)

    Comes, G.; Loddo, F.; Hu, Y.; Kaplon, J.; Ly, F.; Turchetta, R.; Bonvicini, V.; Vacchi, A.

    1996-01-01

    In this paper we present the design and first experimental results of a VLSI mixed analog-digital 1.2 μm CMOS circuit (CASTOR) for multichannel radiation detector applications demanding low-noise amplification and counting of radiation pulses. This circuit is meant to be connected to pixel-like detectors. Imaging can be obtained by counting the number of hits in each pixel during a user-controlled exposure time. Each channel of the circuit features an analog and a digital part. In the former, a charge preamplifier is followed by a CR-RC shaper with an output buffer and a threshold discriminator. In the digital part, a 16-bit counter is present together with some control logic. The readout of the counters is done serially on a common tri-state output. Daisy-chaining is possible. A 4-channel prototype has been built. This prototype has been optimised for use in the digital radiography Syrmep experiment at the Elettra synchrotron machine in Trieste (Italy): its main design parameters are a shaping time of about 850 ns, a gain of 190 mV/fC and ENC (e⁻ rms) = 60 + 17·C (pF). The counting rate per channel, limited by the analog part, can be as high as about 200 kHz. Characterisation of the circuit and first tests with silicon microstrip detectors are presented. They show that the circuit works according to design specifications and can be used for imaging applications. (orig.)
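
    Reading the quoted noise figure as ENC (e⁻ rms) = 60 + 17·C, with the detector capacitance C in pF (an interpretation of the formula above, not an additional measurement), the expected noise can be evaluated directly:

        # Hedged sketch: evaluating ENC = 60 + 17*C (electrons rms, C in pF),
        # under the interpretation of the formula stated in the lead-in.

        def enc_electrons_rms(c_pf):
            return 60.0 + 17.0 * c_pf

        for c in (0.5, 1.0, 2.0, 5.0):
            print(f"C = {c:4.1f} pF -> ENC ≈ {enc_electrons_rms(c):6.1f} e- rms")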

  2. A novel VLSI processor for high-rate, high resolution spectroscopy

    CERN Document Server

    Pullia, Antonio; Gatti, E; Longoni, A; Buttler, W

    2000-01-01

    A novel time-variant VLSI shaper amplifier, suitable for multi-anode Silicon Drift Detectors or other multi-element solid-state X-ray detection systems, is proposed. The new read-out scheme has been conceived for demanding applications with synchrotron light sources, such as X-ray holography or EXAFS, where both high count rates and high energy resolutions are required. The circuit is of the linear time-variant class, accepts randomly distributed events and features: a finite-width (1-10 μs) quasi-optimal weight function, an ultra-low-level energy discrimination (approx. 150 eV), and full compatibility with monolithic integration in CMOS technology. Its impulse response has a staircase-like shape, but the weight function (which is in general different from the impulse response in time-variant systems) is quasi-trapezoidal. The operation principles of the new scheme as well as the first experimental results obtained with a prototype of the circuit are presented and discussed.

  3. High-energy heavy ion testing of VLSI devices for single event ...

    Indian Academy of Sciences (India)

    Unknown

    The paper describes the high-energy heavy ion radiation testing of VLSI devices for single event upset (SEU) ... The experimental set up employed to produce a low flux of heavy ions viz. silicon ... through which they pass, leaving behind a wake of elec- ... for use in Bus Management Unit (BMU) and bulk CMOS ... was scheduled.

  4. Implementation of a VLSI Level Zero Processing system utilizing the functional component approach

    Science.gov (United States)

    Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.

    1991-01-01

    A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.

  5. An area-efficient path memory structure for VLSI Implementation of high speed Viterbi decoders

    DEFF Research Database (Denmark)

    Paaske, Erik; Pedersen, Steen; Sparsø, Jens

    1991-01-01

    Path storage and selection methods for Viterbi decoders are investigated with special emphasis on VLSI implementations. Two well-known algorithms, the register exchange algorithm (REA) and the trace-back algorithm (TBA), are considered. The REA requires the smallest number of storage elements...

  6. VLSI top-down design based on the separation of hierarchies

    NARCIS (Netherlands)

    Spaanenburg, L.; Broekema, A.; Leenstra, J.; Huys, C.

    1986-01-01

    Despite the presence of structure, interactions between the three views on VLSI design still lead to lengthy iterations. By separating the hierarchies for the respective views, the interactions are reduced. This separated hierarchy allows top-down design with functional abstractions as exemplified

  7. CMOS VLSI Active-Pixel Sensor for Tracking

    Science.gov (United States)

    Pain, Bedabrata; Sun, Chao; Yang, Guang; Heynssens, Julie

    2004-01-01

    An architecture for a proposed active-pixel sensor (APS) and a design to implement the architecture in a complementary metal oxide semiconductor (CMOS) very-large-scale integrated (VLSI) circuit provide for some advanced features that are expected to be especially desirable for tracking pointlike features of stars. The architecture would also make this APS suitable for robotic-vision and general pointing and tracking applications. CMOS imagers in general are well suited for pointing and tracking because they can be configured for random access to selected pixels and to provide readout from windows of interest within their fields of view. However, until now, the architectures of CMOS imagers have not supported multiwindow operation or low-noise data collection. Moreover, smearing and motion artifacts in collected images have made prior CMOS imagers unsuitable for tracking applications. The proposed CMOS imager (see figure) would include an array of 1,024 by 1,024 pixels containing high-performance photodiode-based APS circuitry. The pixel pitch would be 9 μm. The operations of the pixel circuits would be sequenced and otherwise controlled by an on-chip timing and control block, which would enable the collection of image data, during a single frame period, from either the full frame (that is, all 1,024 × 1,024 pixels) or from within as many as 8 different arbitrarily placed windows as large as 8 by 8 pixels each. A typical prior CMOS APS operates in a row-at-a-time (rolling-shutter) readout mode, which gives rise to exposure skew. In contrast, the proposed APS would operate in a sample-first/read-later mode, suppressing rolling-shutter effects. In this mode, the analog readout signals from the pixels corresponding to the windows of interest (which windows, in the star-tracking application, would presumably contain guide stars) would be sampled rapidly by routing them through a programmable diagonal switch array to an on-chip parallel analog memory array. The

  8. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  9. International Conference on VLSI, Communication, Advanced Devices, Signals & Systems and Networking

    CERN Document Server

    Shirur, Yasha; Prasad, Rekha

    2013-01-01

    This book is a collection of papers presented by renowned researchers, keynote speakers and academicians in the International Conference on VLSI, Communication, Analog Designs, Signals and Systems, and Networking (VCASAN-2013), organized by B.N.M. Institute of Technology, Bangalore, India during July 17-19, 2013. The book provides global trends in cutting-edge technologies in electronics and communication engineering. The content of the book is useful to engineers, researchers and academicians as well as industry professionals.

  10. Computer-aided design of microfluidic very large scale integration (mVLSI) biochips design automation, testing, and design-for-testability

    CERN Document Server

    Hu, Kai; Ho, Tsung-Yi

    2017-01-01

    This book provides a comprehensive overview of flow-based, microfluidic VLSI. The authors describe and solve in a comprehensive and holistic manner practical challenges such as control synthesis, wash optimization, design for testability, and diagnosis of modern flow-based microfluidic biochips. They introduce practical solutions, based on rigorous optimization and formal models. The technical contributions presented in this book will not only shorten the product development cycle, but also accelerate the adoption and further development of modern flow-based microfluidic biochips, by facilitating the full exploitation of design complexities that are possible with current fabrication techniques. Offers the first practical problem formulation for automated control-layer design in flow-based microfluidic biochips and provides a systematic approach for solving this problem; Introduces a wash-optimization method for cross-contamination removal; Presents a design-for-testability (DfT) technique that can achieve 100...

  11. How to build VLSI-efficient neural chips

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-02-01

    This paper presents several upper and lower bounds on the number of bits required for solving a classification problem, as well as ways in which these bounds can be used to efficiently build neural network chips. The focus is on complexity aspects pertaining to neural networks: (1) size complexity and depth (size) tradeoffs, and (2) precision of weights and thresholds as well as limited interconnectivity. They show that difficult problems, with exponential growth in either space (precision and size) and/or time (learning and depth), arise when using neural networks for solving general classes of problems (particular cases may enjoy better performance). The bounds on the number of bits required for solving a classification problem represent the first step of a general class of constructive algorithms, by showing how the quantization of the input space can be done in O(m²n) steps. Here m is the number of examples, while n is the number of dimensions. The second step of the algorithm finds its roots in the implementation of a class of Boolean functions using threshold gates. It is substantiated by mathematical proofs for the size O(mn/Δ) and the depth O[log(mn)/log Δ] of the resulting network (here Δ is the maximum fan-in). Using the fan-in as a parameter, a full class of solutions can be designed. The third step of the algorithm represents a reduction of the size and an increase of its generalization capabilities. Extensions using analogue comparisons allow for real inputs and increase the generalization capabilities at the expense of longer training times. Finally, several solutions which can lower the size of the resulting neural network are detailed. The interesting aspect is that they are obtained for limited, or even constant, fan-ins. In support of these claims many simulations have been performed and are called upon.
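
    To make the quoted bounds concrete, the short sketch below evaluates the three expressions O(m²n), O(mn/Δ) and O[log(mn)/log Δ] for an assumed example problem. The constant factors are set to one, so the figures are order-of-magnitude illustrations only, and the choice of m, n and Δ is an assumption.

```python
# Numeric illustration of the asymptotic bounds quoted above, with constants set to 1.
import math

m, n, delta = 10_000, 32, 8                          # illustrative assumptions
quantization_steps = m**2 * n                        # O(m^2 n) input-space quantization
network_size = m * n / delta                         # O(mn / Delta) threshold gates
network_depth = math.log(m * n) / math.log(delta)    # O(log(mn) / log(Delta)) layers

print(f"quantization steps ~ {quantization_steps:.2e}")
print(f"size ~ {network_size:.0f} gates,  depth ~ {network_depth:.1f} layers")
```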

  12. Hand-assisted Approach as a Model to Teach Complex Laparoscopic Hepatectomies: Preliminary Results.

    Science.gov (United States)

    Makdissi, Fabio F; Jeismann, Vagner B; Kruger, Jaime A P; Coelho, Fabricio F; Ribeiro-Junior, Ulysses; Cecconello, Ivan; Herman, Paulo

    2017-08-01

    Currently, there are few models available to teach complex liver resections by laparoscopy. The aim of this study is to present a hand-assisted technique to teach complex laparoscopic hepatectomies to fellows in liver surgery. A laparoscopic hand-assisted approach for resections of liver lesions located in posterosuperior segments (7, 6/7, 7/8, 8) was performed by the trainees with guidance and intermittent intervention of a senior surgeon. The following data were evaluated: (1) percentage of time that the senior surgeon took over as main surgeon, (2) need for the senior surgeon to finish the procedure, (3) necessity of conversion, (4) bleeding with hemodynamic instability, (5) need for transfusion, and (6) oncological surgical margins. In total, 12 cases of complex laparoscopic liver resection were performed by the trainees. All cases included deep lesions situated in liver segments 7 or 8. Senior surgeon intervention occurred during a mean of 20% of the total surgical time (range, 0% to 50%). A senior intervention >20% was necessary in 2 cases. There was no need for conversion or reoperation. Neither major bleeding nor complications resulted from the teaching program. All surgical margins were clear. This preliminary report shows that hand-assistance is a safe way to teach complex liver resections without compromising patient safety or oncological results. More cases are still necessary to draw definitive conclusions about this teaching method.

  13. Treatment of complex PTSD: results of the ISTSS expert clinician survey on best practices.

    Science.gov (United States)

    Cloitre, Marylene; Courtois, Christine A; Charuvastra, Anthony; Carapezza, Richard; Stolbach, Bradley C; Green, Bonnie L

    2011-12-01

    This study provides a summary of the results of an expert opinion survey initiated by the International Society for Traumatic Stress Studies Complex Trauma Task Force regarding best practices for the treatment of complex posttraumatic stress disorder (PTSD). Ratings from a mail-in survey from 25 complex PTSD experts and 25 classic PTSD experts regarding the most appropriate treatment approaches and interventions for complex PTSD were examined for areas of consensus and disagreement. Experts agreed on several aspects of treatment, with 84% endorsing a phase-based or sequenced therapy as the most appropriate treatment approach with interventions tailored to specific symptom sets. First-line interventions matched to specific symptoms included emotion regulation strategies, narration of trauma memory, cognitive restructuring, anxiety and stress management, and interpersonal skills. Meditation and mindfulness interventions were frequently identified as an effective second-line approach for emotional, attentional, and behavioral (e.g., aggression) disturbances. Agreement was not obtained on either the expected course of improvement or on duration of treatment. The survey results provide a strong rationale for conducting research focusing on the relative merits of traditional trauma-focused therapies and sequenced multicomponent approaches applied to different patient populations with a range of symptom profiles. Sustained symptom monitoring during the course of treatment and during extended follow-up would advance knowledge about both the speed and durability of treatment effects. Copyright © 2011 International Society for Traumatic Stress Studies.

  14. The Study for Results of Complex Cystic Breast Masses by Biopsy on Ultrasound

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hye Kyoung [Dept. of Radiology, Yangji General Hospital, Kwangju (Korea, Republic of); Dong, Kyung Rae [Dept. of Radiological Technology, Gwangju Health College, Kwangju (Korea, Republic of)

    2008-06-15

    We examined the role of ultrasonography by analyzing the results of tissue biopsies of complex cystic masses performed under the guidance of breast US. This study was performed on a group of 178 patients whose breast US indicated complex cystic masses, out of 342 patients who were definitively diagnosed by tissue biopsy and surgery in our hospital from June 30th, 2003 to June 30th, 2007. The evaluation of surrounding tissues, calcification, and the distribution of blood flow was excluded from the analysis; an ultrasound system (Logic 200, GE) and a core biopsy gun (Kimal corp., K7/MBD23) were used in this study. The biopsy results of the 178 subjects showed FCC (fibrocystic change) (n=56 : 31.4%), Fibrosis (n=41 : 23.0%), Fibroadenoma (n=20 : 11.2%), Epithelial hyperplasia (n=17 : 9.6%), Carcinoma (n=15 : 8.4%), Fibroadipose (n=8 : 4.5%), Sclerosing adenosis (n=7 : 3.9%), Duct ectasia (n=5 : 2.8%), Papilloma (n=5 : 2.8%), Fat necrosis (n=1 : 0.6%), Hemangioma (n=1 : 0.6%), Abscess (n=1 : 0.6%), and Dystrophic calcification (n=1 : 0.6%). Carcinoma accounted for 8.4% of the complex cystic masses biopsied under US guidance. Most of the masses were benign, and only epithelial hyperplasia (9.6%), which has a high rate of progression into malignant tumors, additionally indicated a risk of malignancy; most masses fell within the spectrum of fibrocystic nodules. Even though these results were confirmed, further studies are required. Accordingly, a nodule that cannot be characterized with certainty by US should undergo tissue biopsy; if this is difficult because of the patient or other reasons, re-examination in three months is required, and systematic ultrasonographic evaluation should be well understood in order to conduct more careful and specific tests.

  15. The Study for Results of Complex Cystic Breast Masses by Biopsy on Ultrasound

    International Nuclear Information System (INIS)

    Kang, Hye Kyoung; Dong, Kyung Rae

    2008-01-01

    We examined the role of ultrasonography by analyzing the results of tissue biopsies of complex cystic masses performed under the guidance of breast US. This study was performed on a group of 178 patients whose breast US indicated complex cystic masses, out of 342 patients who were definitively diagnosed by tissue biopsy and surgery in our hospital from June 30th, 2003 to June 30th, 2007. The evaluation of surrounding tissues, calcification, and the distribution of blood flow was excluded from the analysis; an ultrasound system (Logic 200, GE) and a core biopsy gun (Kimal corp., K7/MBD23) were used in this study. The biopsy results of the 178 subjects showed FCC (fibrocystic change) (n=56 : 31.4%), Fibrosis (n=41 : 23.0%), Fibroadenoma (n=20 : 11.2%), Epithelial hyperplasia (n=17 : 9.6%), Carcinoma (n=15 : 8.4%), Fibroadipose (n=8 : 4.5%), Sclerosing adenosis (n=7 : 3.9%), Duct ectasia (n=5 : 2.8%), Papilloma (n=5 : 2.8%), Fat necrosis (n=1 : 0.6%), Hemangioma (n=1 : 0.6%), Abscess (n=1 : 0.6%), and Dystrophic calcification (n=1 : 0.6%). Carcinoma accounted for 8.4% of the complex cystic masses biopsied under US guidance. Most of the masses were benign, and only epithelial hyperplasia (9.6%), which has a high rate of progression into malignant tumors, additionally indicated a risk of malignancy; most masses fell within the spectrum of fibrocystic nodules. Even though these results were confirmed, further studies are required. Accordingly, a nodule that cannot be characterized with certainty by US should undergo tissue biopsy; if this is difficult because of the patient or other reasons, re-examination in three months is required, and systematic ultrasonographic evaluation should be well understood in order to conduct more careful and specific tests.

  16. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    Science.gov (United States)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

    This report presents results of a DARPA/MTO Composite CAD Project aimed to develop a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for design of biomicrofluidics devices and integrated systems. The developed tool would provide high fidelity 3-D multiphysics modeling capability, 1-D fluidic circuits modeling, and SPICE interface for system level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  17. New domain for image analysis: VLSI circuits testing, with Romuald, specialized in parallel image processing

    Energy Technology Data Exchange (ETDEWEB)

    Rubat Du Merac, C; Jutier, P; Laurent, J; Courtois, B

    1983-07-01

    This paper describes some aspects of specifying, designing and evaluating a specialized machine, Romuald, for the capture, coding, and processing of video and scanning electron microscope (SEM) pictures. First the authors present the functional organization of the processing unit of Romuald and its hardware, giving details of its behaviour. Then they study the capture and display unit which, thanks to its flexibility, enables SEM image coding. Finally, they describe an application which is now being developed in their laboratory: testing VLSI circuits with new methods: SEM + voltage contrast and image processing. 15 references.

  18. VLSI implementation of flexible architecture for decision tree classification in data mining

    Science.gov (United States)

    Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak

    2017-07-01

    Data mining algorithms have become vital to researchers in the science, engineering, medicine, business, search and security domains. In recent years, there has been a tremendous rise in the size of the data being collected and analyzed. Classification is one of the main difficulties faced in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for decision tree classification in data mining using the C4.5 algorithm.
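
    As an illustration of the per-node computation at the heart of C4.5-based decision tree induction referenced above, the hedged sketch below computes class entropy, information gain and gain ratio for a candidate attribute split. The tiny dataset and attribute names are assumptions made purely for the example and are not related to the paper's architecture.

```python
# Per-node C4.5-style split evaluation: entropy, information gain, gain ratio.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr_index):
    parent = entropy(labels)
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(lab)
    n = len(labels)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())       # conditional entropy
    split_info = -sum(len(g) / n * math.log2(len(g) / n) for g in groups.values())
    gain = parent - cond
    return gain / split_info if split_info > 0 else 0.0

# Toy data (assumed): (outlook, windy) -> play?
rows = [("sunny", 0), ("sunny", 1), ("overcast", 0), ("rain", 0), ("rain", 1), ("overcast", 1)]
labels = ["no", "no", "yes", "yes", "no", "yes"]
for i, name in enumerate(("outlook", "windy")):
    print(name, "gain ratio:", round(gain_ratio(rows, labels, i), 3))
```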

  19. Positron emission tomographic images and expectation maximization: A VLSI architecture for multiple iterations per second

    International Nuclear Information System (INIS)

    Jones, W.F.; Byars, L.G.; Casey, M.E.

    1988-01-01

    A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for positron emission tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high resolution (256 × 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform forward and back projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. The EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. The projected cost of the system is estimated to be small in comparison to the cost of current PET scanners.
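
    The forward- and back-projection arrays described above implement the inner loop of the standard ML-EM update. The following minimal NumPy sketch shows that update in software; the toy system matrix, sizes and iteration count are illustrative assumptions, not the proposed hardware's geometry.

```python
# Minimal ML-EM sketch for emission tomography (the update the VLSI arrays accelerate).
# The toy system matrix A and data y are random placeholders, not a real scanner model.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_pix = 64, 32                     # detector bins, image pixels (illustrative)
A = rng.random((n_bins, n_pix))            # system matrix: forward projection operator
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true * 50)           # noisy projection data

x = np.ones(n_pix)                         # uniform initial image
sens = A.T @ np.ones(n_bins)               # sensitivity image (back projection of ones)
for _ in range(20):                        # one loop pass = one EM iteration
    ratio = y / np.maximum(A @ x, 1e-12)   # forward project, compare with data
    x *= (A.T @ ratio) / sens              # back project, apply multiplicative update

print("relative error:", np.linalg.norm(x / x.sum() - x_true / x_true.sum()))
```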

  20. Daily radiotoxicological supervision of personnel at the Pierrelatte industrial complex. Methods and results

    International Nuclear Information System (INIS)

    Chalabreysse, Jacques.

    1978-05-01

    A 13-year experience gained from daily radiotoxicological supervision of personnel at the PIERRELATTE industrial complex is presented. This study is divided into two parts. Part one is theoretical: a bibliographical synthesis of scattered documents and publications, so that a homogeneous survey of the literature on the subject is available. Part two reviews the experience gained in the professional environment: laboratory measurements and analyses (development of methods and daily applications); mathematical formulae to answer the first questions which arise when dealing with an individual liable to be contaminated; results obtained at PIERRELATTE [fr

  1. Purification of Ovine Respiratory Complex I Results in a Highly Active and Stable Preparation*

    Science.gov (United States)

    Letts, James A.; Degliesposti, Gianluca; Fiedorczuk, Karol; Skehel, Mark; Sazanov, Leonid A.

    2016-01-01

    NADH-ubiquinone oxidoreductase (complex I) is the largest (∼1 MDa) and the least characterized complex of the mitochondrial electron transport chain. Because of the ease of sample availability, previous work has focused almost exclusively on bovine complex I. However, only medium resolution structural analyses of this complex have been reported. Working with other mammalian complex I homologues is a potential approach for overcoming these limitations. Due to the inherent difficulty of expressing large membrane protein complexes, screening of complex I homologues is limited to large mammals reared for human consumption. The high sequence identity among these available sources may preclude the benefits of screening. Here, we report the characterization of complex I purified from Ovis aries (ovine) heart mitochondria. All 44 unique subunits of the intact complex were identified by mass spectrometry. We identified differences in the subunit composition of subcomplexes of ovine complex I as compared with bovine, suggesting differential stability of inter-subunit interactions within the complex. Furthermore, the 42-kDa subunit, which is easily lost from the bovine enzyme, remains tightly bound to ovine complex I. Additionally, we developed a novel purification protocol for highly active and stable mitochondrial complex I using the branched-chain detergent lauryl maltose neopentyl glycol. Our data demonstrate that, although closely related, significant differences exist between the biochemical properties of complex I prepared from ovine and bovine mitochondria and that ovine complex I represents a suitable alternative target for further structural studies. PMID:27672209

  2. A novel low-voltage low-power analogue VLSI implementation of neural networks with on-chip back-propagation learning

    Science.gov (United States)

    Carrasco, Manuel; Garde, Andres; Murillo, Pilar; Serrano, Luis

    2005-06-01

    In this paper a novel design and implementation of a VLSI analogue neural net based on a Multi-Layer Perceptron (MLP) with an on-chip Back Propagation (BP) learning algorithm, suitable for solving classification problems, is described. In order to implement a general and programmable analogue architecture, the design has been carried out in a hierarchical way: the net is divided into synapse blocks and neuron blocks, providing an easy method for analysis. These blocks basically consist of simple cells, which are mainly the activation functions (NAF), their derivatives (DNAF), multipliers and weight-update circuits. The analogue design is based on current-mode translinear techniques using MOS transistors working in the weak-inversion region in order to reduce both the supply voltage and the power consumption. Moreover, with the purpose of minimizing noise, offset and even-order distortion, the topologies are fully differential and balanced. The circuit, named ANNE (Analogue Neural NEt), has been prototyped and characterized as a proof of concept in CMOS AMI-0.5A technology, occupying a total area of 2.7 mm². The chip includes two versions of neural nets with the on-chip BP learning algorithm, which are respectively 2-1 and 2-2-1 implementations. The proposed nets have been experimentally tested using supply voltages from 2.5 V down to 1.8 V, which is suitable for single-cell lithium-ion battery supply applications. Experimental results of both implementations included in ANNE exhibit good performance in solving classification problems. These results have been compared with other analogue VLSI implementations of neural nets published in the literature, demonstrating that our proposal is very efficient in terms of occupied area and power consumption.
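
    For readers unfamiliar with the 2-2-1 topology mentioned above, the following software sketch trains such a multi-layer perceptron with plain back-propagation. The task, tanh activation and learning rate are assumptions for illustration only and do not reproduce the analog chip's translinear circuitry.

```python
# Software sketch of a 2-2-1 multi-layer perceptron trained with back-propagation,
# mirroring the larger of the two on-chip topologies. Task and hyperparameters assumed.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])          # a simple non-linearly separable task

W1, b1 = rng.normal(0, 0.5, (2, 2)), np.zeros(2)
W2, b2 = rng.normal(0, 0.5, (2, 1)), np.zeros(1)
eta = 0.5

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)                    # hidden layer (the chip's NAF)
    y = np.tanh(h @ W2 + b2)                    # output neuron
    e = y - t
    d2 = e * (1 - y**2)                         # output delta (uses the DNAF)
    d1 = (d2 @ W2.T) * (1 - h**2)               # back-propagated hidden delta
    W2 -= eta * h.T @ d2;  b2 -= eta * d2.sum(0)
    W1 -= eta * X.T @ d1;  b1 -= eta * d1.sum(0)

# With luck approaches [0, 1, 1, 0]; such a tiny net can stall in local minima.
print(np.round(y.ravel(), 2))
```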

  3. Built-in self-repair of VLSI memories employing neural nets

    Science.gov (United States)

    Mazumder, Pinaki

    1998-10-01

    The decades of the Eighties and the Nineties witnessed the spectacular growth of VLSI technology, with chip size increasing from a few hundred devices to a staggering multi-million transistors. This trend is expected to continue as the CMOS feature size progresses towards the nanometric dimension of 100 nm and less. The SIA roadmap projects that, whereas DRAM chips will integrate over 20 billion devices in the next millennium, future microprocessors may incorporate over 100 million transistors on a single chip. As the VLSI chip size increases, the limited accessibility of circuit components poses great difficulty for external diagnosis and replacement in the presence of faulty components. For this reason, extensive work has been done on built-in self-test techniques, but little research is known concerning built-in self-repair. Moreover, the extra hardware introduced by conventional fault-tolerance techniques is also likely to become faulty, therefore rendering the circuit useless. This research demonstrates the feasibility of implementing electronic neural networks as intelligent hardware for memory array repair. Most importantly, we show that the neural network control possesses a robust and degradable computing capability under various fault conditions. Overall, a yield analysis performed on 64K DRAMs shows that the yield can be improved from as low as 20 percent to near 99 percent due to the self-repair design, with an overhead of no more than 7 percent.

  4. VLSI Design Tools, Reference Manual, Release 2.0.

    Science.gov (United States)

    1984-08-01

    ... patterns package was added so that complex and repetitive digital waveforms could be generated far more easily. The recently written program MTP (Multiple... circuit model to estimate timing delays through digital circuits. It also has a mode that allows it to be used as a switch (gate) level simulator

  5. The results of complex radiation-hygienic survey of the reference settlements in Mogilev region

    International Nuclear Information System (INIS)

    Ageeva, T.N.; Chegerova, T.I.; Shchur, A.V.; Shapsheeva, T.P.; Lipnitskij, L.V.

    2011-01-01

    The results of a complex radiation-hygienic survey of the reference settlements located on the radioactively contaminated territory are presented in the article. The four-year dynamics of the internal exposure doses of the reference settlements' inhabitants and their relationship with the 137Cs content in foods consumed by the population are shown. It was ascertained that there are still isolated individuals with high internal radiation doses among the surveyed population, who have a significant influence on the average annual radiation dose of the inhabitants and on the dose of the critical group. The individual external exposure doses of the inhabitants and the results of measuring the gamma radiation dose rate in the settlements have been analyzed. The authors express the opinion that adjustments to the external dose measurement techniques are needed. (authors)

  6. Application of Semiempirical Methods to Transition Metal Complexes: Fast Results but Hard-to-Predict Accuracy.

    KAUST Repository

    Minenkov, Yury

    2018-05-22

    A series of semiempirical PM6* and PM7 methods has been tested in reproducing relative conformational energies of 27 realistic-size complexes of 16 different transition metals (TMs). An analysis of relative energies derived from single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between semiempirical and DFT methods, indicating a fundamental difference in potential energy surfaces (PES). To identify the origin of the deviation, we compared fully optimized PM7 and respective DFT conformers. For many complexes, differences in PM7 and DFT conformational energies have been confirmed, often manifesting themselves in false coordination of some atoms (H, O) to TMs and in chemical transformations/distortion of the coordination center geometry in PM7 structures. Although geometry optimization with fixed coordination center geometry leads to some improvement in conformational energies, the resulting accuracy is still too low to recommend the explored semiempirical methods for out-of-the-box conformational search/sampling: careful testing is always needed.

  7. Laboratory results of stress corrosion cracking of steam generator tubes in a complex environment - An update

    Energy Technology Data Exchange (ETDEWEB)

    Horner, Olivier; Pavageau, Ellen-Mary; Vaillant, Francois [EDF R and D, Materials and Mechanics of Components Department, 77818 Moret-sur-Loing (France); Bouvier, Odile de [EDF Nuclear Engineering Division, Centre d' Expertise et d' Inspection dans les Domaines de la Realisation et de l' Exploitation, 93206 Saint Denis (France)

    2004-07-01

    Stress corrosion cracking occurs in the flow-restricted areas on the secondary side of steam generator tubes of Pressurized Water Reactors (PWR), where water pollutants are likely to concentrate. Chemical analyses carried out during shutdowns gave some insight into the chemical composition of these areas, which has evolved over recent years (i.e. less sodium as a pollutant). It has been modeled in the laboratory by tests in two different typical environments: the sodium hydroxide and the sulfate environments. These models satisfactorily describe the secondary-side corrosion of steam generator tubes for older plant units. Furthermore, a third typical environment - the complex environment - which corresponds to an All Volatile Treatment (AVT) environment containing alumina, silica, phosphate and acetic acid, has recently been studied. This particular environment satisfactorily reproduces the composition of the deposits observed on the surface of the steam generator tubes as well as the degradation of the tubes. A review of the recent laboratory results obtained with the complex environment is presented here. Several tests have been carried out in order to study initiation and propagation of secondary-side corrosion cracking for selected materials in such an environment. Alloy 600 Thermally Treated (TT) proves to be less sensitive to secondary-side corrosion cracking than alloy 600 Mill Annealed (MA). Finally, the influence of related factors such as stress, temperature and environmental factors is discussed. (authors)

  8. Laboratory results of stress corrosion cracking of steam generator tubes in a complex environment - An update

    International Nuclear Information System (INIS)

    Horner, Olivier; Pavageau, Ellen-Mary; Vaillant, Francois; Bouvier, Odile de

    2004-01-01

    Stress corrosion cracking occurs in the flow-restricted areas on the secondary side of steam generator tubes of Pressurized Water Reactors (PWR), where water pollutants are likely to concentrate. Chemical analyses carried out during shutdowns gave some insight into the chemical composition of these areas, which has evolved over recent years (i.e. less sodium as a pollutant). It has been modeled in the laboratory by tests in two different typical environments: the sodium hydroxide and the sulfate environments. These models satisfactorily describe the secondary-side corrosion of steam generator tubes for older plant units. Furthermore, a third typical environment - the complex environment - which corresponds to an All Volatile Treatment (AVT) environment containing alumina, silica, phosphate and acetic acid, has recently been studied. This particular environment satisfactorily reproduces the composition of the deposits observed on the surface of the steam generator tubes as well as the degradation of the tubes. A review of the recent laboratory results obtained with the complex environment is presented here. Several tests have been carried out in order to study initiation and propagation of secondary-side corrosion cracking for selected materials in such an environment. Alloy 600 Thermally Treated (TT) proves to be less sensitive to secondary-side corrosion cracking than alloy 600 Mill Annealed (MA). Finally, the influence of related factors such as stress, temperature and environmental factors is discussed. (authors)

  9. Neuromorphic VLSI vision system for real-time texture segregation.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2008-10-01

    The visual system of the brain can perceive an external scene in real-time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina in order to produce Gabor-like receptive fields that are tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of complex cells. Using this system, the neural images of simple cells were computed in real-time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was performed based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of the higher-order cells that can be obtained by combining the simple and complex cells.
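
    The texture-segregation scheme described above (filtering with two orthogonally oriented Gabor-like receptive fields and comparing the resulting energies) can be sketched in software as follows. The synthetic stripe image, kernel size and spatial frequency are illustrative assumptions and are not taken from the multi-chip system.

```python
# Two-orientation texture segregation: filter with orthogonal Gabor-like kernels,
# take local energy, and label each pixel by the dominant orientation.
import numpy as np

def gabor(theta, size=15, freq=0.25, sigma=3.0):
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def conv2_same(img, k):
    # brute-force 'same' 2-D filtering, adequate for a small demo
    n = k.shape[0] // 2
    p = np.pad(img, n)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

# Synthetic texture: vertical stripes on the left half, horizontal stripes on the right.
img = np.zeros((64, 64))
img[:, :32] = np.sin(2 * np.pi * 0.25 * np.arange(32))[None, :]
img[:, 32:] = np.sin(2 * np.pi * 0.25 * np.arange(64))[:, None]

e_v = conv2_same(img, gabor(0.0)) ** 2            # energy of the vertical-stripe filter
e_h = conv2_same(img, gabor(np.pi / 2)) ** 2      # energy of the orthogonal filter
label = e_h > e_v                                 # simple per-pixel segregation map
print("fraction labelled 'horizontal' on right half:", label[:, 40:].mean().round(2))
```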

  10. Purification of Ovine Respiratory Complex I Results in a Highly Active and Stable Preparation.

    Science.gov (United States)

    Letts, James A; Degliesposti, Gianluca; Fiedorczuk, Karol; Skehel, Mark; Sazanov, Leonid A

    2016-11-18

    NADH-ubiquinone oxidoreductase (complex I) is the largest (∼1 MDa) and the least characterized complex of the mitochondrial electron transport chain. Because of the ease of sample availability, previous work has focused almost exclusively on bovine complex I. However, only medium resolution structural analyses of this complex have been reported. Working with other mammalian complex I homologues is a potential approach for overcoming these limitations. Due to the inherent difficulty of expressing large membrane protein complexes, screening of complex I homologues is limited to large mammals reared for human consumption. The high sequence identity among these available sources may preclude the benefits of screening. Here, we report the characterization of complex I purified from Ovis aries (ovine) heart mitochondria. All 44 unique subunits of the intact complex were identified by mass spectrometry. We identified differences in the subunit composition of subcomplexes of ovine complex I as compared with bovine, suggesting differential stability of inter-subunit interactions within the complex. Furthermore, the 42-kDa subunit, which is easily lost from the bovine enzyme, remains tightly bound to ovine complex I. Additionally, we developed a novel purification protocol for highly active and stable mitochondrial complex I using the branched-chain detergent lauryl maltose neopentyl glycol. Our data demonstrate that, although closely related, significant differences exist between the biochemical properties of complex I prepared from ovine and bovine mitochondria and that ovine complex I represents a suitable alternative target for further structural studies. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  11. Results of complex annual parasitological monitoring in the coastal area of Kola Bay

    Science.gov (United States)

    Kuklin, V. V.; Kuklina, M. M.; Kisova, N. E.; Maslich, M. A.

    2009-12-01

    The results of annual parasitological monitoring in the coastal area near the Abram-mys (Kola Bay, Barents Sea) are presented. The studies were performed in 2006-2007 and included complex examination of the intermediate hosts (mollusks and crustaceans) and definitive hosts (marine fish and birds) of the helminths. The biodiversity of the parasite fauna, seasonal dynamics, and functioning patterns of the parasite systems were investigated. The basic regularities in parasite circulation were assessed in relation to their life cycle strategies and the ecological features of the intermediate and definitive hosts. The factors affecting the success of parasite circulation in the coastal ecosystems were revealed through analysis of parasite biodiversity and abundance dynamics.

  12. Selective Cerebro-Myocardial Perfusion in Complex Neonatal Aortic Arch Pathology: Midterm Results.

    Science.gov (United States)

    Hoxha, Stiljan; Abbasciano, Riccardo Giuseppe; Sandrini, Camilla; Rossetti, Lucia; Menon, Tiziano; Barozzi, Luca; Linardi, Daniele; Rungatscher, Alessio; Faggian, Giuseppe; Luciani, Giovanni Battista

    2018-04-01

    Aortic arch repair in newborns and infants has traditionally been accomplished using a period of deep hypothermic circulatory arrest. To reduce neurologic and cardiac dysfunction related to circulatory arrest and myocardial ischemia during complex aortic arch surgery, an alternative and novel strategy for cerebro-myocardial protection was recently developed, where regional low-flow perfusion is combined with controlled and independent coronary perfusion. The aim of the present retrospective study was to assess short-term and mid-term results of selective and independent cerebro-myocardial perfusion in neonatal aortic arch surgery. From April 2008 to August 2015, 28 consecutive neonates underwent aortic arch surgery under cerebro-myocardial perfusion. There were 17 males and 11 females, with a median age of 15 days (3-30 days) and a median body weight of 3 kg (1.6-4.2 kg); 9 (32%) had low body weight. The duration of cerebro-myocardial perfusion was 30 ± 11 min (15-69 min). Renal dysfunction, requiring a period of peritoneal dialysis, was observed in 10 (36%) patients, while liver dysfunction was noted in only 3 (11%). There were three (11%) early and two late deaths during a median follow-up of 2.9 years (range 6 months-7.7 years), with an actuarial survival of 82% at 7 years. At latest follow-up, no patient showed signs of cardiac or neurologic dysfunction. The present experience shows that a strategy of selective and independent cerebro-myocardial perfusion is safe, versatile, and feasible in high-risk neonates with complex congenital arch pathology. Encouraging outcomes were noted in terms of cardiac and neurological function, with limited end-organ morbidity. © 2018 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  13. Anticancer Activity of Polyoxometalate-Bisphosphonate Complexes: Synthesis, Characterization, In Vitro and In Vivo Results.

    Science.gov (United States)

    Boulmier, Amandine; Feng, Xinxin; Oms, Olivier; Mialane, Pierre; Rivière, Eric; Shin, Christopher J; Yao, Jiaqi; Kubo, Tadahiko; Furuta, Taisuke; Oldfield, Eric; Dolbecq, Anne

    2017-07-03

    We synthesized a series of polyoxometalate-bisphosphonate complexes containing Mo(VI)O6 octahedra, zoledronate, or an N-alkyl (n-C6 or n-C8) zoledronate analogue, and in two cases, Mn as a heterometal. Mo6L2 (L = Zol, ZolC6, ZolC8) and Mo4L2Mn (L = Zol, ZolC8) were characterized by using single-crystal X-ray crystallography and/or IR spectroscopy, elemental and energy dispersive X-ray analysis, and 31P NMR. We found promising activity against human nonsmall cell lung cancer (NCI-H460) cells, with IC50 values for growth inhibition of ∼5 μM per bisphosphonate ligand. The effects of bisphosphonate complexation on IC50 decreased with increasing bisphosphonate chain length: C0 ≈ 6.1×, C6 ≈ 3.4×, and C8 ≈ 1.1×. We then determined the activity of one of the most potent compounds in the series, Mo4Zol2Mn(III), against SK-ES-1 sarcoma cells in a mouse xenograft system, finding a ∼5× decrease in tumor volume relative to the parent compound zoledronate at the same compound dosing (5 μg/mouse). Overall, the results are of interest since we show for the first time that heteropolyoxomolybdate-bisphosphonate hybrids kill tumor cells in vitro and significantly decrease tumor growth in vivo, opening up new possibilities for targeting both Ras- and epidermal growth factor receptor-driven cancers.

  14. Principles of VLSI RTL design: a practical guide

    CERN Document Server

    Churiwala, Sanjay; Gianfagna, Mike

    2011-01-01

    This book examines the impact of register transfer level (RTL) design choices that may result in issues of testability, data synchronization across clock domains, synthesizability, power consumption and routability, that appear later in the product lifecycle.

  15. Complex conductivity results to silver nanoparticles in partially saturated laboratory columns

    Data.gov (United States)

    U.S. Environmental Protection Agency — Laboratory complex conductivity data from partially saturated sand columns with silver nanoparticles. This dataset is not publicly accessible because: It involves...

  16. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof) to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models each differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets but, to date, no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and will assist in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes) which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run

  17. Care complexity in the general hospital - Results from a European study

    NARCIS (Netherlands)

    de Jonge, P; Huyse, FJ; Slaets, JPJ; Herzog, T; Lobo, A; Lyons, JS; Opmeer, BC; Stein, B; Arolt, [No Value; Balogh, N; Cardoso, G; Fink, P; Rigatelli, M; van Dijck, R; Mellenbergh, GJ

    2001-01-01

    There is increasing pressure to effectively treat patients with complex care needs from the moment of admission to the general hospital. In this study, the authors developed a measurement strategy for hospital-based care complexity. The authors' four-factor model describes the interrelations between

  18. The Results of Complex Research of GSS "SBIRS-Geo 2" Behavior in the Orbit

    Science.gov (United States)

    Sukhov, P. P.; Epishev, V. P.; Sukhov, K. P.; Karpenko, G. F.; Motrunich, I. I.

    2017-04-01

    The new generation of geosynchronous SBIRS satellites of the US Air Force early warning system series (Satellite Early Warning System) replaced the previous DSP satellite series (Defense Support Program). Currently, several GSS of the DSP series and one "SBIRS-Geo 2" are available for observation from the territory of Ukraine. During two years of observations, we have received and analyzed more than 30 light curves for the two satellites in the B, V, R photometric system. As a result of complex research, we propose a model of the "SBIRS-Geo 2" orbital behavior compared with that of a DSP satellite. To monitor the entire surface of the Earth, including the polar regions, at a 15-16 s interval, 4 SBIRS satellites located every 90 deg. along the equator in GEO orbit are enough. Since DSP satellites provide coverage of the Earth's surface up to 83 deg. latitude with a period of 50 s, 8 DSP satellites are required. All the conclusions were made based on an analysis of photometric and coordinate observations using simulation of the dynamics of their orbital behavior.

  19. Improvement of CMOS VLSI rad tolerance by processing technics

    International Nuclear Information System (INIS)

    Guyomard, D.; Desoutter, I.

    1986-01-01

    The following study concerns the development of integrated circuits for fields requiring only relatively low radiation tolerance levels, especially the civil space sector. Process modifications constitute our basic study; they have been carried into effect, and our work and main results are reported in this paper. Well-known 2.5 and 3 μm CMOS technologies are considered. A first set of modifications enables us to double the cumulative dose tolerance of a 4 Kbit SRAM while keeping the same kind of damage; we obtain memories which tolerate radiation doses as high as 16 krad(Si). The repeatability of the results, linked to the quality assurance of this specific circuit, is reported here. A second set of modifications concerns the processing of gate arrays. In particular, the choice of the silicon substrate type (epitaxial substrate) is under investigation. In addition, a complete study of a test vehicle allows us to accurately measure the rad tolerance of various components of the cell library [fr

  20. An SEU analysis approach for error propagation in digital VLSI CMOS ASICs

    International Nuclear Information System (INIS)

    Baze, M.P.; Bartholet, W.G.; Dao, T.A.; Buchner, S.

    1995-01-01

    A critical issue in the development of ASIC designs is the ability to achieve first-pass fabrication success. Unsuccessful fabrication runs have a serious impact on ASIC costs and schedules. The ability to predict an ASIC's radiation response prior to fabrication is therefore a key issue when designing ASICs for military and aerospace systems. This paper describes an analysis approach for calculating static bit error propagation in synchronous VLSI CMOS circuits, developed as an aid for predicting the SEU response of ASICs. The technique is intended for eventual application as an ASIC development simulation tool which can be used by circuit design engineers for performance evaluation during the pre-fabrication design process, in much the same way that logic and timing simulators are used

  1. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-01-01

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193

  2. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm.

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-08-13

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.
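
    The two signal-processing stages named in this record, NEO-based spike detection and GHA-based feature extraction, can be illustrated with the hedged software sketch below. The synthetic recording, threshold scaling and learning rate are assumptions and do not reflect the fixed-point arithmetic of the ASIC.

```python
# NEO spike detection followed by Generalized Hebbian Algorithm (GHA) feature extraction.
import numpy as np

rng = np.random.default_rng(2)
n = 24000                                   # one second at an assumed 24 kHz sample rate
x = 0.2 * rng.normal(size=n)                # background noise
spike = 5.0 * np.sin(2 * np.pi * np.arange(16) / 16) * np.hanning(16)   # biphasic template
locs = np.sort(rng.choice(np.arange(100, n - 100), 40, replace=False))
for s in locs:
    x[s:s + 16] += spike

# 1) Nonlinear energy operator psi[k] = x[k]^2 - x[k-1]*x[k+1], smoothed and thresholded.
psi = np.empty_like(x)
psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
psi[0] = psi[-1] = 0.0
w = np.bartlett(9)
psi_s = np.convolve(psi, w / w.sum(), mode="same")
thr = 10.0 * psi_s.mean()                   # threshold scaling factor is an assumption
above = psi_s > thr
starts = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # rising edges = detections

# 2) Collect 32-sample snippets and learn 3 principal components with Sanger's GHA rule:
#    W <- W + eta * (y s^T - lower_triangular(y y^T) W)
snips = np.array([x[i - 8:i + 24] for i in starts if 8 <= i <= n - 24])
m, eta = 3, 1e-4
W = 0.1 * rng.normal(size=(m, snips.shape[1]))
for _ in range(200):
    for s in snips:
        y = W @ s
        W += eta * (np.outer(y, s) - np.tril(np.outer(y, y)) @ W)

print("spikes embedded:", len(locs), " spikes detected:", len(snips))
print("first GHA component norm (tends to 1 as it converges to the leading PC):",
      round(float(np.linalg.norm(W[0])), 2))
```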

  3. Radiation hardness tests with a demonstrator preamplifier circuit manufactured in silicon on sapphire (SOS) VLSI technology

    International Nuclear Information System (INIS)

    Bingefors, N.; Ekeloef, T.; Eriksson, C.; Paulsson, M.; Moerk, G.; Sjoelund, A.

    1992-01-01

    Samples of the preamplifier circuit, as well as of separate n- and p-channel transistors of the type contained in the circuit, were irradiated with gammas from a 60Co source up to an integrated dose of 3 Mrad (30 kGy). The VLSI manufacturing technology used is the SOS4 process of ABB Hafo. A first analysis of the tests shows that the performance of the amplifier remains practically unaffected by the radiation for total doses up to 1 Mrad. At higher doses, up to 3 Mrad, the circuit amplification factor decreases by a factor between 4 and 5, whereas the output noise level remains unchanged. It is argued that it may be possible to reduce the decrease in amplification factor in future by optimizing the amplifier circuit design further. (orig.)

  4. Real time track finding in a drift chamber with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-01-01

    In a test setup, a hardware neural network determined track parameters of charged particles traversing a drift chamber. Voltages proportional to the drift times in 6 cells of the 3-layer chamber were inputs to the Intel ETANN neural network chip which had been trained to give the slope and intercept of tracks. We compare network track parameters to those obtained from off-line track fits. To our knowledge this is the first on-line application of a VLSI neural network to a high energy physics detector. This test explored the potential of the chip and the practical problems of using it in a real world setting. We compare the chip performance to a neural network simulation on a conventional computer. We discuss possible applications of the chip in high energy physics detector triggers. (orig.)
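
    The network in this test was trained to reproduce an off-line straight-line track fit from the six drift measurements; a minimal sketch of such a least-squares fit is given below. The chamber geometry, noise level and the assumption that drift times have already been converted to positions are all illustrative and not taken from the record.

```python
# Least-squares straight-line track fit from six drift-cell measurements,
# the kind of off-line fit the ETANN chip was trained to approximate.
import numpy as np

rng = np.random.default_rng(5)
z = np.array([0.0, 0.0, 5.0, 5.0, 10.0, 10.0])     # assumed cell z-positions (cm), 3 layers
slope_true, icpt_true = 0.08, 1.2                  # track parameters to recover
x_hit = icpt_true + slope_true * z + 0.02 * rng.normal(size=6)   # measured positions

A = np.column_stack([z, np.ones_like(z)])          # design matrix for x = a*z + b
(slope, icpt), *_ = np.linalg.lstsq(A, x_hit, rcond=None)
print(f"fit: slope={slope:.3f} intercept={icpt:.3f}  (truth: {slope_true}, {icpt_true})")
```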

  5. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    Science.gov (United States)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.
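
    A hedged software model of the scheme described above, linear convolution of arbitrary length via length-128 number-theoretic transforms over the Fermat prime F4 = 2^16 + 1 and the overlap-save method, is sketched below. It uses a naive O(N²) transform and ordinary modular multiplication rather than the hardware's shift-only arithmetic, and the signal and filter values are illustrative assumptions kept small enough that the results stay below the modulus.

```python
# Overlap-save linear convolution using a 128-point transform over F4 = 2^16 + 1.
P = (1 << 16) + 1                      # Fermat prime F4 = 65537
N = 128                                # transform length, as in the record
ROOT = pow(3, (P - 1) // N, P)         # a primitive N-th root of unity mod F4

def fnt(a, root):
    """Naive O(N^2) number-theoretic transform mod F4 (clarity over speed)."""
    return [sum(a[k] * pow(root, j * k, P) for k in range(N)) % P for j in range(N)]

def ifnt(A):
    inv_n = pow(N, P - 2, P)
    inv_root = pow(ROOT, P - 2, P)
    return [(v * inv_n) % P for v in fnt(A, inv_root)]

def overlap_save(x, h):
    """Linear convolution of arbitrary-length x with filter h using length-N blocks."""
    M = len(h)
    step = N - (M - 1)
    H = fnt(h + [0] * (N - M), ROOT)
    x_pad = [0] * (M - 1) + list(x) + [0] * N
    y = []
    for start in range(0, len(x) + M - 1, step):
        block = x_pad[start:start + N]
        Y = [(a * b) % P for a, b in zip(fnt(block, ROOT), H)]
        y.extend(ifnt(Y)[M - 1:])      # discard the M-1 aliased leading samples
    return y[:len(x) + M - 1]

import random
random.seed(0)
x = [random.randrange(8) for _ in range(300)]
h = [random.randrange(4) for _ in range(17)]
direct = [sum(h[j] * x[k - j] for j in range(len(h)) if 0 <= k - j < len(x))
          for k in range(len(x) + len(h) - 1)]
assert overlap_save(x, h) == direct
print("FNT overlap-save convolution matches direct linear convolution")
```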

  6. Complex plume dynamics in the transition zone underneath the Hawaii hotspot: seismic imaging results

    Science.gov (United States)

    Cao, Q.; van der Hilst, R. D.; de Hoop, M. V.; Shim, S.

    2010-12-01

    In recent years, progress has been made in seismology to constrain the depth variations of the transition zone discontinuities, e.g. the 410 km and 660 km discontinuities, which can be used to constrain the local temperature and chemistry profiles and hence to infer the existence and morphology of mantle plumes. Taking advantage of the abundance of natural earthquake sources in western Pacific subduction zones and the many seismograph stations in the Americas, we applied a generalized Radon transform (GRT), a high-resolution inverse-scattering technique, to SS precursors to form 3-D images of the transition zone structures of a 30 degree by 40 degree area underneath Hawaii and the Hawaii-Emperor seamount chain. Rather than a simple mushroom-shaped plume, our seismic images suggest complex plume dynamics interacting with the transition zone phase transitions, especially at the 660 discontinuity. A conspicuous uplift of the 660 discontinuity in a region of 800 km in diameter is observed to the west of Hawaii. No corresponding localized depression of the 410 discontinuity is found. This lack of correlation between, and the difference in lateral length scale of, the topographies of the 410 and 660 km discontinuities are consistent with many geodynamical modeling results, in which a deep-mantle plume impinges on the transition zone, creating a pond of hot material underneath the endothermic phase change at 660 km depth, with secondary plumes connecting to the present-day hotspot at Earth's surface. This more complex plume dynamics suggests that the complicated mass transport process across the transition zone should be taken into account when we try to link geochemical observations of Hawaiian basalts at the Earth's surface to deep mantle domains. In addition to clear signals at 410 km, 520 km and 660 km depth, the data also reveal rich structures near 350 km depth and between 800 - 1000 km depth, which may be regional, laterally intermittent scatter interfaces

  7. A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory.

    Science.gov (United States)

    Chicca, E; Badoni, D; Dante, V; D'Andreagiovanni, M; Salina, G; Carota, L; Fusi, S; Del Giudice, P

    2003-01-01

    Electronic neuromorphic devices with on-chip, on-line learning should be able to modify quickly the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full custom analog very large-scale integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses which can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During the stimulation the synapse undergoes quick temporary changes through the activities of the pre- and postsynaptic neurons; those changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate a randomness suited to drive the stochastic selection mechanism. We check by a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.
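
    The stochastic learning idea described in this record can be caricatured in a few lines of software: binary (bistable) synapses make candidate up or down transitions with small probabilities when pre- and postsynaptic activity agree or disagree. The sketch below is such a toy model; the transition probabilities, network size, recall rule and number of presentations are assumptions, not measured chip parameters.

```python
# Toy model of stochastic learning with bistable synapses storing one binary pattern.
import numpy as np

rng = np.random.default_rng(3)
n = 64                                  # neurons
J = rng.integers(0, 2, (n, n))          # bistable synaptic matrix, states {0, 1}
pattern = rng.integers(0, 2, n)         # binary activity pattern to be stored
p_up, p_dn = 0.05, 0.05                 # slow, stochastic transition probabilities

for _ in range(200):                    # repeated stimulations of the same pattern
    pre = pattern[None, :].astype(bool)
    post = pattern[:, None].astype(bool)
    flips = rng.random((n, n))
    J[(pre & post) & (flips < p_up)] = 1          # LTP candidates, taken stochastically
    J[(pre & ~post) & (flips < p_dn)] = 0         # LTD candidates, taken stochastically

# Recall: one step of simple threshold dynamics should reproduce the stored pattern.
h = J @ pattern - 0.5 * pattern.sum()
recalled = (h > 0).astype(int)
print("overlap with stored pattern:", (recalled == pattern).mean())   # expect 1.0 here
```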

  8. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    Science.gov (United States)

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of face images and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
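
    To make the idea of learning around device mismatch concrete, here is a small NumPy sketch of a Generalized Hebbian Algorithm (Sanger's rule) combiner whose forward path is corrupted by fixed multiplicative gain errors, a crude stand-in for analog multiplier mismatch. The sizes, learning rate and 5% mismatch level are assumptions for illustration only, not values from the paper; running it typically shows the learned directions staying close to the true principal components despite the gain errors.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical sizes and mismatch level; none of this is from the paper.
        d, m, eta, steps = 16, 3, 5e-3, 30000
        W = rng.normal(scale=0.01, size=(m, d))          # feature-extraction weights
        gain = 1.0 + 0.05 * rng.normal(size=(m, d))      # fixed multiplier gain errors

        # Synthetic inputs with a few dominant directions, so the principal
        # components are well separated.
        basis = np.linalg.qr(rng.normal(size=(d, d)))[0]
        scales = np.array([4.0, 2.0, 1.0] + [0.2] * (d - 3))
        data = rng.normal(size=(steps, d)) * scales @ basis.T

        for x in data:
            y = (gain * W) @ x                           # mismatched analog forward path
            # Generalized Hebbian Algorithm (Sanger's rule) weight update
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

        cos = abs(W[0] @ basis[:, 0]) / np.linalg.norm(W[0])
        print("alignment of first learned direction with true first PC:", round(cos, 3))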

  9. VLSI Design of a Variable-Length FFT/IFFT Processor for OFDM-Based Communication Systems

    Directory of Open Access Journals (Sweden)

    Jen-Chih Kuo

    2003-12-01

    Full Text Available The technique of orthogonal frequency division multiplexing (OFDM) is famous for its robustness against frequency-selective fading channels. This technique has been widely used in many wired and wireless communication systems. In general, the fast Fourier transform (FFT) and inverse FFT (IFFT) operations are used as the modulation/demodulation kernel in OFDM systems, and the sizes of the FFT/IFFT operations vary in different OFDM applications. In this paper, we design and implement a variable-length prototype FFT/IFFT processor to cover different specifications of OFDM applications. The cached-memory FFT architecture is our suggested VLSI system architecture for the prototype FFT/IFFT processor, chosen for its low power consumption. We also implement the twiddle-factor butterfly processing element (PE) based on the coordinate rotation digital computer (CORDIC) algorithm, which avoids the use of a conventional multiplication-and-accumulation unit and instead evaluates the trigonometric functions using only add-and-shift operations. Finally, we implement a variable-length prototype FFT/IFFT processor with TSMC 0.35 μm 1P4M CMOS technology. The simulation results show that the chip can perform 64- to 2048-point FFT/IFFT operations at up to an 80 MHz operating frequency, which meets the speed requirement of most OFDM standards such as WLAN, ADSL, VDSL (256∼2K), DAB, and 2K-mode DVB.
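
    As a rough illustration of the CORDIC-based twiddle-factor evaluation mentioned above, the floating-point Python sketch below rotates a complex FFT input by -2*pi*k/N using only additions, subtractions and scaling by powers of two (the software analogue of hardware shifts). The iteration count and test values are arbitrary, and the fixed-point details of the actual processor are not modelled.

        import math

        def cordic_rotate(x, y, angle, iters=16):
            """Rotate the vector (x, y) by `angle` radians using the CORDIC
            rotation mode: only add/subtract and scaling by 2**-i per step."""
            K = 1.0
            for i in range(iters):
                K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulated gain correction
            z = angle
            for i in range(iters):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * math.atan(2.0 ** -i)
            return K * x, K * y

        # Twiddle-factor multiplication W_N^k * (a + jb) expressed as a rotation
        N, k = 64, 5
        a, b = 0.7, -0.3
        xr, xi = cordic_rotate(a, b, -2 * math.pi * k / N)
        ref = complex(a, b) * complex(math.cos(-2 * math.pi * k / N),
                                      math.sin(-2 * math.pi * k / N))
        print(abs(complex(xr, xi) - ref))   # difference should be on the order of 1e-5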

  10. SAFARI - RANDOMISED TRIAL ON COMPLEX THERAPY OF ARTERIAL HYPERTENSION AND DISLIPIDEMY. THE MAIN RESULTS

    Directory of Open Access Journals (Sweden)

    S. Y. Martsevich

    2016-01-01

    Full Text Available Aim. To evaluate the possibility of a complex pharmaceutical effect simultaneously on 2 risk factors, arterial hypertension (HT) and hypercholesterolemia (HH), in patients with high risk of cardiovascular complications (CVC). Material and methods. 101 patients with stage 1-2 HT, HH and high risk of CVC (SCORE>5) were included in the study. Patients were randomized into 2 groups: an active therapy group (ATG) and a control group (CG). ATG patients were actively treated for HT and HH control. Long-acting nifedipine (Nifecard XL, LEK) 30 mg once daily (OD) was prescribed as the starting antihypertensive drug. Hydrochlorothiazide 12.5 mg OD and bisoprolol 5 mg OD were added if the antihypertensive effect was insufficient. Atorvastatin (Tulip, LEK) 20-40 mg OD was prescribed for HH control. Management of CG patients was performed by doctors of out-patient clinics. The study duration was 12 weeks. Results. Systolic and diastolic blood pressure (BP) levels in ATG patients were lower than those in CG patients. The target BP level was reached in 88.4% of ATG patients and only in 48.9% of CG patients. The low density lipoprotein cholesterol (CH LPLD) level was also lower in ATG patients than in CG patients. The target CH LPLD level was reached in 37.2% of ATG patients and in 8.3% of CG patients. The relative risk of CVC was significantly lower in ATG patients than in CG patients. Conclusion. The SAFARI trial shows that effective simultaneous pharmaceutical control of 2 key risk factors, HT and HH, results in a reduction of CVC risk.

  11. SAFARI - RANDOMISED TRIAL ON COMPLEX THERAPY OF ARTERIAL HYPERTENSION AND DISLIPIDEMY. THE MAIN RESULTS

    Directory of Open Access Journals (Sweden)

    S. Y. Martsevich

    2009-01-01

    Full Text Available Aim. To evaluate the possibility of a complex pharmaceutical effect simultaneously on 2 risk factors, arterial hypertension (HT) and hypercholesterolemia (HH), in patients with high risk of cardiovascular complications (CVC). Material and methods. 101 patients with stage 1-2 HT, HH and high risk of CVC (SCORE>5) were included in the study. Patients were randomized into 2 groups: an active therapy group (ATG) and a control group (CG). ATG patients were actively treated for HT and HH control. Long-acting nifedipine (Nifecard XL, LEK) 30 mg once daily (OD) was prescribed as the starting antihypertensive drug. Hydrochlorothiazide 12.5 mg OD and bisoprolol 5 mg OD were added if the antihypertensive effect was insufficient. Atorvastatin (Tulip, LEK) 20-40 mg OD was prescribed for HH control. Management of CG patients was performed by doctors of out-patient clinics. The study duration was 12 weeks. Results. Systolic and diastolic blood pressure (BP) levels in ATG patients were lower than those in CG patients. The target BP level was reached in 88.4% of ATG patients and only in 48.9% of CG patients. The low density lipoprotein cholesterol (CH LPLD) level was also lower in ATG patients than in CG patients. The target CH LPLD level was reached in 37.2% of ATG patients and in 8.3% of CG patients. The relative risk of CVC was significantly lower in ATG patients than in CG patients. Conclusion. The SAFARI trial shows that effective simultaneous pharmaceutical control of 2 key risk factors, HT and HH, results in a reduction of CVC risk.

  12. Preliminary results from an integrated, multi-parameter, experiment at the Santiaguito lava dome complex, Guatemala

    Science.gov (United States)

    De Angelis, S.; Rietbrock, A.; Lavallée, Y.; Lamb, O. D.; Lamur, A.; Kendrick, J. E.; Hornby, A. J.; von Aulock, F. W.; Chigna, G.

    2016-12-01

    Understanding the complex processes that drive volcanic unrest is crucial to effective risk mitigation. Characterization of these processes, and the mechanisms of volcanic eruptions, is only possible when high-resolution geophysical and geological observations are available over comparatively long periods of time. In November 2014, the Liverpool Earth Observatory, UK, in collaboration with the Instituto Nacional de Sismologia, Meteorologia e Hidrologia (INSIVUMEH), Guatemala, established a multi-parameter geophysical network at Santiaguito, one of the most active volcanoes in Guatemala. Activity at Santiaguito throughout the past decade, until the summer of 2015, was characterized by nearly continuous lava dome extrusion accompanied by frequent and regular small-to-moderate gas or gas-and-ash explosions. Over the past two years our network collected a wealth of seismic, acoustic and deformation data, complemented by campaign visual and thermal infrared measurements, and rock and ash samples. Here we present preliminary results from the analysis of this unique dataset. Using acoustic and thermal data collected during 2014-2015 we were able to assess volume fractions of ash and gas in the eruptive plumes. The small proportion of ash inferred in the plumes confirms estimates from previous, independent, studies, and suggests that these events did not involve significant magma fragmentation in the conduit. The results also agree with the suggestion that sacrificial fragmentation along fault zones in the conduit region, due to shear-induced thermal vesiculation, may be at the origin of such events. Finally, starting in the summer of 2015, our experiment captured the transition to a new phase of activity characterized by vigorous vulcanian-style explosions producing large, ash-rich, plumes and frequent hazardous pyroclastic flows, as well as the formation of a large summit crater. We present evidence of this transition in the geophysical and geological data, and discuss its

  13. The complexity of Orion: an ALMA view. I. Data and first results

    Science.gov (United States)

    Pagani, L.; Favre, C.; Goldsmith, P. F.; Bergin, E. A.; Snell, R.; Melnick, G.

    2017-07-01

    Context. We wish to improve our understanding of the Orion central star formation region (Orion-KL) and disentangle its complexity. Aims: We collected data with ALMA during cycle 2 in 16 GHz of total bandwidth spread between 215.1 and 252.0 GHz with a typical sensitivity of 5 mJy/beam (2.3 mJy/beam from 233.4 to 234.4 GHz) and a typical beam size of 1.7″ × 1.0″ (average position angle of 89°). We produced a continuum map and studied the emission lines in nine remarkable infrared spots in the region including the hot core and the compact ridge, plus the recently discovered ethylene glycol peak. Methods: We present the data, and report the detection of several species not previously seen in Orion, including n- and i-propyl cyanide (C3H7CN), and the tentative detection of a number of other species including glycolaldehyde (CH2(OH)CHO). The first detections of gGg' ethylene glycol (gGg' (CH2OH)2) and of acetic acid (CH3COOH) in Orion are presented in a companion paper. We also report the possible detection of several vibrationally excited states of cyanoacetylene (HC3N), and of its 13C isotopologues. We were not able to detect the 16O18O line predicted by our detection of O2 with Herschel, due to blending with a nearby line of vibrationally excited ethyl cyanide. We do not confirm the tentative detection of hexatriynyl (C6H) and cyanohexatriyne (HC7N) reported previously, or of hydrogen peroxide (H2O2) emission. Results: We report a complex velocity structure only partially revealed before. Components as extreme as -7 and +19 km s-1 are detected inside the hot region. Thanks to different opacities of various velocity components, in some cases we can position these components along the line of sight. We propose that the systematically redshifted and blueshifted wings of several species observed in the northern part of the region are linked to the explosion that occurred 500 yr ago. The compact ridge, noticeably farther south, displays extremely narrow lines (about 1 km s-1).

  14. VLSI Research

    Science.gov (United States)

    1984-04-01

    [The abstract of this record is an unreadable extraction fragment of an instruction-set table (immediate-field formats and opcodes such as calli, sll, getpsw, sra, srl, putpsw, ldhi, and, or, xor, ldxw, stxw); no legible abstract text is available.]

  15. Mental disturbances and perceived complexity of nursing care in medical inpatients : results from a European study

    NARCIS (Netherlands)

    De Jonge, P; Zomerdijk, MM; Huyse, FJ; Fink, P; Herzog, T; Lobo, A; Slaets, JPJ; Arolt, [No Value; Balogh, N; Cardoso, G; Rigatelli, M

    2001-01-01

    Aims and objectives. The relationship between mental disturbances (anxiety and depression, somatization and alcohol abuse) on admission to internal medicine units and perceived complexity of care as indicated by the nurse at discharge was studied. The goal was to study the utility of short screeners

  16. Mental disturbances and perceived complexity of nursing care in medical inpatients : results from a European study

    NARCIS (Netherlands)

    De Jonge, P; Zomerdijk, MM; Huyse, FJ; Fink, P; Herzog, T; Lobo, A; Slaets, JPJ; Arolt, [No Value; Balogh, N; Cardoso, G; Rigatelli, M

    Aims and objectives. The relationship between mental disturbances (anxiety and depression, somatization and alcohol abuse) on admission to internal medicine units and perceived complexity of care as indicated by the nurse at discharge was studied. The goal was to study the utility of short screeners

  17. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits.

    Directory of Open Access Journals (Sweden)

    Danielle S Bassett

    2010-04-01

    Full Text Available Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
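
    Rent's rule as used in this record relates the number of external connections T of a block of N elements by T ≈ t N^p. The toy Python sketch below estimates the exponent p for a hypothetical 2-D grid netlist by counting boundary edges of growing blocks and fitting a line in log-log space; for such a lattice the exponent comes out near 0.5, whereas the networks discussed above are reported to have interconnect topologies of higher dimensionality.

        import numpy as np

        # Hypothetical netlist: an n-by-n 2-D grid graph (nearest-neighbour wires).
        n = 40

        def neighbours(i, j):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    yield a, b

        sizes, terminals = [], []
        for side in range(1, 12):
            # a square block of side x side cells in one corner of the grid
            block = {(i, j) for i in range(side) for j in range(side)}
            T = sum(1 for (i, j) in block for nb in neighbours(i, j) if nb not in block)
            sizes.append(len(block))
            terminals.append(T)

        # Rent's rule T ~ t * N**p: the exponent p is the log-log slope.
        p, _ = np.polyfit(np.log(sizes), np.log(terminals), 1)
        print("estimated Rent exponent p =", round(p, 2))   # ~0.5 for a 2-D grid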

  18. An Approach to Implementing A Threshold Adjusting Mechanism in Very Complex Negotiations : A Preliminary Result

    OpenAIRE

    Fujita, Katsuhide; Ito, Takayuki; Hattori, Hiromitsu; Klein, Mark

    2007-01-01

    In this paper, we propose a threshold adjusting mechanism for very complex negotiations among software agents. The proposed mechanism helps agents reach an agreement while preserving as much of their private information as possible. Multi-issue negotiation protocols have been studied widely and represent a promising field, since most negotiation problems in the real world involve interdependent multiple issues. We have proposed negotiation protocols where a bidding-based mechanism is used...

  19. General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 12 (2003), s. 2727-2778 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords: computational power * computational complexity * perceptrons * radial basis functions * spiking neurons * feedforward networks * recurrent networks * probabilistic computation * analog computation Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  20. Promising results after single-stage reconstruction of the nipple and areola complex

    DEFF Research Database (Denmark)

    Børsen-Koch, Mikkel; Bille, Camilla; Thomsen, Jørn B

    2013-01-01

    Introduction: Reconstruction of the nipple-areola complex (NAC) traditionally marks the end of breast reconstruction. Several different surgical techniques have been described, but most are staged procedures. This paper describes a simple single-stage approach. Material and Methods: We used...... reconstruction was 43 min. (30-50 min.). Conclusion: This simple single-stage NAC reconstruction seems beneficial for both patient and surgeon as it seems to be associated with faster reconstruction and reduced procedure-related time without compromising the aesthetic outcome or the morbidity associated...

  1. “Bolshie Klyuchishi” (Ulyanovsk Oblast as a New Archaeological Complex: Preliminary Results

    Directory of Open Access Journals (Sweden)

    Vorobeva Elena E.

    2016-03-01

    Full Text Available The authors introduce for discussion materials of archaeological studies conducted by the team of the Volga Archaeological Expedition of the Mari State University in Ulyanovsk Oblast of the Russian Federation in 2010. Two of the studied archaeological sites seem to be most interesting: they are situated near Bolshie Klyuchishi village (Ulyanovsk District, Ulyanovsk Oblast). Archaeological materials collected during the excavations of these settlements have a very broad time span, which suggests that Bolshie Klyuchishi is a multilayered archaeological complex. Both settlements yielded the Srubnaya culture handmade ceramics of 16th – 13th centuries BC. Moreover, Bolshie Klyuchishi-7 contained items of iron and slag, and Bolshie Klyuchishi-8 yielded sherds of 13th – 14th centuries wheel-made Bulgarian ceramics.

  2. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method, called complex-plane strategy, is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model, in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are slightly distorted. For an accurate simulation of differential rotation, a versatile method, called multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  3. Long-Term Results After Simple Versus Complex Stenting of Coronary Artery Bifurcation Lesions Nordic Bifurcation Study 5-Year Follow-Up Results

    DEFF Research Database (Denmark)

    Maeng, M.; Holm, N. R.; Erglis, A.

    2013-01-01

    Objectives This study sought to report the 5-year follow-up results of the Nordic Bifurcation Study. Background Randomized clinical trials with short-term follow-up have indicated that coronary bifurcation lesions may be optimally treated using the optional side branch stenting strategy. Methods...... complex strategy of planned stenting of both the main vessel and the side branch. (C) 2013 by the American College of Cardiology Foundation...

  4. The Complex Outgassing of Comets and the Resulting Coma, a Direct Simulation Monte-Carlo Approach

    Science.gov (United States)

    Fougere, Nicolas

    During its journey, when a comet gets within a few astronomical units of the Sun, solar heating liberates gases and dust from its icy nucleus forming a rarefied cometary atmosphere, the so-called coma. This tenuous atmosphere can expand to distances up to millions of kilometers representing orders of magnitude larger than the nucleus size. Most of the practical cases of coma studies involve the consideration of rarefied gas flows under non-LTE conditions where the hydrodynamics approach is not valid. Then, the use of kinetic methods is required to properly study the physics of the cometary coma. The Direct Simulation Monte-Carlo (DSMC) method is the method of choice to solve the Boltzmann equation, giving the opportunity to study the cometary atmosphere from the inner coma where collisions dominate and is in thermodynamic equilibrium to the outer coma where densities are lower and free flow conditions are verified. While previous studies of the coma used direct sublimation from the nucleus for spherically symmetric 1D models, or 2D models with a day/night asymmetry, recent observations of comets showed the existence of local small source areas such as jets, and extended sources via sublimating icy grains, that must be included into cometary models for a realistic representation of the physics of the coma. In this work, we present, for the first time, 1D, 2D, and 3D models that can take into account the full effects of conditions with more complex sources of gas with jets and/or icy grains. Moreover, an innovative work in a full 3D description of the cometary coma using a kinetic method with a realistic nucleus and outgassing is demonstrated. While most of the physical models used in this study had already been developed, they are included in one self-consistent coma model for the first time. The inclusion of complex cometary outgassing processes represents the state-of-the-art of cometary coma modeling. This provides invaluable information about the coma by

  5. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Puncher, M.; Birchall, A.; Bull, R. K.

    2012-01-01

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented by the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q0.025 and Q0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hr. The advantages and disadvantages of the method are discussed. (authors)
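
    The weighted-likelihood Monte Carlo idea (sample from the prior, weight each sample by the likelihood, then summarise the weighted posterior) can be sketched in a few lines of Python. The retention function, measurement times, error model and prior below are invented for illustration and have nothing to do with the actual biokinetic models or plutonium-worker data used in the study.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy model: a single composite intake I with retention R(t) = exp(-lam*t),
        # measured at times t with lognormal errors. All numbers are illustrative.
        lam = 0.05
        t = np.array([10.0, 30.0, 90.0])                 # days after intake
        true_I = 2.0
        m = true_I * np.exp(-lam * t) * rng.lognormal(sigma=0.3, size=t.size)

        # Prior samples of the intake, then importance weights from the likelihood
        samples = rng.lognormal(mean=0.0, sigma=2.0, size=200_000)
        pred = samples[:, None] * np.exp(-lam * t)[None, :]
        log_w = -0.5 * np.sum(((np.log(m) - np.log(pred)) / 0.3) ** 2, axis=1)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()

        # Weighted posterior summaries: mean and 2.5% / 97.5% quantiles
        order = np.argsort(samples)
        cdf = np.cumsum(w[order])
        q025 = samples[order][np.searchsorted(cdf, 0.025)]
        q975 = samples[order][np.searchsorted(cdf, 0.975)]
        print("posterior mean:", np.sum(w * samples), " 95% interval:", (q025, q975))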

  6. Further Sr and Nd isotopic results from peridotites of the Ronda Ultramafic Complex

    International Nuclear Information System (INIS)

    Reisberg, L.; Zindler, A.

    1989-01-01

    Clinopyroxenes derived from peridotites of the spinel and garnet facies of the Ronda Ultramafic Complex yield Sr and Nd isotopic ratios which extend the range of compositions found in the massif to values as depleted as 0.70205 for Sr and 0.51363 for Nd. Large-amplitude, short-wavelength isotopic variations are found to be ubiquitous throughout the massif. In the garnet facies, some of these variations are shown to be produced by the tectonic disaggregation of mafic layers in an isotopically depleted peridotite matrix. Ages obtained from garnet-clinopyroxene Sm-Nd isochrons (about 22 m.y.) agree with previous determinations of the time of crustal emplacement. In the plagioclase facies, where the Sr and Nd isotopic compositions have been very strongly affected by recent cryptic metasomatism, detailed study of one sample reveals that intermineral Nd isotopic equilibrium exists between clinopyroxene, orthopyroxene, and plagioclase. This indicates that the metasomatism occurred at high temperatures, and thus probably within the mantle. A rough correlation between 143Nd/144Nd and 147Sm/144Nd, with an apparent 'age' of 1.3 b.y. and an initial εNd(0) value of +6.0, is observed among clinopyroxenes derived from river sediments from throughout the massif. This age is interpreted as the time that the massif left the convecting mantle and became incorporated into the sub-continental lithosphere. (orig.)

  7. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Directory of Open Access Journals (Sweden)

    Ying-Lun Chen

    2015-08-01

    Full Text Available A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on the nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.
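
    For readers unfamiliar with the nonlinear energy operator used for spike detection above, the short Python sketch below applies psi[n] = x[n]^2 - x[n-1]*x[n+1] to a synthetic trace and thresholds the smoothed result. The synthetic waveform, smoothing window and threshold factor are illustrative choices, not parameters of the reported ASIC.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic extracellular trace: Gaussian noise plus three injected spikes.
        fs = 24_000
        x = 0.1 * rng.normal(size=fs)                    # 1 s of background noise
        spike = 2.5 * np.hanning(24) * np.sin(np.linspace(0, 2 * np.pi, 24))
        for t0 in (2_000, 7_000, 15_000):
            x[t0:t0 + 24] += spike

        # Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]
        psi = x[1:-1] ** 2 - x[:-2] * x[2:]

        # Smooth psi and threshold it; the window length and the factor of 5
        # are illustrative choices.
        win = np.bartlett(9)
        psi_s = np.convolve(psi, win / win.sum(), mode="same")
        thresh = psi_s.mean() + 5.0 * psi_s.std()
        detected = np.flatnonzero(psi_s > thresh)
        print("threshold crossings around samples:", detected[:3], "...")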

  8. Prototype architecture for a VLSI level zero processing system. [Space Station Freedom

    Science.gov (United States)

    Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.

    1989-01-01

    The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Due to the new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, and easy maintainability and increased reliability. Though extensive control functions have been done by hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream will be steered into different processing channels in accordance with their virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rate exceeding 150 Mbps will be available from commercial vendors due to the advance in disk drive technology.

  9. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Science.gov (United States)

    McEwan, Alistair; van Schaik, André

    2003-12-01

    The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.
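
    For orientation, the Meddis model referred to above can be summarised as a small set of first-order reservoir equations (free transmitter, synaptic cleft, reprocessing store) driven by a stimulus-dependent permeability. The explicit-Euler Python sketch below reproduces that structure; the parameter values are typical published figures quoted from memory and should be treated as illustrative rather than as the values used in the chip.

        import numpy as np

        # Illustrative constants for the reservoir model (assumed, not verified).
        A, B, g = 5.0, 300.0, 2000.0                  # permeability constants
        y, l, r, x = 5.05, 2500.0, 6580.0, 66.31      # replenish/loss/recovery/reprocess rates
        M, h = 1.0, 50000.0                           # transmitter capacity, rate scale
        dt = 1.0 / 44100.0

        def run(stimulus):
            q, c, w = M, 0.0, 0.0                     # free, cleft and reprocessing reservoirs
            rate = np.empty_like(stimulus)
            for i, st in enumerate(stimulus):
                k = g * (st + A) / (st + A + B) if st + A > 0.0 else 0.0
                dq = y * (M - q) + x * w - k * q
                dc = k * q - l * c - r * c
                dw = r * c - x * w
                q, c, w = q + dt * dq, c + dt * dc, w + dt * dw
                rate[i] = h * c                       # instantaneous firing rate
            return rate

        # 1 kHz tone burst: the onset response should overshoot the steady state
        t = np.arange(0.0, 0.1, dt)
        stim = np.where(t > 0.02, 30.0 * np.sin(2 * np.pi * 1000 * t), 0.0)
        resp = run(stim)
        print("peak rate:", resp.max(), " late steady rate:", resp[-100:].mean())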

  10. Robust working memory in an asynchronously spiking neural network realized in neuromorphic VLSI

    Directory of Open Access Journals (Sweden)

    Massimiliano eGiulioni

    2012-02-01

    Full Text Available We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of ‘high’ and ‘low’-firing activity. Depending on the overall excitability, transitions to the ‘high’ state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the ‘high’ state retains a working memory of a stimulus until well after its release. In the latter case, ‘high’ states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated corrupted ‘high’ states comprising neurons of both excitatory populations. Within a basin of attraction, the network dynamics corrects such states and re-establishes the prototypical ‘high’ state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  11. Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI.

    Science.gov (United States)

    Giulioni, Massimiliano; Camilleri, Patrick; Mattia, Maurizio; Dante, Vittorio; Braun, Jochen; Del Giudice, Paolo

    2011-01-01

    We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of "high" and "low"-firing activity. Depending on the overall excitability, transitions to the "high" state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the "high" state retains a "working memory" of a stimulus until well after its release. In the latter case, "high" states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated "corrupted" "high" states comprising neurons of both excitatory populations. Within a "basin of attraction," the network dynamics "corrects" such states and re-establishes the prototypical "high" state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  12. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI.

    Science.gov (United States)

    Yu, T; Sejnowski, T J; Cauwenberghs, G

    2011-10-01

    We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.

  13. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Directory of Open Access Journals (Sweden)

    Alistair McEwan

    2003-06-01

    Full Text Available The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  14. A neuromorphic VLSI device for implementing 2-D selective attention systems.

    Science.gov (United States)

    Indiveri, G

    2001-01-01

    Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding limited processing capacity systems with sensory inputs. It is found in many biological systems and can be a useful engineering tool for developing artificial systems that need to process in real-time sensory data. In this paper we present a neuromorphic hardware model of a selective attention mechanism implemented on a very large scale integration (VLSI) chip, using analog circuits. The chip makes use of a spike-based representation for receiving input signals, transmitting output signals and for shifting the selection of the attended input stimulus over time. It can be interfaced to neuromorphic sensors and actuators, for implementing multichip selective attention systems. We describe the characteristics of the circuits used in the architecture and present experimental data measured from the system.

  15. Implementation of neuromorphic systems: from discrete components to analog VLSI chips (testing and communication issues).

    Science.gov (United States)

    Dante, V; Del Giudice, P; Mattia, M

    2001-01-01

    We review a series of implementations of electronic devices aiming at imitating to some extent structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a special purpose, asynchronous bus for communication among neuromorphic chips; a short and preliminary account of an application-oriented device, taking advantage of the above communication infrastructure.

  16. A Single Chip VLSI Implementation of a QPSK/SQPSK Demodulator for a VSAT Receiver Station

    Science.gov (United States)

    Kwatra, S. C.; King, Brent

    1995-01-01

    This thesis presents a VLSI implementation of a QPSK/SQPSK demodulator. It is designed to be employed in a VSAT earth station that utilizes the FDMA/TDM link. A single chip architecture is used to enable this chip to be easily employed in the VSAT system. This demodulator contains lowpass filters, integrate and dump units, unique word detectors, a timing recovery unit, a phase recovery unit and a down conversion unit. The design stages start with a functional representation of the system by using the C programming language. Then it progresses into a register based representation using the VHDL language. The layout components are designed based on these VHDL models and simulated. Component generators are developed for the adder, multiplier, read-only memory and serial access memory in order to shorten the design time. These sub-components are then block routed to form the main components of the system. The main components are block routed to form the final demodulator.
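
    The demodulator blocks listed above (lowpass filtering, integrate-and-dump, symbol decisions) have a simple baseband software analogue. The Python sketch below demonstrates only the integrate-and-dump detection of QPSK with rectangular pulses and ideal timing/phase; carrier recovery, unique-word detection and the SQPSK offset are deliberately omitted, and all parameters are made up.

        import numpy as np

        rng = np.random.default_rng(5)

        # Baseband QPSK sketch: map bits to symbols, add noise, then recover the
        # bits with an integrate-and-dump detector.
        sps = 8                                   # samples per symbol
        bits = rng.integers(0, 2, size=200)
        sym = (1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])   # Gray-mapped QPSK
        tx = np.repeat(sym, sps)                  # rectangular pulse shaping
        rx = tx + 0.3 * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size))

        # Integrate-and-dump: average each symbol interval, then slice I and Q
        est = rx.reshape(-1, sps).mean(axis=1)
        bits_hat = np.empty_like(bits)
        bits_hat[0::2] = (est.real < 0).astype(int)
        bits_hat[1::2] = (est.imag < 0).astype(int)
        print("bit errors:", np.count_nonzero(bits != bits_hat), "of", bits.size)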

  17. Digital VLSI design with Verilog a textbook from Silicon Valley Polytechnic Institute

    CERN Document Server

    Williams, John Michael

    2014-01-01

    This book is structured as a step-by-step course of study along the lines of a VLSI integrated circuit design project.  The entire Verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer-deserializer, including synthesizable PLLs.  The author includes everything an engineer needs for in-depth understanding of the Verilog language:  Syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided in the downloadable files that accompany the book.  For readers with access to appropriate electronic design tools, all solutions can be developed, simulated, and synthesized as described in the book.   A partial list of design topics includes design partitioning, hierarchy decomposition, safe coding styles, back annotation, wrapper modules, concurrency, race conditions, assertion-based verification, clock synchronization, and design for test.   A concluding presentation of special topics inclu...

  18. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In optimization of VLSI physical design, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for physical design components such as partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits across these physical design components using a hierarchical approach based on evolutionary algorithms. The goals of minimizing delay in partitioning, silicon area in floorplanning, layout area in placement, and wirelength in routing also influence other criteria such as power, clock, speed, cost, and so forth. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycles, is applied to each of these phases to minimize area and interconnect length. This approach combines a genetic algorithm with simulated annealing to attain the objective, and can quickly produce optimal solutions for the popular benchmarks.
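
    As a toy illustration of the hybrid evolutionary idea (an evolutionary loop with an annealing-style local search inside it), the Python sketch below minimises the Manhattan wirelength of a random two-pin netlist placed on a small grid. It uses mutation plus local search only, without crossover, and every constant is arbitrary; it is not the algorithm or benchmark set of the paper.

        import random, math

        random.seed(6)

        # Toy placement problem: assign N cells to slots of a GRID x GRID array so
        # that the total Manhattan wirelength of random two-pin nets is minimised.
        N, GRID = 36, 6
        NETS = [(random.randrange(N), random.randrange(N)) for _ in range(80)]

        def wirelength(perm):
            # perm[cell] = slot index; slot -> (row, col) via divmod
            total = 0
            for a, b in NETS:
                (r1, c1), (r2, c2) = divmod(perm[a], GRID), divmod(perm[b], GRID)
                total += abs(r1 - r2) + abs(c1 - c2)
            return total

        def local_search(perm, temp=2.0, steps=200):
            """Simulated-annealing-style refinement used inside the evolutionary loop."""
            best, cost = perm[:], wirelength(perm)
            for _ in range(steps):
                i, j = random.sample(range(N), 2)
                best[i], best[j] = best[j], best[i]          # trial swap
                new = wirelength(best)
                if new <= cost or random.random() < math.exp((cost - new) / temp):
                    cost = new
                else:
                    best[i], best[j] = best[j], best[i]      # undo the swap
                temp *= 0.99
            return best, cost

        # Hybrid loop: keep the best placements, mutate them, refine with local search
        pop = [random.sample(range(N), N) for _ in range(20)]
        for gen in range(30):
            scored = sorted((wirelength(p), p) for p in pop)
            parents = [p for _, p in scored[:10]]
            children = []
            for p in parents:
                child = p[:]
                i, j = random.sample(range(N), 2)
                child[i], child[j] = child[j], child[i]      # mutation
                child, _ = local_search(child)
                children.append(child)
            pop = parents + children
        print("best wirelength:", min(wirelength(p) for p in pop))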

  19. Design of 10Gbps optical encoder/decoder structure for FE-OCDMA system using SOA and opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Hwang, Seow; Alameh, Kamal

    2008-01-21

    In this paper we propose and experimentally demonstrate a reconfigurable 10Gbps frequency-encoded (1D) encoder/decoder structure for optical code division multiple access (OCDMA). The encoder is constructed using a single semiconductor optical amplifier (SOA) and a 1D reflective Opto-VLSI processor. The SOA generates broadband amplified spontaneous emission that is dynamically sliced using digital phase holograms loaded onto the Opto-VLSI processor to generate 1D codewords. The selected wavelengths are injected back into the same SOA for amplification. The decoder is constructed using a single Opto-VLSI processor only. The encoded signal can successfully be retrieved at the decoder side only when the digital phase holograms of the encoder and the decoder are matched. The system performance is measured in terms of the auto-correlation and cross-correlation functions as well as the eye diagram.

  20. Preliminary results from the Los Alamos TA54 complex terrain Atmospheric Transport Study (ATS)

    Energy Technology Data Exchange (ETDEWEB)

    Vold, E.; Chan, M.; Sanders, L.

    1995-09-01

    The Los Alamos National Laboratory (LANL) Low-Level Radioactive Waste (LLRW) disposal site at TA54, Area G is located on a mesa top amidst a complex terrain of finger-like mesas typically 30 meters or more in height above canyons of widths varying from 100 to 300 meters. Atmospheric dispersion from this site is of concern for routine operations and for potential incidents during waste retrieval operations. Indian lands are located in the dominant downwind direction within 500 m from the site and provide further incentive to understand the potential and actual impacts of waste disposal operations. The permanent network of meteorological towers at LANL has been located primarily at mesa-top locations to coincide with most laboratory facilities and as such does not resolve the effects of channeling in the canyons and the influence this has on potential surface releases. An Atmospheric Transport Study (ATS) was initiated to better understand the wind flow fields and dispersion from the LANL Waste Storage and Disposal facilities at TA-54, Area G. As part of this effort, a series of six portable meteorological towers were sited in the vicinity of Area G, two at mesa top locations, one just east of the site where the mesas have dissipated to mild ridges, and three in the canyons adjacent to the disposal site mesa as indicated on the topographic representation of the local terrain. Since 1994, the towers have collected horizontal wind velocities, pressure, temperature, relative humidity and a gamma radiation reading every fifteen minutes. The data base is being analyzed for trends and to provide a basis for comparison to computational modeling efforts to predict the flow fields.

  1. Preliminary results from the Los Alamos TA54 complex terrain Atmospheric Transport Study (ATS)

    International Nuclear Information System (INIS)

    Vold, E.; Chan, M.; Sanders, L.

    1995-01-01

    The Los Alamos National Laboratory (LANL) Low-Level Radioactive Waste (LLRW) disposal site at TA54, Area G is located on a mesa top amidst a complex terrain of finger-like mesas typically 30 meters or more in height above canyons of widths varying from 100 to 300 meters. Atmospheric dispersion from this site is of concern for routine operations and for potential incidents during waste retrieval operations. Indian lands are located in the dominant downwind direction within 500 m from the site and provide further incentive to understand the potential and actual impacts of waste disposal operations. The permanent network of meteorological towers at LANL has been located primarily at mesa-top locations to coincide with most laboratory facilities and as such does not resolve the effects of channeling in the canyons and the influence this has on potential surface releases. An Atmospheric Transport Study (ATS) was initiated to better understand the wind flow fields and dispersion from the LANL Waste Storage and Disposal facilities at TA-54, Area G. As part of this effort, a series of six portable meteorological towers were sited in the vicinity of Area G, two at mesa top locations, one just east of the site where the mesas have dissipated to mild ridges, and three in the canyons adjacent to the disposal site mesa as indicated on the topographic representation of the local terrain. Since 1994, the towers have collected horizontal wind velocities, pressure, temperature, relative humidity and a gamma radiation reading every fifteen minutes. The data base is being analyzed for trends and to provide a basis for comparison to computational modeling efforts to predict the flow fields.

  2. Potassium-argon age determination of crystalline complexes of West Carpathians and preliminary result interpretation

    International Nuclear Information System (INIS)

    Bagdasaryan, G.P.; Gukasyan, P.Kh.; Veselsky, I.

    1977-01-01

    Results obtained using the K-Ar method, compared with results obtained by other radiometric and palaeontological methods, in general confirm the Palaeozoic age of crystalline rocks in the Western Carpathians. The existence of Precambrian rocks in this region may be assumed, although there is still no geochronological evidence for this. The solution of this problem will also require Rb-Sr isochron and U-Th-Pb absolute dating. (author)

  3. CT demonstration of chicken trachea resulting from complete cartilaginous rings of the trachea in ring-sling complex

    International Nuclear Information System (INIS)

    Calcagni, Giulio; Bonnet, Damien; Sidi, Daniel; Brunelle, Francis; Vouhe, Pascal; Ou, Phalla

    2008-01-01

    We report a 10-month-old infant who presented with tetralogy of Fallot and respiratory disease in whom the suspicion of a ring-sling complex was confirmed by high-resolution CT. CT demonstrated the typical association of left pulmonary artery sling and the ''chicken trachea'' resulting from complete cartilaginous rings of the trachea. (orig.)

  4. CT demonstration of chicken trachea resulting from complete cartilaginous rings of the trachea in ring-sling complex

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Giulio; Bonnet, Damien; Sidi, Daniel [University Paris Descartes, Department of Paediatric Cardiology, Hopital Necker-Enfants Malades, AP-HP, Paris (France); Brunelle, Francis [University Paris Descartes, Department of Paediatric Radiology, Hopital Necker-Enfants Malades, AP-HP, Paris Cedex 15 (France); Vouhe, Pascal [University Paris Descartes, Department of Paediatric Cardiovascular Surgery, Hopital Necker-Enfants Malades, AP-HP, Paris (France); Ou, Phalla [University Paris Descartes, Department of Paediatric Cardiology, Hopital Necker-Enfants Malades, AP-HP, Paris (France); University Paris Descartes, Department of Paediatric Radiology, Hopital Necker-Enfants Malades, AP-HP, Paris Cedex 15 (France)

    2008-07-15

    We report a 10-month-old infant who presented with tetralogy of Fallot and respiratory disease in whom the suspicion of a ring-sling complex was confirmed by high-resolution CT. CT demonstrated the typical association of left pulmonary artery sling and the ''chicken trachea'' resulting from complete cartilaginous rings of the trachea. (orig.)

  5. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    Science.gov (United States)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) Computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas, and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 2) To develop a coronal

  6. Complex dynamics of memristive circuits: Analytical results and universal slow relaxation

    Science.gov (United States)

    Caravelli, F.; Traversa, F. L.; Di Ventra, M.

    2017-02-01

    Networks with memristive elements (resistors with memory) are being explored for a variety of applications ranging from unconventional computing to models of the brain. However, analytical results that highlight the role of the graph connectivity on the memory dynamics are still few, thus limiting our understanding of these important dynamical systems. In this paper, we derive an exact matrix equation of motion that takes into account all the network constraints of a purely memristive circuit, and we employ it to derive analytical results regarding its relaxation properties. We are able to describe the memory evolution in terms of orthogonal projection operators onto the subspace of fundamental loop space of the underlying circuit. This orthogonal projection explicitly reveals the coupling between the spatial and temporal sectors of the memristive circuits and compactly describes the circuit topology. For the case of disordered graphs, we are able to explain the emergence of a power-law relaxation as a superposition of exponential relaxation times with a broad range of scales using random matrices. This power law is also universal, namely independent of the topology of the underlying graph but dependent only on the density of loops. In the case of circuits subject to alternating voltage instead, we are able to obtain an approximate solution of the dynamics, which is tested against a specific network topology. These results suggest a much richer dynamics of memristive networks than previously considered.

  7. The modality and results of complex treatment of extended retinoblastoma in children

    International Nuclear Information System (INIS)

    Belkina, B.M.; Durnov, L.A.; Polyakov, V.G.; Goldobenko, G.V.; Glekov, I.V.; Ushakova, T.L.

    1997-01-01

    An analysis of the results of combined treatment of retinoblastoma in children, according to the program developed at the Scientific and Research Institute of Pediatric Oncology and Hematology, is performed. The treatment program makes it possible in many cases to avoid unjustified removal of the eye. The combination of treatment methods (surgery, radiotherapy and chemotherapy) and their sequence depend on the staging of retinoblastoma development according to the TNM system. Five-year survival for monoretinoblastoma with a surgical operation at the first stage and without one is 92% and 82%, respectively, while for double retinoblastoma it is 83% and 84%.

  8. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physical Based Models

    Science.gov (United States)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

    The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem concerning the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black-box models, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable to study complex underlying processes and to predict future environmental changes. The increase of climate change adaptation efforts following the release of the latest IPCC report indicates that the communication of facts about what has already changed is an appropriate tool to trigger climate change adaptation. Therefore we suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier to comprehend for non-scientific stakeholders. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion about lag times between the hydrological parameters can increase the awareness of local stakeholders of delayed environmental responses.
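
    The kind of empirical time-series summary advocated above can be produced with very little code. The Python sketch below fits a linear trend and a mean seasonal cycle to a synthetic monthly discharge series; with real station data the synthetic series would simply be replaced by the observations.

        import numpy as np

        rng = np.random.default_rng(7)

        # Illustrative monthly discharge series (synthetic): a seasonal cycle, a
        # weak declining trend and noise. Real station data would replace this.
        months = np.arange(240)                        # 20 years of monthly values
        q = (10 + 3 * np.sin(2 * np.pi * months / 12)  # seasonality
             - 0.01 * months                           # slow decline
             + rng.normal(scale=1.0, size=months.size))

        # Linear trend via least squares and the long-term mean seasonal pattern
        slope, intercept = np.polyfit(months, q, 1)
        seasonal = q.reshape(-1, 12).mean(axis=0)      # mean value per calendar month

        print("trend: %.3f units per month (%.2f per decade)" % (slope, slope * 120))
        print("wettest calendar month index:", int(seasonal.argmax()))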

  9. Digital VLSI systems design a design manual for implementation of projects on FPGAs and ASICs using Verilog

    CERN Document Server

    Ramachandran, S

    2007-01-01

    Digital VLSI Systems Design is written for an advanced level course using Verilog and is meant for undergraduates, graduates and research scholars of Electrical, Electronics, Embedded Systems, Computer Engineering and interdisciplinary departments such as Bio Medical, Mechanical, Information Technology, Physics, etc. It serves as a reference design manual for practicing engineers and researchers as well. Diligent freelance readers and consultants may also start using this book with ease. The book presents new material and theory as well as synthesis of recent work with complete Project Designs

  10. State-of-the-art assessment of testing and testability of custom LSI/VLSI circuits. Volume 8: Fault simulation

    Science.gov (United States)

    Breuer, M. A.; Carlan, A. J.

    1982-10-01

    Fault simulation is widely used by industry in such applications as scoring the fault coverage of test sequences and constructing fault dictionaries. For testing VLSI circuits, a simulator is evaluated by its accuracy, i.e., its modelling capability: the multi-valued logic it employs to represent unknown signal values, high impedance, signal transitions, etc.; the circuit delays it models, such as transport, rise/fall and inertial delays; and the fault modes it is capable of handling. Of the three basic fault simulation techniques now in use (parallel, deductive and concurrent), concurrent fault simulation appears the most promising.
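
    The fault-coverage scoring mentioned above can be illustrated with a toy serial fault simulator: inject one stuck-at fault at a time into a small gate-level netlist, simulate the test vectors, and count the faults whose outputs differ from the fault-free circuit. The netlist, test set and two-valued logic below are assumptions made purely for the illustration; a production simulator would use multi-valued logic, delay models and a parallel, deductive or concurrent algorithm as discussed in the record.

```python
# Minimal serial stuck-at fault simulator for a tiny, made-up combinational netlist.
GATES = [            # (output net, gate type, input nets), in topological order
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("b", "c")),
    ("y",  "XOR", ("n1", "n2")),
]
PRIMARY_INPUTS = ("a", "b", "c")
PRIMARY_OUTPUTS = ("y",)

def evaluate(vector, fault=None):
    """Simulate one input vector; fault = (net, stuck_value) or None."""
    values = dict(zip(PRIMARY_INPUTS, vector))
    if fault and fault[0] in values:            # stuck-at fault on a primary input
        values[fault[0]] = fault[1]
    ops = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y,
           "XOR": lambda x, y: x ^ y}
    for out, typ, ins in GATES:
        values[out] = ops[typ](values[ins[0]], values[ins[1]])
        if fault and out == fault[0]:           # stuck-at fault on an internal net
            values[out] = fault[1]
    return tuple(values[o] for o in PRIMARY_OUTPUTS)

# Fault list: every net stuck-at-0 and stuck-at-1.
nets = list(PRIMARY_INPUTS) + [g[0] for g in GATES]
faults = [(n, v) for n in nets for v in (0, 1)]

tests = [(0, 1, 1), (1, 1, 0), (1, 0, 1)]       # candidate test set to be scored
detected = {f for f in faults for t in tests if evaluate(t) != evaluate(t, f)}
print(f"fault coverage: {len(detected)}/{len(faults)} "
      f"= {100 * len(detected) / len(faults):.0f}%")
```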

  11. [Chronic complex tinnitus: therapeutic results of inpatient treatment in a tinnitus clinic].

    Science.gov (United States)

    Hesse, G; Rienhoff, N K; Nelting, M; Laubert, A

    2001-09-01

    In-patient treatment of patients with chronic tinnitus is necessary only when these patients have severe psychosomatic co-morbidity and suffer severely. However, this therapeutic approach has to be supervised and evaluated properly. We present data and results of 1841 patients suffering from chronic tinnitus. Due to the severity of the symptom and psycho-neurotic side effects, in-patient treatment was necessary. Therapy lasted 5 - 6 weeks; the main focus was an intensive psychotherapeutic evaluation and stabilisation next to retraining and habituation programmes. Relaxation techniques were taught. Patients had suffered from their tinnitus for more than six months; 95 % additionally suffered from hearing loss, mainly in the high frequencies. The study evaluates results of patients from October 1994 until June 2000. Basis of the study was a specific tinnitus questionnaire (TQ), published by Hallam in the UK and translated by Goebel and Hiller in Germany. Data were recorded at registration in our clinic, 4 - 6 months later at admission, and at the end of the therapy. Final data were gathered during a special meeting or by follow-up questioning 6 months after discharge from the clinic. Patients who suffered most showed the greatest improvement; directly after therapy there was a highly significant improvement in the TQ of 13.01 points on average. Highly significant improvements were also found in all TQ subscales. Only 10 % of the patients did not show any improvement at all. Therapy of the most severe cases of chronic tinnitus is possible using an integrated concept of otologic and psychosomatic treatments. With large numbers of patients and sufficient data, a thorough and necessary evaluation of this therapy can be achieved.

  12. Maintenance after a complex orthoperio treatment in a case of generalized aggressive periodontitis: 7-year result.

    Science.gov (United States)

    Zafiropoulos, Gregory-George; di Prisco, Manuela Occipite; Deli, Giorgio; Hoffmann, Oliver; Kasaj, Adrian

    2010-10-01

    Generalized aggressive periodontitis (GAgP) encompasses a distinct type of periodontal disease exhibiting much more rapid periodontal tissue destruction than chronic periodontitis. The best method for management of GAgP may include the use of both regenerative periodontal techniques and the administration of systemic antibiotics. The treatment of a case of GAgP over a period of 6.7 years is presented in this case report. Initial periodontal therapy (weeks 1-32) consisted of supragingival plaque control and three appointments of scaling and root planing. Based on the periodontal pathogens isolated (5 species), the patient also received metronidazole plus amoxicillin for one week, followed 10 weeks later by metronidazole plus amoxicillin/clavulanate for one week. The patient was put on regular supportive periodontal therapy (SPT) thereafter. Orthodontic treatment was performed after completion of the initial therapy for 96 weeks. Measurements of clinical attachment level, bleeding on probing and plaque index were obtained at every examination. Antimicrobial and mechanical treatment resulted in eradication of all periopathogens and significantly improved all clinical parameters. During orthodontic treatment and active maintenance, there was no relapse of GAgP. The patient participated in SPT for 194 weeks and thereafter decided to discontinue SPT. Twenty-four months later a relapse of GAgP was diagnosed and all teeth had to be extracted. These results indicate that a combined mechanical and antimicrobial treatment approach can lead to consistent resolution of GAgP. Further studies including a larger number of cases are warranted to validate these findings.

  13. Sorption of phosphate onto calcite; results from batch experiments and surface complexation modeling

    DEFF Research Database (Denmark)

    Sø, Helle Ugilt; Postma, Dieke; Jakobsen, Rasmus

    2011-01-01

    The adsorption of phosphate onto calcite was studied in a series of batch experiments. To avoid the precipitation of phosphate-containing minerals, the experiments were conducted using a short reaction time (3 h) and low concentrations of phosphate (⩽50 μM). Sorption of phosphate on calcite was studied under conditions of a high degree of super-saturation with respect to hydroxyapatite (SIHAP ⩽ 7.83). The amount of phosphate adsorbed varied with the solution composition; in particular, adsorption increases as the CO32- activity decreases (at constant pH) and as pH increases (at constant CO32- activity). The primary effect of ionic strength on phosphate sorption onto calcite is its influence on the activity of the different aqueous phosphate species. The experimental results were modeled satisfactorily using the constant capacitance model with >CaPO4Ca0 and either >CaHPO4Ca+ or >CaHPO4- as the adsorbed surface species.

  14. Results of Rb-Sr dating of metamorphic rocks of crystalline complexes of Male Karpaty Mts

    International Nuclear Information System (INIS)

    Bagdasaryan, G.P.; Gukasyan, P.Kh.; Cambel, B.; Veselsky, J.

    1983-01-01

    The paper follows up on a recently published paper on Rb-Sr isochron dating of granitoid rocks of the Male Karpaty Mts. Data are given on a comparative statistical analysis of the isochrons obtained for the Bratislava and Modra massifs (the isochron of the latter is complemented with analyses of two new samples) and on the results of Rb-Sr isochron age determination of metasedimentary rocks of the Pezinok-Pernek zone and the Bratislava area. Regression analysis shows that there is no statistically significant difference between the age of the Bratislava massif (347±4 m.y.) and the Modra massif (326±22 m.y.) or between their initial 87Sr/86Sr ratios (i.e., they are synchronous, having the same magma source), which makes it possible to calculate a uniform age value. Whole-rock samples of metamorphic and crystalline schists (gneisses) of the Male Karpaty Mts. also define an isochron corresponding to an age of 387±38 m.y. (2σ) and an initial ratio (87Sr/86Sr) = 0.7100±0.0008 (2σ). Rb-Sr isotope analyses of several biotite-crystalline schist pairs (from which biotite was separated) indicate that redistribution of Sr isotopes among the mineral phases of the rocks takes place during periplutonic metamorphism, while the whole-rock samples remain chemically closed systems. (author)
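
    For readers unfamiliar with the method, a whole-rock Rb-Sr isochron is a straight-line fit of measured 87Sr/86Sr against 87Rb/86Sr: the slope gives the age via t = ln(1 + slope)/λ and the intercept gives the initial 87Sr/86Sr ratio. The sketch below uses invented sample values and the conventional decay constant λ(87Rb) ≈ 1.42×10⁻¹¹ yr⁻¹; it does not reproduce the paper's data or its regression treatment.

```python
import numpy as np

# Isochron relation: (87Sr/86Sr) = (87Sr/86Sr)_0 + (87Rb/86Sr) * (exp(lam*t) - 1),
# so the slope of the whole-rock isochron gives the age t = ln(1 + slope) / lam.
LAMBDA_RB87 = 1.42e-11          # conventional 87Rb decay constant, 1/yr

# Hypothetical whole-rock measurements (87Rb/86Sr, 87Sr/86Sr) -- not the paper's data.
rb_sr = np.array([0.5, 1.2, 2.0, 3.1, 4.5])
sr_sr = np.array([0.7125, 0.7159, 0.7199, 0.7253, 0.7322])

slope, initial_ratio = np.polyfit(rb_sr, sr_sr, 1)
age_my = np.log1p(slope) / LAMBDA_RB87 / 1e6

print(f"slope = {slope:.5f}, initial 87Sr/86Sr = {initial_ratio:.4f}")
print(f"isochron age ~ {age_my:.0f} Myr")
```

    With the invented values above the fitted age comes out near 350 Myr; real isochron work would also propagate the 2σ uncertainties, which this sketch omits.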

  15. Age, stress, and emotional complexity: results from two studies of daily experiences.

    Science.gov (United States)

    Scott, Stacey B; Sliwinski, Martin J; Mogle, Jacqueline A; Almeida, David M

    2014-09-01

    Experiencing positive and negative emotions together (i.e., co-occurrence) has been described as a marker of positive adaptation during stress and a strength of socioemotional aging. Using data from daily diary (N = 2,022; ages 33-84) and ecological momentary assessment (N = 190; ages 20-80) studies, we evaluate the utility of a common operationalization of co-occurrence, the within-person correlation between positive affect (PA) and negative affect (NA). Then we test competing predictions regarding when co-occurrence will be observed and whether age differences will be present. Results indicate that the correlation is not an informative indicator of co-occurrence. Although correlations were stronger and more negative when stressors occurred (typically interpreted as lower co-occurrence), objective counts of emotion reports indicated that positive and negative emotions were 3 to 4 times more likely to co-occur when stressors were reported. This suggests that co-occurrence reflects the extent to which negative emotions intrude on typically positive emotional states, rather than the extent to which people maintain positive emotions during stress. The variances of both PA and NA increased at stressor reports, indicating that individuals reported a broader, not narrower, range of emotion during stress. Finally, older age was associated with less variability in NA and a lower likelihood of co-occurring positive and negative emotions. In sum, these findings cast doubt on the utility of the PA-NA correlation as an index of emotional co-occurrence and question the notion that greater emotional co-occurrence represents either a typical or adaptive emotional state in adults.
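
    The contrast between the two operationalizations can be made concrete with simulated momentary data: the within-person PA-NA correlation can be strongly negative even while the count of moments in which both emotions are clearly present is far higher under stress. The rating scale, thresholds and stressor effects in the sketch below are invented for illustration and are not the study's variables.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated momentary reports for one person (1-5 rating scales); purely illustrative.
n = 200
stressor = rng.random(n) < 0.25
na = np.clip(1 + 2.5 * stressor + 0.6 * rng.standard_normal(n), 1, 5)
pa = np.clip(4 - 1.0 * stressor + 0.6 * rng.standard_normal(n), 1, 5)

# Operationalization 1: within-person PA-NA correlation.
r = np.corrcoef(pa, na)[0, 1]

# Operationalization 2: co-occurrence = both PA and NA clearly present (> 2 here).
both = (pa > 2) & (na > 2)
rate_stress = both[stressor].mean()
rate_calm = both[~stressor].mean()

print(f"PA-NA correlation: {r:.2f}")
print(f"co-occurrence rate: {rate_stress:.2f} with stressor, {rate_calm:.2f} without")
```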

  16. Age, Stress, and Emotional Complexity: Results from Two Studies of Daily Experiences

    Science.gov (United States)

    Scott, Stacey B.; Sliwinski, Martin J.; Mogle, Jacqueline A.; Almeida, David M.

    2014-01-01

    Experiencing positive and negative emotions together (i.e., co-occurrence) has been described as a marker of positive adaptation during stress and a strength of socio-emotional aging. Using data from daily diary (N=2,022; ages 33-84) and ecological momentary assessment (N=190; ages 20-80) studies, we evaluate the utility of a common operationalization of co-occurrence, the within-person correlation between positive affect (PA) and negative affect (NA). Then we test competing predictions regarding when co-occurrence will be observed and whether age differences will be present. Results indicate that the correlation is not an informative indicator of co-occurrence. Although correlations were stronger and more negative when stressors occurred (typically interpreted as lower co-occurrence), objective counts of emotion reports indicated that positive and negative emotions were 3 to 4 times more likely to co-occur when stressors were reported. This suggests that co-occurrence reflects the extent to which negative emotions intrude on typically positive emotional states, rather than the extent to which people maintain positive emotions during stress. The variances of both PA and NA increased at stressor reports, indicating that individuals reported a broader, not narrower, range of emotion during stress. Finally, older age was associated with less variability in NA and a lower likelihood of co-occurring positive and negative emotions. In sum, these findings cast doubt on the utility of the PA-NA correlation as an index of emotional co-occurrence and question the notion that greater emotional co-occurrence represents either a typical or adaptive emotional state in adults. PMID:25244477

  17. Latest Results on Complex Plasmas with the PK-3 Plus Laboratory on Board the International Space Station

    Science.gov (United States)

    Schwabe, M.; Du, C.-R.; Huber, P.; Lipaev, A. M.; Molotkov, V. I.; Naumkin, V. N.; Zhdanov, S. K.; Zhukhovitskii, D. I.; Fortov, V. E.; Thomas, H. M.

    2018-03-01

    Complex plasmas are low temperature plasmas that contain microparticles in addition to ions, electrons, and neutral particles. The microparticles acquire high charges, interact with each other and can be considered as model particles for effects in classical condensed matter systems, such as crystallization and fluid dynamics. In contrast to atoms in ordinary systems, their movement can be traced on the most basic level, that of individual particles. In order to avoid disturbances caused by gravity, experiments on complex plasmas are often performed under microgravity conditions. The PK-3 Plus Laboratory was operated on board the International Space Station from 2006 to 2013. Its heart consisted of a capacitively coupled radio-frequency plasma chamber. Microparticles were inserted into the low-temperature plasma, forming large, homogeneous complex plasma clouds. Here, we review the results obtained with recent analyses of PK-3 Plus data: We study the formation of crystallization fronts, as well as the microparticle motion in, and structure of, crystalline complex plasmas. We investigate fluid effects such as wave transmission across an interface, and the development of the energy spectra during the onset of turbulent microparticle movement. We explore how abnormal particles move through, and how macroscopic spheres interact with, the microparticle cloud. These examples demonstrate the versatility of the PK-3 Plus Laboratory.

  18. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

    HADES experiment at GSI is the only high precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  19. Structural plasticity: how intermetallics deform themselves in response to chemical pressure, and the complex structures that result.

    Science.gov (United States)

    Berns, Veronica M; Fredrickson, Daniel C

    2014-10-06

    Interfaces between periodic domains play a crucial role in the properties of metallic materials, as is vividly illustrated by the way in which the familiar malleability of many metals arises from the formation and migration of dislocations. In complex intermetallics, such interfaces can occur as an integral part of the ground-state crystal structure, rather than as defects, resulting in such marvels as the NaCd2 structure (whose giant cubic unit cell contains more than 1000 atoms). However, the sources of the periodic interfaces in intermetallics remain mysterious, unlike the dislocations in simple metals, which can be associated with the exertion of physical stresses. In this Article, we propose and explore the concept of structural plasticity, the hypothesis that interfaces in complex intermetallic structures similarly result from stresses, but ones that are inherent in a defect-free parent structure, rather than being externally applied. Using DFT-chemical pressure analysis, we show how the complex structures of Ca2Ag7 (Yb2Ag7 type), Ca14Cd51 (Gd14Ag51 type), and the 1/1 Tsai-type quasicrystal approximant CaCd6 (YCd6 type) can all be traced to large negative pressures around the Ca atoms of a common progenitor structure, the CaCu5 type with its simple hexagonal 6-atom unit cell. Two structural paths are found by which the compounds provide relief to the Ca atoms' negative pressures: a Ca-rich pathway, where lower coordination numbers are achieved through defects eliminating transition metal (TM) atoms from the structure; and a TM-rich path, along which the addition of spacer Cd atoms provides the Ca coordination environments greater independence from each other as they contract. The common origins of these structures in the presence of stresses within a single parent structure highlights the diverse paths by which intermetallics can cope with competing interactions, and the role that structural plasticity may play in navigating this diversity.

  20. Results from 10 Years of a CBT Pain Self-Management Outpatient Program for Complex Chronic Conditions

    Directory of Open Access Journals (Sweden)

    Kathryn A. Boschen

    2016-01-01

    Background. Traditional unimodal interventions may be insufficient for treating complex pain, as they do not address cognitive and behavioural contributors to pain. Cognitive Behavioural Therapy (CBT) and physical exercise (PE) are empirically supported treatments that can reduce pain and improve quality of life. Objectives. To examine the outcomes of a pain self-management outpatient program based on CBT and PE at a rehabilitation hospital in Toronto, Ontario. Methods. The pain management group (PMG) consisted of 20 sessions over 10 weeks. The intervention consisted of four components: education, cognitive behavioural skills, exercise, and self-management strategies. Outcome measures included the sensory, affective, and intensity dimensions of the pain experience, depression, anxiety, pain disability, active and passive coping style, and general health functioning. Results. From 2002 to 2011, 36 PMGs were run. In total, 311 patients entered the program and 214 completed it. Paired t-tests showed significant pre- to posttreatment improvements in all outcomes measured. Patient outcomes did not differ according to the number or type of diagnoses. Both before and after treatment, women reported more active coping than men. Discussion. The PMGs improved pain self-management for patients with complex pain. Future research should use a randomized controlled design to better understand the outcomes of PMGs.

  1. The Results of Complex Selective Logging in Beech-Hornbeam Tree Stands of the Greater Caucasus in Azerbaijan

    Directory of Open Access Journals (Sweden)

    A. B. Yakhyaev

    2014-06-01

    The results of complex selective logging conducted in beech-hornbeam tree stands on the northeastern slope of the Greater Caucasus are analyzed in the paper. Experiments were carried out in two forestry districts in stands comprising 2–3 units of beech, on 30° slopes, in beech forest types with woodruff, fescue and forbs. It was found that, for recovering the main tree species and for increasing the productivity and sustainability of the beech-hornbeam stands spread over northern exposures, 2–3 repetitions of complex selective logging are recommended. To increase the share of beech in the stand composition to 6–8 units in young stands, and to 4–6 units on slopes of southern exposure, 3–4 thinning operations are recommended, with the beech share increasing to 4–5 units in the upper story and in the undergrowth.

  2. [Intramedullary stabilisation of displaced midshaft clavicular fractures: does the fracture pattern (simple vs. complex) influence the anatomic and functional result].

    Science.gov (United States)

    Langenhan, R; Reimers, N; Probst, A

    2014-12-01

    Displaced midshaft clavicular fractures are often treated operatively. The most common treatment is plating. Elastic stable intramedullary nailing (ESIN) is an alternative, but seldom used. Studies showed comparable or even better results for intramedullary nailing than for plating in simple 2- or 3-fragment midshaft fractures. The indication of ESIN for multifragmentary clavicular fractures is discussed critically in the literature because of reduced primary stability and danger of secondary shortening. Until now, only a few studies have reported functional results after fracture healing depending on the fracture type. To the best of our knowledge there is no study showing significantly worse functional scores for ESIN in complex displaced midshaft fractures. The objective of this study was to examine anatomic and functional results of simple (2 or 3 fragments, OTA type 15B1 and 15B2) and complex (multifragmentary, OTA type 15B3) displaced midshaft clavicular fractures after internal fixation. Between 2009 and 2012, 40 patients (female/male 10/30; mean age 33 [16-60] years) with closed displaced midshaft clavicular fractures were treated by open reduction and ESIN (Titanium Elastic Nail [TEN], Synthes, Umkirch, Germany). Thirty-seven patients were retrospectively analysed after a mean of 27 (12-43) months. Twenty patients (group A) had simple fractures (OTA type 15B1 and 15B2), and 17 patients (group B) had complex fractures (OTA type 15B3). All shoulder joints were treated functionally postoperatively for six weeks, without weight bearing and limited to 90° abduction/flexion. Both groups were comparable in gender, age, body mass index, months until metal removal, number of physiotherapy sessions and time until follow-up examination. Joint function (neutral zero method) and strength (standing patient with arm in 90° abduction, holding 1-12 kg for 5 sec) in both shoulders were documented. The distance between the centre of the jugulum and the lateral acromial border was measured for

  3. Estimation of global daily irradiation in complex topography zones using digital elevation models and meteosat images: Comparison of the results

    International Nuclear Information System (INIS)

    Martinez-Durban, M.; Zarzalejo, L.F.; Bosch, J.L.; Rosiek, S.; Polo, J.; Batlles, F.J.

    2009-01-01

    The knowledge of the solar irradiation in a certain place is fundamental for the suitable location of solar systems, both thermal and photovoltaic. On the local scale, the topography is the most important modulating factor of the solar irradiation on the surface. In this work the global daily irradiation is estimated concerning various sky conditions, in zones of complex topography. In order to estimate the global daily irradiation we use a methodology based on a Digital Terrain Model (DTM), on one hand making use of pyranometer measurements and on the other hand utilizing satellite images. We underline that DTM application employing pyranometer measurements produces better results than estimation using satellite images, though accuracy of the same order is obtained in both cases for Root Mean Square Error (RMSE) and Mean Bias Error (MBE).

  4. Estimation of global daily irradiation in complex topography zones using digital elevation models and meteosat images: Comparison of the results

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Durban, M. [Dpto. de Lenguajes y Computacion, Universidad de Almeria, 04120 Almeria (Spain); Zarzalejo, L.F.; Polo, J. [Dpto. de Energia, CIEMAT, 28040 Madrid (Spain); Bosch, J.L.; Rosiek, S.; Batlles, F.J. [Dpto. Fisica Aplicada, Universidad de Almeria, 04120 Almeria (Spain)

    2009-09-15

    The knowledge of the solar irradiation in a certain place is fundamental for the suitable location of solar systems, both thermal and photovoltaic. On the local scale, the topography is the most important modulating factor of the solar irradiation on the surface. In this work the global daily irradiation is estimated concerning various sky conditions, in zones of complex topography. In order to estimate the global daily irradiation we use a methodology based on a Digital Terrain Model (DTM), on one hand making use of pyranometer measurements and on the other hand utilizing satellite images. We underline that DTM application employing pyranometer measurements produces better results than estimation using satellite images, though accuracy of the same order is obtained in both cases for Root Mean Square Error (RMSE) and Mean Bias Error (MBE). (author)

  5. Influence of FGR complexity modelling on the practical results in gas pressure calculation of selected fuel elements from Dukovany NPP

    International Nuclear Information System (INIS)

    Lahodova, M.

    2001-01-01

    A modernized fuel system and advanced fuel for operation up to high burnup are currently used in the Dukovany NPP. Core reloads are evaluated using computer codes for the thermomechanical behavior of the most loaded fuel rods. The paper presents results of parametric calculations performed with the NRI Rez integral code PIN, version 2000 (PIN2k), to assess the influence of fission gas release (FGR) modelling complexity on the achieved results. Representative Dukovany NPP fuel rod irradiation history data are used, and two cases of fuel parameter variables (soft and hard) are chosen for the comparison. The FGR models involved were the GASREL diffusion model developed in the NRI Rez plc and the standard Weisman model recommended in the previous version of the PIN integral code. FGR calculation by PIN2k with the GASREL model gives more realistic results than the standard Weisman model. Results for linear power, fuel centre temperature, FGR and gas pressure versus burnup are given for two fuel rods.

  6. An area-efficient topology for VLSI implementation of Viterbi decoders and other shuffle-exchange type structures

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1991-01-01

    A topology for single-chip implementation of computing structures based on shuffle-exchange (SE)-type interconnection networks is presented. The topology is suited for structures with a small number of processing elements (i.e. 32-128) whose area cannot be neglected compared to the area required ... The topology has been used in a VLSI implementation of the add-compare-select (ACS) module of a fully parallel K=7, R=1/2 Viterbi decoder. Both the floor-planning issues and some of the important algorithm and circuit-level aspects of this design are discussed. The chip has been designed and fabricated in a 2 ... The interconnection network occupies 32% of the area.
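
    For context, the ACS module referred to above implements, in hardware, the add-compare-select recursion of the Viterbi algorithm: for every trellis state, add the branch metrics to the predecessor path metrics, compare the candidates, and select the survivor. The sketch below shows that recursion in software in a generic radix-2 form; it is not a description of the referenced chip's architecture.

```python
# Software sketch of the add-compare-select (ACS) recursion of a Viterbi decoder
# (generic form with hypothetical metrics; minimum metric = best path).
def acs(path_metrics, branch_metrics, predecessors):
    """One trellis step: for each state keep the best extended path metric.

    path_metrics   -- list of current metrics, one per state
    branch_metrics -- branch_metrics[s] = metrics of the branches entering state s
    predecessors   -- predecessors[s]   = source states of those branches
    """
    new_metrics, decisions = [], []
    for preds, bms in zip(predecessors, branch_metrics):
        candidates = [path_metrics[p] + bm for p, bm in zip(preds, bms)]  # add
        best = min(range(len(candidates)), key=candidates.__getitem__)    # compare
        new_metrics.append(candidates[best])                              # select
        decisions.append(preds[best])       # survivor pointer for trace-back
    return new_metrics, decisions

# Tiny 2-state example with made-up metrics.
pm, dec = acs([0, 3], [[1, 2], [0, 4]], [[0, 1], [0, 1]])
print(pm, dec)   # -> [1, 0] [0, 0]
```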

  7. 10 K gate I(2)L and 1 K component analog compatible bipolar VLSI technology - HIT-2

    Science.gov (United States)

    Washio, K.; Watanabe, T.; Okabe, T.; Horie, N.

    1985-02-01

    An advanced analog/digital bipolar VLSI technology that combines on the same chip 2-ns 10 K I(2)L gates with 1 K analog devices is proposed. The new technology, called high-density integration technology-2, is based on a new structure concept that consists of three major techniques: shallow grooved-isolation, I(2)L active layer etching, and I(2)L current gain increase. I(2)L circuits with 80-MHz maximum toggle frequency have been developed compatibly with n-p-n transistors having a BV(CE0) of more than 10 V and an f(T) of 5 GHz, and lateral p-n-p transistors having an f(T) of 150 MHz.

  8. Macrocell Builder: IP-Block-Based Design Environment for High-Throughput VLSI Dedicated Digital Signal Processing Systems

    Directory of Open Access Journals (Sweden)

    Urard Pascal

    2006-01-01

    We propose an efficient IP-block-based design environment for high-throughput VLSI systems. The flow generates SystemC register-transfer-level (RTL) architecture, starting from a Matlab functional model described as a netlist of functional IP. The refinement model automatically inserts control structures to manage delays induced by the use of RTL IPs. It also inserts a control structure to coordinate the execution of parallel clocked IP. The delays may be managed by registers or by counters included in the control structure. The flow has been used successfully in three real-world DSP systems. The experiments show that the approach can produce efficient RTL architectures and saves a huge amount of time.

  9. VLSI Implementation of Hybrid Wave-Pipelined 2D DWT Using Lifting Scheme

    Directory of Open Access Journals (Sweden)

    G. Seetharaman

    2008-01-01

    A novel approach is proposed in this paper for the implementation of 2D DWT using hybrid wave-pipelining (WP). A digital circuit may be operated at a higher frequency by using either pipelining or WP. Pipelining requires additional registers and results in more area, power dissipation and clock routing complexity. Wave-pipelining does not have any of these disadvantages but requires a complex trial-and-error procedure for tuning the clock period and clock skew between input and output registers. In this paper, a hybrid scheme is proposed to get the benefits of both pipelining and WP techniques. Two automation schemes are proposed for the implementation of 2D DWT using hybrid WP on both Xilinx and Altera FPGAs. In the first scheme, a built-in self-test (BIST) approach is used to choose the clock skew and clock period for I/O registers between the wave-pipelined blocks. In the second approach, an on-chip soft-core processor is used to choose the clock skew and clock period. The results for the hybrid WP are compared with nonpipelined and pipelined approaches. From the implementation results, the hybrid WP scheme requires the same area but is faster than the nonpipelined scheme by a factor of 1.25–1.39. The pipelined scheme is faster than the hybrid scheme by a factor of 1.15–1.39 at the cost of an increase in the number of registers by a factor of 1.78–2.73, an increase in the number of LEs by a factor of 1.11–1.32, and increased clock routing complexity.
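
    As background to the lifting scheme used above, a lifting step splits the signal into even and odd samples, predicts the odd samples from the even ones (the detail band), and then updates the even samples (the approximation band). The sketch below implements one level of the CDF 5/3 (LeGall) lifting transform in floating point with symmetric boundary extension, applied row-wise and then column-wise for a single 2-D level; it illustrates the algorithm only, not the paper's wave-pipelined hardware.

```python
import numpy as np

def dwt53_1d(x):
    """One level of the CDF 5/3 (LeGall) lifting transform on an even-length 1-D signal."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    even_next = np.append(even[1:], even[-1])          # symmetric extension at the right edge
    detail = odd - 0.5 * (even + even_next)            # predict step
    detail_prev = np.insert(detail[:-1], 0, detail[0]) # symmetric extension at the left edge
    approx = even + 0.25 * (detail_prev + detail)      # update step
    return approx, detail

def dwt53_2d(img):
    """One 2-D level: transform rows, then columns, giving four subbands."""
    lo_rows, hi_rows = zip(*(dwt53_1d(row) for row in img))
    lo_rows, hi_rows = np.array(lo_rows), np.array(hi_rows)
    ll, lh = map(np.array, zip(*(dwt53_1d(col) for col in lo_rows.T)))
    hl, hh = map(np.array, zip(*(dwt53_1d(col) for col in hi_rows.T)))
    return ll.T, lh.T, hl.T, hh.T

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = dwt53_2d(img)
print(ll.shape, lh.shape, hl.shape, hh.shape)   # four 4x4 subbands
```

    The integer-to-integer variant used in JPEG 2000 adds rounding inside the predict and update steps; it is omitted here to keep the example short.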

  10. Results of clinical approbation of new local treatment method in the complex therapy of inflammatory parodontium diseases

    Directory of Open Access Journals (Sweden)

    Yu. G. Romanova

    2017-08-01

    Treatment and prevention of inflammatory periodontal diseases are among the most difficult problems in stomatology today. Purpose of research: to estimate the clinical efficiency of the combined local application of the developed oral-care agent apigel and low-frequency electromagnetic field magnetotherapy in the treatment of inflammatory periodontal diseases. Materials and methods: 46 patients with chronic generalized catarrhal gingivitis and chronic generalized periodontitis of the 1st degree were included in the study. Patients were divided into 2 groups depending on treatment management: basic (n = 23) and control (n = 23). Conventional treatment with local use of a camomile dental gel was applied in the control group. Patients of the basic group were treated with combined local application of apigel and magnetotherapy. Efficiency was estimated with clinical, laboratory, microbiological and functional (ultrasonic Doppler) examination methods. Results: The application of apigel and a pulsating electromagnetic field in the complex treatment of patients with chronic generalized periodontitis (GhGP) caused positive changes in the clinical symptoms and condition of the periodontal tissues, accompanied by a decline of the hygienic and periodontal indexes. Compared with patients who received traditional anti-inflammatory therapy, patients treated with local application of apigel and magnetotherapy showed a lower incidence of edema. The decrease in pain correlated with improvement of the hygienic condition of the oral cavity and helped prevent bacterial contamination of damaged mucous membranes. Assessment of microvascular blood flow by ultrasonic Doppler flowmetry revealed more rapid normalization of the volumetric and linear systolic blood flow velocities in the periodontal tissues when the new complex local method was used. Conclusions: Effect of the developed local agent in patients

  11. Deletion of flbA results in increased secretome complexity and reduced secretion heterogeneity in colonies of Aspergillus niger.

    Science.gov (United States)

    Krijgsheld, Pauline; Nitsche, Benjamin M; Post, Harm; Levin, Ana M; Müller, Wally H; Heck, Albert J R; Ram, Arthur F J; Altelaar, A F Maarten; Wösten, Han A B

    2013-04-05

    Aspergillus niger is a cell factory for the production of enzymes. This fungus secretes proteins in the central part and at the periphery of the colony. The sporulating zone of the colony overlapped with the nonsecreting subperipheral zone, indicating that sporulation inhibits protein secretion. Indeed, strain ΔflbA that is affected early in the sporulation program secreted proteins throughout the colony. In contrast, the ΔbrlA strain that initiates but not completes sporulation did not show altered spatial secretion. The secretome of 5 concentric zones of xylose-grown ΔflbA colonies was assessed by quantitative proteomics. In total 138 proteins with a signal sequence for secretion were identified in the medium of ΔflbA colonies. Of these, 18 proteins had never been reported to be part of the secretome of A. niger, while 101 proteins had previously not been identified in the culture medium of xylose-grown wild type colonies. Taken together, inactivation of flbA results in spatial changes in secretion and in a more complex secretome. The latter may be explained by the fact that strain ΔflbA has a thinner cell wall compared to the wild type, enabling efficient release of proteins. These results are of interest to improve A. niger as a cell factory.

  12. In Vitro Interactions between 17β-Estradiol and DNA Result in Formation of the Hormone-DNA Complexes

    Directory of Open Access Journals (Sweden)

    Zbynek Heger

    2014-07-01

    Beyond the role of 17β-estradiol (E2) in reproduction and during the menstrual cycle, it has been shown to modulate numerous physiological processes such as cell proliferation, apoptosis, inflammation and ion transport in many tissues. The pathways in which estrogens affect an organism have been partially described, although many questions still exist regarding estrogens' interaction with biomacromolecules. Hence, the present study showed the interaction of four oligonucleotides (17, 20, 24 and/or 38-mer) with E2. The strength of these interactions was evaluated using optical methods, showing that the interaction is influenced by three major factors, namely: oligonucleotide length, E2 concentration and interaction time. In addition, the denaturation phenomenon of DNA revealed that the binding of E2 leads to destabilization of hydrogen bonds between the nitrogenous bases of DNA strands resulting in a decrease of their melting temperatures (Tm). To obtain a more detailed insight into these interactions, MALDI-TOF mass spectrometry was employed. This study revealed that E2 with DNA forms non-covalent physical complexes, observed as mass shifts of app. 270 Da (the Mr of E2) to higher molecular masses. Taken together, our results indicate that E2 can affect biomacromolecules, such as circulating oligonucleotides, which can trigger mutations, leading to various unwanted effects.

  13. Targeting Complex Sentences in Older School Children with Specific Language Impairment: Results from an Early-Phase Treatment Study

    Science.gov (United States)

    Balthazar, Catherine H.; Scott, Cheryl M.

    2018-01-01

    Purpose: This study investigated the effects of a complex sentence treatment at 2 dosage levels on language performance of 30 school-age children ages 10-14 years with specific language impairment. Method: Three types of complex sentences (adverbial, object complement, relative) were taught in sequence in once or twice weekly dosage conditions.…

  14. CR2-mediated activation of the complement alternative pathway results in formation of membrane attack complexes on human B lymphocytes

    DEFF Research Database (Denmark)

    Nielsen, C H; Marquart, H V; Prodinger, W M

    2001-01-01

    Normal human B lymphocytes activate the alternative pathway of complement via complement receptor type 2 (CR2, CD21), which binds hydrolysed C3 (iC3) and thereby promotes the formation of a membrane-bound C3 convertase. We have investigated whether this might lead to the generation of a C5 convertase and consequent formation of membrane attack complexes (MAC). Deposition of C3 fragments and MAC was assessed on human peripheral B lymphocytes in the presence of 30% autologous serum containing 4.4 mM MgCl2/20 mM EGTA, which abrogates the classical pathway of complement without affecting ... Blocking of the CR1 binding site with the monoclonal antibody 3D9 also resulted in a minor reduction in MAC deposition, while FE8 and 3D9, in combination, markedly reduced deposition of both C3 fragments (91 +/- 5%) and C9 (95 +/- 3%). The kinetics of C3-fragment and MAC deposition, as well as the dependence of both ...

  15. Acquisition War-Gaming Technique for Acquiring Future Complex Systems: Modeling and Simulation Results for Cost Plus Incentive Fee Contract

    Directory of Open Access Journals (Sweden)

    Tien M. Nguyen

    2018-03-01

    This paper provides a high-level discussion and propositions of frameworks and models for acquisition strategy of complex systems. In particular, it presents an innovative system engineering approach to model the Department of Defense (DoD) acquisition process and offers several optimization modules including simulation models using game theory and war-gaming concepts. Our frameworks employ the Advanced Game-based Mathematical Framework (AGMF) and Unified Game-based Acquisition Framework (UGAF), and related advanced simulation and mathematical models that include a set of War-Gaming Engines (WGEs) implemented in MATLAB statistical optimization models. WGEs are defined as a set of algorithms characterizing the Program and Technical Baseline (PTB), technology enablers, architectural solutions, contract type, contract parameters and associated incentives, and industry bidding position. As a proof of concept, Aerospace, in collaboration with North Carolina State University (NCSU) and the University of Hawaii (UH), successfully applied and extended the proposed frameworks and decision models to determine the optimum contract parameters and incentives for a Cost Plus Incentive Fee (CPIF) contract. As a result, we can suggest a set of acquisition strategies that ensure the optimization of the PTB.

  16. THE RESULTS OF THE DEFECT PLACES INVESTIGATION OF DONETSK RAILWAY ROAD BED BY GROUND PENETRATING RADAR COMPLEX

    Directory of Open Access Journals (Sweden)

    V. D. Petrenko

    2014-10-01

    Purpose. The detection of defective places in the road bed by ground penetrating radar is examined. Methodology. To achieve this goal, experimental ground penetrating radar inspections of defective places in the road bed of the Donetsk Railway, caused by a complex of various geotechnical and constructive factors, were conducted. Findings. According to the diagnostic results for the road bed on three districts of the Donetsk Railway, the main causes of defects, deformations and damage are violations of process parameters and changes in the physico-mechanical soil properties due to natural and technology-related factors. It was established that the use of “Losa”-series ground penetrating radar on the railways of Ukraine makes it possible to locate ballast pockets in the body of the road bed, to identify damp places in the road bed soil and foundations, to find foreign matter in the road bed soil, and to detect heterogeneities and zones of weakened soil. In addition, the use of ground penetrating radar provides rapid detection of defects, deformation and damage of the railway track, especially in the areas most dangerous for rolling stock, which creates a high level of safety on the main and auxiliary lines of Ukrzaliznytsia. The research also demonstrated the high reliability and performance of autonomous use of ground penetrating radar. Originality. Under modern conditions, the determination of defects, deformations and damage by traditional methods based on engineering-geological investigations is impossible because of their insufficient efficiency. Therefore, the use of a highly effective methodology for rapid instrumental identification of defective places allows a significant reduction of railway track repair times, which is very important for the introduction of high-speed traffic on the Ukrainian Railways. Practical value. On the basis of the

  17. Clinical experience with arthroscopically-assisted repair of peripheral triangular fibrocartilage complex tears in adolescents--technique and results.

    Science.gov (United States)

    Farr, Sebastian; Zechmann, Ulrike; Ganger, Rudolf; Girsch, Werner

    2015-08-01

    The purpose of this study was to report our preliminary results after arthroscopically-assisted repair of peripheral triangular fibrocartilage complex (TFCC) tears in adolescent patients. All children and adolescents who underwent arthroscopically-assisted repair of a Palmer 1B tear were identified and prospectively evaluated after a mean follow-up of 1.3 years. The postoperative assessment included documentation of clinical parameters, pain score (visual analogue scale, VAS), grip strength and completion of validated outcome scores (Modified Mayo Wrist Score, MMWS; Disabilities of the Arm, Shoulder and Hand Inventory, DASH). A total of 12 patients (four males, eight females) with a mean age of 16.3 years at the time of surgery were evaluated. The mean VAS decreased significantly from 7.0 to 1.7 after the procedure. We observed a significant increase of the MMWS after surgery; however, MMWS was still significantly lower at final follow-up when compared to the contralateral side. A mean postoperative DASH score of 16 indicated an excellent outcome after the procedure. DASH Sports and Work Modules showed fair and good overall outcomes in the short-term, respectively. Grip strength averaged 86 % of the contralateral side at final follow-up, with no significant difference being found between both sides. Arthroscopically-assisted repair of peripheral TFCC tears in adolescents provided predictable pain relief and markedly improved functional outcome scores. Concomitant pathologies may have to be addressed at the same time to eventually achieve a satisfactory outcome. Sports participation, however, may be compromised in the short-term and should therefore be resumed six months postoperatively.

  18. Detection of irradiated constituents in processed food with complex lipid matrices. Results of a research project of Baden-Wuerttemberg

    International Nuclear Information System (INIS)

    Hartmann, M.; Ammon, J.; Berg, H.

    1999-01-01

    The detection of irradiated constituents in processed food with a complex lipid matrix can be adversely affected by two conditions: the small amounts of radiation-induced hydrocarbons are diluted by the fat matrix of the food, or substances accompanying the lipids in the matrix make the analysis more difficult. In those cases, sample preparation by means of Florisil SPE (solid-phase extraction) alone is not enough and requires additional, subsequent SPE argentation chromatography of the Florisil eluate, as this latter analytical method permits reliable detection down to very small amounts of irradiated, fat-containing constituents even in a complex lipid matrix. SPE-Florisil/argentation chromatography detects and selects radiation-induced hydrocarbons in a complex lipid matrix, so that detection of irradiation at even very low doses down to 0.025 kGy is possible. The method described is highly sensitive, inexpensive, and easy to apply. It efficiently substitutes such complex preparation or measuring methods as SFE-GC/MS or LC-GC/MS. This highly sensitive testing method for detection of food irradiation can be carried out in almost any analytical laboratory. (orig./CB)

  19. Leigh syndrome associated with a deficiency of the pyruvate dehydrogenase complex: results of treatment with a ketogenic diet

    NARCIS (Netherlands)

    Wijburg, F. A.; Barth, P. G.; Bindoff, L. A.; Birch-Machin, M. A.; van der Blij, J. F.; Ruitenbeek, W.; TURNBULL, D. M.; Schutgens, R. B.

    1992-01-01

    A one-year-old boy suffering from intermittent lactic acidosis, muscular hypotonia, horizontal gaze paralysis and spasticity in both legs had low activity of the pyruvate dehydrogenase complex associated with low amounts of immunoreactive E1 alpha and E1 beta. Leigh syndrome was diagnosed on the

  20. Detection and isolation of cell-derived microparticles are compromised by protein complexes resulting from shared biophysical parameters.

    Science.gov (United States)

    György, Bence; Módos, Károly; Pállinger, Eva; Pálóczi, Krisztina; Pásztói, Mária; Misják, Petra; Deli, Mária A; Sipos, Aron; Szalai, Anikó; Voszka, István; Polgár, Anna; Tóth, Kálmán; Csete, Mária; Nagy, György; Gay, Steffen; Falus, András; Kittel, Agnes; Buzás, Edit I

    2011-01-27

    Numerous diseases, recently reported to associate with elevated microvesicle/microparticle (MP) counts, have also long been known to be characterized by accelerated immune complex (IC) formation. The goal of this study was to investigate the potential overlap between parameters of protein complexes (e.g., ICs or avidin-biotin complexes) and MPs, which might perturb detection and/or isolation of MPs. In this work, after comprehensive characterization of MPs by electron microscopy, atomic force microscopy, dynamic light-scattering analysis, and flow cytometry, for the first time, we draw attention to the fact that protein complexes, especially insoluble ICs, overlap in biophysical properties (size, light scattering, and sedimentation) with MPs. This, in turn, affects MP quantification by flow cytometry and purification by differential centrifugation, especially in diseases in which IC formation is common, including not only autoimmune diseases, but also hematologic disorders, infections, and cancer. These data may necessitate reevaluation of certain published data on patient-derived MPs and contribute to correcting the clinical laboratory assessment of the presence and biologic functions of MPs in health and disease.

  1. Corrective Measures Study Modeling Results for the Southwest Plume - Burial Ground Complex/Mixed Waste Management Facility

    International Nuclear Information System (INIS)

    Harris, M.K.

    1999-01-01

    Groundwater modeling scenarios were performed to support the Corrective Measures Study and Interim Action Plan for the southwest plume of the Burial Ground Complex/Mixed Waste Management Facility. The modeling scenarios were designed to provide data for an economic analysis of alternatives, and subsequently evaluate the effectiveness of the selected remedial technologies for tritium reduction to Fourmile Branch. Modeling scenarios assessed include no action; vertical barriers; pump, treat, and reinject; and vertical recirculation wells.

  2. VLSI systems energy management from a software perspective – A literature survey

    Directory of Open Access Journals (Sweden)

    Prasada Kumari K.S.

    2016-09-01

    The increasing demand for ultra-low power electronic systems has motivated research in device technology and hardware design techniques. Experimental studies have proved that the hardware innovations for power reduction are fully exploited only with the proper design of upper layer software. Also, software power and energy modelling and analysis – the first step towards energy reduction – is complex due to the inter and intra dependencies of processors, operating systems, application software, programming languages and compilers. The subject is too vast; the paper aims to give a consolidated view to researchers in arriving at solutions to power optimization problems from a software perspective. The review emphasizes the fact that software design and implementation is to be viewed from a system energy conservation angle rather than as an isolated process. After covering a global view of end to end software based power reduction techniques for micro sensor nodes to High Performance Computing systems, specific design aspects related to battery powered Embedded computing for mobile and portable systems are addressed in detail. The findings are consolidated into 2 major categories – those related to research directions and those related to existing industry practices. The emerging concept of Green Software with specific focus on mainframe computing is also discussed in brief. Empirical results on power saving are included wherever available. The paper concludes that only with close co-ordination between the hardware architect, software architect and system architect can low energy systems be realized.

  3. An oceanic core complex (OCC) in the Albanian Dinarides? Preliminary paleomagnetic and structural results from the Mirdita Ophiolite (northern Albania)

    Science.gov (United States)

    Maffione, M.; Morris, A.; Anderson, M.

    2010-12-01

    Oceanic core complexes (OCCs) are dome-shaped massifs commonly associated with the inside corners of the intersection of transform faults and slow (and ultra-slow) spreading centres. They represent the uplifted footwalls of large-slip oceanic detachment faults (e.g. Cann et al., 1997; Blackman et al., 1998) and are composed of mantle and lower crustal rocks exhumed during fault displacement (Smith et al., 2006, 2008). Recent paleomagnetic studies of core samples from OCCs in the Atlantic Ocean (Morris et al., 2009; MacLeod et al., in prep) have confirmed that footwall sections undergo substantial rotation around (sub-) horizontal axes. These studies, therefore, support “rolling hinge” models for the evolution of OCCs, whereby oceanic detachment faults initiate at a steep angle at depth and then “roll-over” to their present day low angle orientations during unroofing (Buck, 1988; Wernicke & Axen, 1988; Lavier et al., 1999). However, a fully integrated paleomagnetic and structural analysis of this process is hampered by the one-dimensional sampling provided by ocean drilling of OCC footwalls. Therefore, ancient analogues for OCCs in ophiolites are of great interest, as these potentially provide 3-D exposures of these important structures and hence a more complete understanding of footwall strain and kinematics (providing that emplacement-related phases of deformation can be accounted for). Recently, the relationship between outcropping crustal and upper mantle rocks led Tremblay et al. (2009) to propose that an OCC is preserved within the Mirdita ophiolite of the Albanian Dinarides (northern Albania). This is a slice of Jurassic oceanic lithosphere exposed along a N-S corridor which escaped the main late Cenozoic Alpine deformation (Robertson, 2002, 2004; Dilek et al., 2007). Though in the eastern portion of the Mirdita ophiolite a Penrose-type sequence is present, in the western portion mantle rocks are in tectonic contact with upper crustal lithologies

  4. Treatment of complex internal carotid artery aneurysms using radial artery grafts. Surgical technique, perioperative complications, and results in 17 patients

    International Nuclear Information System (INIS)

    Murai, Yasuo; Teramoto, Akira; Mizunari, Takayuki; Kobayashi, Shiro; Kamiyama, Hiroyasu

    2007-01-01

    Complex giant or large internal carotid artery aneurysms present a surgical challenge because limitations and difficulty are encountered with either clipping or endovascular treatment. Our review of previous reports suggests that no current vascular assessment can accurately predict the occurrence of ischemic complications after internal carotid artery ligation. The present study concerns surgical technique, complications, and clinical outcome of radial artery grafting followed by parent artery trapping or proximal occlusion for management of these difficult lesions. Between September 1997 and October 2005, we performed radial artery grafting followed immediately by parent artery occlusion in 17 patients with giant or large complex intracranial carotid aneurysms (3 men, 14 women; mean follow-up duration, 62 months). All patients underwent postoperative digital subtraction angiography to assess graft patency and aneurysm obliteration. All 17 aneurysms were excluded from the cerebral circulation, with all radial artery grafts patent. Among 4 patients with cranial nerve disturbances, dysfunction was temporary in 5; in the others, oculomotor nerve paresis persisted. No perioperative cerebral infarction occurred. Sensory aphasia reflecting cerebral contusions caused by temporal lobe retraction resolved within 2 months, as did hemiparesis from a postoperative epidural hematoma. With appropriate attention to surgical technique, radial artery grafting followed by acute parent artery occlusion is a safe treatment for complex internal carotid artery aneurysms. Graft patency and aneurysm thrombosis were achieved in all patients. Cranial nerve dysfunction (III, VI) caused by altered blood flow from the internal carotid artery after occlusion was the most common complication and typically was temporary. In our experience with these difficult aneurysms, not only clipping but also reconstruction of the internal carotid artery was required, especially for wide-necked symptomatic

  5. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    OpenAIRE

    Giacomo Indiveri

    2008-01-01

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs suppressing non salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computation...

  6. Developing software to "track and catch" missed follow-up of abnormal test results in a complex sociotechnical environment.

    Science.gov (United States)

    Smith, M; Murphy, D; Laxmisan, A; Sittig, D; Reis, B; Esquivel, A; Singh, H

    2013-01-01

    Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider's prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA's EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility's "test" EHR system, thus demonstrating technical compatibility. To address the factors involved in missed test results, we developed a software prototype to account for
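
    The core "track and catch" logic described above can be pictured as a query over alert and follow-up records: flag abnormal-result alerts with no documented follow-up action within a fixed window. The record layout, field names and 30-day window in the sketch below are assumptions made for illustration only and are not taken from the AWARE prototype.

```python
from datetime import datetime, timedelta

# Hypothetical data model for illustrating overdue-alert detection.
FOLLOW_UP_WINDOW = timedelta(days=30)

alerts = [
    {"id": 1, "patient": "A", "test": "PSA",  "sent": datetime(2013, 1, 2)},
    {"id": 2, "patient": "B", "test": "FOBT", "sent": datetime(2013, 1, 10)},
]
follow_ups = [
    {"alert_id": 1, "action": "urology referral", "date": datetime(2013, 1, 20)},
]

def overdue(alerts, follow_ups, now):
    """Return alerts with no documented follow-up within FOLLOW_UP_WINDOW."""
    acted_on = {f["alert_id"] for f in follow_ups
                if f["date"] - next(a["sent"] for a in alerts
                                    if a["id"] == f["alert_id"]) <= FOLLOW_UP_WINDOW}
    return [a for a in alerts
            if a["id"] not in acted_on and now - a["sent"] > FOLLOW_UP_WINDOW]

for a in overdue(alerts, follow_ups, now=datetime(2013, 3, 1)):
    print(f"alert {a['id']} ({a['test']}, patient {a['patient']}) lacks follow-up")
```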

  7. Long-Term Use of Everolimus in Patients with Tuberous Sclerosis Complex: Final Results from the EXIST-1 Study.

    Directory of Open Access Journals (Sweden)

    David N Franz

Full Text Available Everolimus, a mammalian target of rapamycin (mTOR) inhibitor, has demonstrated efficacy in treating subependymal giant cell astrocytomas (SEGAs) and other manifestations of tuberous sclerosis complex (TSC). However, long-term use of mTOR inhibitors might be necessary. This analysis explored long-term efficacy and safety of everolimus from the conclusion of the EXIST-1 study (NCT00789828). EXIST-1 was an international, prospective, double-blind, placebo-controlled phase 3 trial examining everolimus in patients with new or growing TSC-related SEGA. After a double-blind core phase, all remaining patients could receive everolimus in a long-term, open-label extension. Everolimus was initiated at a dose (4.5 mg/m2/day) titrated to a target blood trough of 5-15 ng/mL. SEGA response rate (primary end point) was defined as the proportion of patients achieving confirmed ≥50% reduction in the sum volume of target SEGA lesions from baseline in the absence of worsening nontarget SEGA lesions, new target SEGA lesions, and new or worsening hydrocephalus. Of 111 patients (median age, 9.5 years) who received ≥1 dose of everolimus (median duration, 47.1 months), 57.7% (95% confidence interval [CI], 47.9-67.0) achieved SEGA response. Of 41 patients with target renal angiomyolipomas at baseline, 30 (73.2%) achieved renal angiomyolipoma response. In 105 patients with ≥1 skin lesion at baseline, skin lesion response rate was 58.1%. Incidence of adverse events (AEs) was comparable with that of previous reports, and occurrence of emergent AEs generally decreased over time. The most common AEs (≥30% incidence) suspected to be treatment-related were stomatitis (43.2%) and mouth ulceration (32.4%). Everolimus use led to sustained reduction in tumor volume, and new responses were observed for SEGA and renal angiomyolipoma from the blinded core phase of the study. These findings support the hypothesis that everolimus can safely reverse multisystem manifestations of TSC in a

  8. Barriers to and facilitators of implementing complex workplace dietary interventions: process evaluation results of a cluster controlled trial.

    Science.gov (United States)

    Fitzgerald, Sarah; Geaney, Fiona; Kelly, Clare; McHugh, Sheena; Perry, Ivan J

    2016-04-21

    Ambiguity exists regarding the effectiveness of workplace dietary interventions. Rigorous process evaluation is vital to understand this uncertainty. This study was conducted as part of the Food Choice at Work trial which assessed the comparative effectiveness of a workplace environmental dietary modification intervention and an educational intervention both alone and in combination versus a control workplace. Effectiveness was assessed in terms of employees' dietary intakes, nutrition knowledge and health status in four large manufacturing workplaces. The study aimed to examine barriers to and facilitators of implementing complex workplace interventions, from the perspectives of key workplace stakeholders and researchers involved in implementation. A detailed process evaluation monitored and evaluated intervention implementation. Interviews were conducted at baseline (27 interviews) and at 7-9 month follow-up (27 interviews) with a purposive sample of workplace stakeholders (managers and participating employees). Topic guides explored factors which facilitated or impeded implementation. Researchers involved in recruitment and data collection participated in focus groups at baseline and at 7-9 month follow-up to explore their perceptions of intervention implementation. Data were imported into NVivo software and analysed using a thematic framework approach. Four major themes emerged; perceived benefits of participation, negotiation and flexibility of the implementation team, viability and intensity of interventions and workplace structures and cultures. The latter three themes either positively or negatively affected implementation, depending on context. The implementation team included managers involved in coordinating and delivering the interventions and the researchers who collected data and delivered intervention elements. Stakeholders' perceptions of the benefits of participating, which facilitated implementation, included managers' desire to improve company

  9. Qualitative and Quantitative Detection of Botulinum Neurotoxins from Complex Matrices: Results of the First International Proficiency Test

    Directory of Open Access Journals (Sweden)

    Sylvia Worbs

    2015-11-01

Full Text Available In the framework of the EU project EQuATox, a first international proficiency test (PT) on the detection and quantification of botulinum neurotoxins (BoNT) was conducted. Sample materials included BoNT serotypes A, B and E spiked into buffer, milk, meat extract and serum. Different methods were applied by the participants combining different principles of detection, identification and quantification. Based on qualitative assays, 95% of all results reported were correct. Successful strategies for BoNT detection were based on a combination of complementary immunological, MS-based and functional methods or on suitable functional in vivo/in vitro approaches (mouse bioassay, hemidiaphragm assay and Endopep-MS assay). Quantification of BoNT/A, BoNT/B and BoNT/E was performed by 48% of participating laboratories. It turned out that precise quantification of BoNT was difficult, resulting in a substantial scatter of quantitative data. This was especially true for results obtained by the mouse bioassay which is currently considered as “gold standard” for BoNT detection. The results clearly demonstrate the urgent need for certified BoNT reference materials and the development of methods replacing animal testing. In this context, the BoNT PT provided the valuable information that both the Endopep-MS assay and the hemidiaphragm assay delivered quantitative results superior to the mouse bioassay.

  10. Improvement of the prediction of fluid pressure from the results of techno-geophysical studies under complex geological conditions

    Energy Technology Data Exchange (ETDEWEB)

    Aleksandrov, B.L.; Esipko, O.A.; Dakhkilgov, T.D.

    1981-12-01

    Results of statistical processing of the data of prediction of pore pressures in the course of well sinking, according to the material of oil field and geophysical investigations in different areas, are presented. Likewise, the errors of pressure prediction, their causes, geological models of series with anomalously high formation pressure, and methods for prediction of pore and formation pressures under different geological conditions are considered. 12 refs.

  11. Impact of Cognitive Abilities and Prior Knowledge on Complex Problem Solving Performance – Empirical Results and a Plea for Ecologically Valid Microworlds

    Directory of Open Access Journals (Sweden)

    Heinz-Martin Süß

    2018-05-01

Full Text Available The original aim of complex problem solving (CPS) research was to bring the cognitive demands of complex real-life problems into the lab in order to investigate problem solving behavior and performance under controlled conditions. Up until now, the validity of psychometric intelligence constructs has been scrutinized with regard to its importance for CPS performance. At the same time, different CPS measurement approaches competing for the title of the best way to assess CPS have been developed. In the first part of the paper, we investigate the predictability of CPS performance on the basis of the Berlin Intelligence Structure Model and Cattell’s investment theory as well as an elaborated knowledge taxonomy. In the first study, 137 students managed a simulated shirt factory (Tailorshop; i.e., a complex real life-oriented system) twice, while in the second study, 152 students completed a forestry scenario (FSYS; i.e., a complex artificial world system). The results indicate that reasoning – specifically numerical reasoning (Studies 1 and 2) and figural reasoning (Study 2) – are the only relevant predictors among the intelligence constructs. We discuss the results with reference to the Brunswik symmetry principle. Path models suggest that reasoning and prior knowledge influence problem solving performance in the Tailorshop scenario mainly indirectly. In addition, different types of system-specific knowledge independently contribute to predicting CPS performance. The results of Study 2 indicate that working memory capacity, assessed as an additional predictor, has no incremental validity beyond reasoning. We conclude that (1) cognitive abilities and prior knowledge are substantial predictors of CPS performance, and (2) in contrast to former and recent interpretations, there is insufficient evidence to consider CPS a unique ability construct. In the second part of the paper, we discuss our results in light of recent CPS research, which predominantly

  12. Impact of Cognitive Abilities and Prior Knowledge on Complex Problem Solving Performance – Empirical Results and a Plea for Ecologically Valid Microworlds

    Science.gov (United States)

    Süß, Heinz-Martin; Kretzschmar, André

    2018-01-01

    The original aim of complex problem solving (CPS) research was to bring the cognitive demands of complex real-life problems into the lab in order to investigate problem solving behavior and performance under controlled conditions. Up until now, the validity of psychometric intelligence constructs has been scrutinized with regard to its importance for CPS performance. At the same time, different CPS measurement approaches competing for the title of the best way to assess CPS have been developed. In the first part of the paper, we investigate the predictability of CPS performance on the basis of the Berlin Intelligence Structure Model and Cattell’s investment theory as well as an elaborated knowledge taxonomy. In the first study, 137 students managed a simulated shirt factory (Tailorshop; i.e., a complex real life-oriented system) twice, while in the second study, 152 students completed a forestry scenario (FSYS; i.e., a complex artificial world system). The results indicate that reasoning – specifically numerical reasoning (Studies 1 and 2) and figural reasoning (Study 2) – are the only relevant predictors among the intelligence constructs. We discuss the results with reference to the Brunswik symmetry principle. Path models suggest that reasoning and prior knowledge influence problem solving performance in the Tailorshop scenario mainly indirectly. In addition, different types of system-specific knowledge independently contribute to predicting CPS performance. The results of Study 2 indicate that working memory capacity, assessed as an additional predictor, has no incremental validity beyond reasoning. We conclude that (1) cognitive abilities and prior knowledge are substantial predictors of CPS performance, and (2) in contrast to former and recent interpretations, there is insufficient evidence to consider CPS a unique ability construct. In the second part of the paper, we discuss our results in light of recent CPS research, which predominantly utilizes the

  13. Implantable neurotechnologies: bidirectional neural interfaces--applications and VLSI circuit implementations.

    Science.gov (United States)

    Greenwald, Elliot; Masters, Matthew R; Thakor, Nitish V

    2016-01-01

    A bidirectional neural interface is a device that transfers information into and out of the nervous system. This class of devices has potential to improve treatment and therapy in several patient populations. Progress in very large-scale integration has advanced the design of complex integrated circuits. System-on-chip devices are capable of recording neural electrical activity and altering natural activity with electrical stimulation. Often, these devices include wireless powering and telemetry functions. This review presents the state of the art of bidirectional circuits as applied to neuroprosthetic, neurorepair, and neurotherapeutic systems.

  14. Virtual planning of complex head and neck reconstruction results in satisfactory match between real outcomes and virtual models.

    Science.gov (United States)

    Hanken, Henning; Schablowsky, Clemens; Smeets, Ralf; Heiland, Max; Sehner, Susanne; Riecke, Björn; Nourwali, Ibrahim; Vorwig, Oliver; Gröbe, Alexander; Al-Dam, Ahmed

    2015-04-01

    The reconstruction of large facial bony defects using microvascular transplants requires extensive surgery to achieve full rehabilitation of form and function. The purpose of this study is to measure the agreement between virtual plans and the actual results of maxillofacial reconstruction. This retrospective cohort study included 30 subjects receiving maxillofacial reconstruction with a preoperative virtual planning. Parameters including defect size, position, angle and volume of the transplanted segments were compared between the virtual plan and the real outcome using paired t test. A total of 63 bone segments were transplanted. The mean differences between the virtual planning and the postoperative situation were for the defect sizes 1.17 mm (95 % confidence interval (CI) (-.21 to 2.56 mm); p = 0.094), for the resection planes 1.69 mm (95 % CI (1.26-2.11); p = 0.033) and 10.16° (95 % CI (8.36°-11.96°); p satisfactory postoperative results are the basis for an optimal functional and aesthetic reconstruction in a single surgical procedure. The technique should be further investigated in larger study populations and should be further improved.

  15. CR2-mediated activation of the complement alternative pathway results in formation of membrane attack complexes on human B lymphocytes

    DEFF Research Database (Denmark)

    Nielsen, C H; Marquart, H V; Prodinger, W M

    2001-01-01

Normal human B lymphocytes activate the alternative pathway of complement via complement receptor type 2 (CR2, CD21), that binds hydrolysed C3 (iC3) and thereby promotes the formation of a membrane-bound C3 convertase. We have investigated whether this might lead to the generation of a C5 ... the alternative pathway. Blockade of the CR2 ligand-binding site with the monoclonal antibody FE8 resulted in 56 ± 13% and 71 ± 9% inhibition of the C3-fragment and MAC deposition, respectively, whereas the monoclonal antibody HB135, directed against an irrelevant CR2 epitope, had no effect. Blockade ... processes on CR2, indicate that MAC formation is a consequence of alternative pathway activation.

  16. The clover technique for the treatment of complex tricuspid valve insufficiency: midterm clinical and echocardiographic results in 66 patients.

    Science.gov (United States)

    Lapenna, Elisabetta; De Bonis, Michele; Verzini, Alessandro; La Canna, Giovanni; Ferrara, David; Calabrese, Maria Chiara; Taramasso, Maurizio; Alfieri, Ottavio

    2010-06-01

This study assesses the results of the 'clover technique' (suturing together the middle point of the free edges of the tricuspid leaflets) for the treatment of tricuspid regurgitation (TR) due to severe prolapse or tethering. From 2001, 66 patients with severe TR due to prolapsing or tethered leaflets underwent 'clover repair'. Annuloplasty was associated in 64 patients (97%). The aetiology of TR was degenerative in 52 cases (79%), post-traumatic in eight (12%) and secondary to dilated cardiomyopathy (DCM) in six (9%). The main mechanism of TR was prolapse/flail of one leaflet in 15 patients (23%), of two leaflets in 31 (47%) and of all three leaflets in 14 (21%). The remaining six patients (9%) presented with severe leaflets' tethering. Four deaths (6%) occurred during hospitalisation and one patient died 3.6 years after surgery. Survival was 91 ± 4.1% at 5 years. Follow-up of the 62 hospital survivors was 100% complete (mean length 3.5 ± 1.6 years, range 13 months-7.1 years). At the last echocardiogram, no or mild TR was detected in 55 (88.7%) patients, moderate (2+/4+) in six (9.6%) and severe (4+/4+) in one patient (1.6%). Mean tricuspid valve area and gradient were 4.3 ± 0.6 cm² and 2.8 ± 1.4 mmHg. In six patients, stress echocardiography was performed and no signs of tricuspid stenosis were detected. At the multivariable analysis, the degree of TR at hospital discharge was identified as the only predictor of TR ≥ 2+ at follow-up. Midterm clinical and echocardiographic results confirm the role of the 'clover technique' in the surgical treatment of TR due to lesions, which are unlikely to be effectively treatable by annuloplasty alone. Copyright 2010 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.

  17. Disentangling the complexity of nitrous oxide cycling in coastal sediments: Results from a novel multi-isotope approach

    Science.gov (United States)

    Wankel, S. D.; Buchwald, C.; Charoenpong, C.; Ziebis, W.

    2014-12-01

    Although marine environments contribute approximately 30% of the global atmospheric nitrous oxide (N2O) flux, coastal systems appear to comprise a disproportionately large majority of the ocean-atmosphere flux. However, there exists a wide range of estimates and future projections of N2O production and emission are confounded by spatial and temporal variability of biological sources and sinks. As N2O is produced as an intermediate in both oxidative and reductive microbial processes and can also be consumed as an electron acceptor, a mechanistic understanding of the regulation of these pathways remains poorly understood. To improve our understanding of N2O dynamics in coastal sediments, we conducted a series of intact flow-through sediment core incubations (Sylt, Germany), while manipulating both the O2 and NO3- concentrations in the overlying water. Steady-state natural abundance isotope fluxes (δ15N and δ18O) of nitrate, nitrite, ammonium and nitrous oxide were monitored throughout the experiments. We also measured both the isotopomer composition (site preference (SP) of the 15N in N2O) as well as the Δ17O composition in experiments conducted with the addition of NO3- with an elevated Δ17O composition (19.5‰), which provide complementary information about the processes producing and consuming N2O. Results indicate positive N2O fluxes (to the water column) across all conditions and sediment types. Decreasing dissolved O2 to 30% saturation resulted in reduced N2O fluxes (5.9 ± 6.5 μmol m2 d-1) compared to controls (17.8 ± 6.5 μmol m-2 d-1), while the addition of 100 μM NO3- yielded higher N2O fluxes (49.0 ± 18.5 μmol m-2 d-1). In all NO3- addition experiments, the Δ17O signal from the NO3- was clearly observed in the N2O efflux implicating denitrification as a large source of N2O. However, Δ17O values were always lower (1.9 to 8.6‰) than the starting NO3- indicating an important role for nitrification-based N2O production and/or O isotope exchange

  18. On the impact of communication complexity in the design of parallel numerical algorithms

    Science.gov (United States)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
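
    The Hockney-style model referred to above is usually written as a startup latency plus message size divided by asymptotic bandwidth. The sketch below evaluates that form for a point-to-point transfer and for a simple collective; the parameter values and the halving/doubling collective are illustrative assumptions, not figures or algorithms taken from the paper.

```python
import math

# Minimal sketch of a Hockney-style communication cost model:
#   t(n) = alpha + n / r_inf   (startup latency plus size over asymptotic bandwidth).
# The parameter values below are illustrative assumptions.

def hockney_time(n_bytes: float, alpha: float = 1e-6, r_inf: float = 10e9) -> float:
    """Predicted transfer time (s) for a single message of n_bytes."""
    return alpha + n_bytes / r_inf

def halving_doubling_allreduce_time(n_bytes: float, p: int,
                                    alpha: float = 1e-6, r_inf: float = 10e9) -> float:
    """Rough cost of a recursive halving/doubling all-reduce on p processors:
    reduce-scatter plus allgather, each moving n/2, n/4, ..., n/p bytes per step."""
    steps = int(math.log2(p))
    traffic = 2 * sum(n_bytes / 2**k for k in range(1, steps + 1))
    return 2 * steps * alpha + traffic / r_inf

if __name__ == "__main__":
    for n in (1e3, 1e6, 1e9):
        print(f"point-to-point {n:>10.0f} B: {hockney_time(n)*1e6:9.1f} us, "
              f"all-reduce on 64 ranks: {halving_doubling_allreduce_time(n, 64)*1e6:9.1f} us")
```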

  19. Ruthenium-bipyridine complexes bearing fullerene or carbon nanotubes: synthesis and impact of different carbon-based ligands on the resulting products.

    Science.gov (United States)

    Wu, Zhen-yi; Huang, Rong-bin; Xie, Su-yuan; Zheng, Lan-sun

    2011-09-07

    This paper discusses the synthesis of two carbon-based pyridine ligands of fullerene pyrrolidine pyridine (C(60)-py) and multi-walled carbon nanotube pyrrolidine pyridine (MWCNT-py) via 1,3-dipolar cycloaddition. The two complexes, C(60)-Ru and MWCNT-Ru, were synthesized by ligand substitution in the presence of NH(4)PF(6), and Ru(II)(bpy)(2)Cl(2) was used as a reaction precursor. Both complexes were characterized by mass spectroscopy (MS), elemental analysis, nuclear magnetic resonance (NMR) spectroscopy, infrared spectroscopy (IR), ultraviolet/visible spectroscopy (UV-VIS) spectrometry, Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA), and cyclic voltammetry (CV). The results showed that the substitution way of C(60)-py is different from that of MWCNT-py. The C(60)-py and a NH(3) replaced a Cl(-) and a bipyridine in Ru(II)(bpy)(2)Cl(2) to produce a five-coordinate complex of [Ru(bpy)(NH(3))(C(60)-py)Cl]PF(6), whereas MWCNT-py replaced a Cl(-) to generate a six-coordinate complex of [Ru(bpy)(2)(MWCNT-py)Cl]PF(6). The cyclic voltammetry study showed that the electron-withdrawing ability was different for C(60) and MWCNT. The C(60) showed a relatively stronger electron-withdrawing effect with respect to MWCNT. This journal is © The Royal Society of Chemistry 2011

  20. Design and implementation of efficient low complexity biomedical artifact canceller for nano devices

    Directory of Open Access Journals (Sweden)

    Md Zia Ur RAHMAN

    2016-07-01

Full Text Available With the rapid development of communication technology, remote health care monitoring has become an intense research area. In remote health care monitoring, the primary aim is to provide the doctor with high-resolution biomedical data. In order to cancel various artifacts in the clinical environment, we propose several efficient adaptive noise cancellation techniques in this paper. To obtain low computational complexity, we combine clipping of the data or the error with the Least Mean Square (LMS) algorithm. This results in the sign regressor LMS (SRLMS), sign LMS (SLMS) and sign-sign LMS (SSLMS) algorithms. Using these algorithms, we design very-large-scale integration (VLSI) architectures of various Biomedical Noise Cancellers (BNCs). In addition, the filtering capabilities of the proposed implementations are measured using real biomedical signals. Among the various BNCs tested, the SRLMS-based BNC is found to be better with reference to convergence speed, filtering capability and computational complexity. The main advantage of this technique is that it needs only one multiplication to compute the next weight; in this sense, the computation of the SRLMS-based BNC is independent of the filter length. The average signal-to-noise ratios achieved in the noise cancellation experiments are 7.1059 dB, 7.1776 dB, 6.2795 dB and 5.8847 dB for BNCs based on the LMS, SRLMS, SLMS and SSLMS algorithms, respectively. Based on the filtering characteristics, convergence and computational complexity, the proposed SRLMS-based BNC architecture is well suited for nanotechnology applications.
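
    The record names the sign-based LMS variants but does not spell out their update equations. The following is a minimal software sketch of the four weight updates on synthetic data; the filter length, step size and test signals are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def adaptive_cancel(x, d, n_taps=16, mu=0.01, variant="srlms"):
    """Adaptive noise canceller: x is the noise reference, d the corrupted signal.
    Returns the error e, which serves as the cleaned signal estimate."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]          # current regressor (tap-delay line)
        y = w @ u                          # filter output = noise estimate
        e[n] = d[n] - y                    # error = signal estimate
        if variant == "lms":
            w += mu * e[n] * u             # standard LMS
        elif variant == "srlms":
            w += mu * e[n] * np.sign(u)    # sign-regressor: clip the data
        elif variant == "slms":
            w += mu * np.sign(e[n]) * u    # sign LMS: clip the error
        elif variant == "sslms":
            w += mu * np.sign(e[n]) * np.sign(u)  # sign-sign LMS: clip both
    return e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(4000)
    clean = np.sin(2 * np.pi * t / 200)            # stand-in for a biomedical signal
    noise_ref = rng.normal(size=t.size)            # reference noise input
    d = clean + np.convolve(noise_ref, [0.6, 0.3, 0.1], mode="same")
    for v in ("lms", "srlms", "slms", "sslms"):
        e = adaptive_cancel(noise_ref, d, variant=v)
        print(f"{v:6s} residual MSE vs clean: {np.mean((e[500:] - clean[500:])**2):.4f}")
```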

  1. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
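
    As a software reference for the block-based L1-norm step that the circuit accelerates, here is a minimal sketch of cell histogramming and L1 block normalization in a HOG pipeline; the cell size, block size and bin count are common defaults assumed for illustration, not parameters reported by the paper.

```python
import numpy as np

def cell_histograms(mag, ang, cell=8, bins=9):
    """Accumulate gradient magnitude into orientation bins per cell (unsigned, 0-180 deg)."""
    h, w = mag.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    bin_idx = ((ang % 180.0) / 180.0 * bins).astype(int) % bins
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()
    return hist

def l1_block_normalize(hist, block=2, eps=1e-6):
    """L1-normalize overlapping block x block groups of cell histograms
    (the normalization step an L1-norm circuit would accelerate in hardware)."""
    ch, cw, bins = hist.shape
    out = []
    for i in range(ch - block + 1):
        for j in range(cw - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            out.append(v / (np.abs(v).sum() + eps))   # L1 norm: sum of absolute values
    return np.concatenate(out)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx))
    desc = l1_block_normalize(cell_histograms(mag, ang))
    print("descriptor length:", desc.size)
```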

  2. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems.

    Science.gov (United States)

    Indiveri, Giacomo

    2008-09-03

Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs suppressing non salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals, and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits ranging from single-chip vision sensors for selecting and tracking the position of salient features, to multi-chip systems that implement saliency-map based models of selective attention.

  3. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    Directory of Open Access Journals (Sweden)

    Giacomo Indiveri

    2008-09-01

Full Text Available Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs suppressing non salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals, and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits ranging from single-chip vision sensors for selecting and tracking the position of salient features, to multi-chip systems that implement saliency-map based models of selective attention.
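
    To make the WTA primitive described above concrete, the following is a minimal rate-based software sketch of a winner-take-all with global inhibition; it is a behavioural illustration with invented constants, not the analog VLSI circuit from the paper.

```python
import numpy as np

def soft_wta(inputs, steps=200, dt=0.05, self_exc=1.2, inhib=1.0, tau=1.0):
    """Rate-based winner-take-all: each unit receives its external input,
    recurrent self-excitation, and inhibition proportional to the summed activity.
    Behavioural sketch only; constants are illustrative."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        total = x.sum()                                # global inhibitory feedback
        drive = inputs + self_exc * x - inhib * total
        x += dt / tau * (-x + np.maximum(drive, 0.0))  # rectified linear dynamics
    return x

if __name__ == "__main__":
    stimuli = np.array([0.9, 1.0, 0.3, 0.8])           # unit 1 carries the strongest input
    act = soft_wta(stimuli)
    print("steady-state activity:", np.round(act, 3))
    print("winner:", int(np.argmax(act)))
```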

  4. Recent Results with a segmented Hybrid Photon Detector for a novel parallax-free PET Scanner for Brain Imaging

    CERN Document Server

    Braem, André; Joram, Christian; Mathot, Serge; Séguinot, Jacques; Weilhammer, Peter; Ciocia, F; De Leo, R; Nappi, E; Vilardi, I; Argentieri, A; Corsi, F; Dragone, A; Pasqua, D

    2007-01-01

    We describe the design, fabrication and test results of a segmented Hybrid Photon Detector with integrated auto-triggering front-end electronics. Both the photodetector and its VLSI readout electronics are custom designed and have been tailored to the requirements of a recently proposed novel geometrical concept of a Positron Emission Tomograph. Emphasis is laid on the PET specific features of the device. The detector has been fabricated in the photocathode facility at CERN.

  5. CARS spectroscopy of the NaH2 collision complex: The nature of the Na(3 2P)H2 exciplex - ab initio calculations and experimental results

    International Nuclear Information System (INIS)

    Vivie-Riedle, R. de; Hering, P.; Kompa, K.L.

    1990-01-01

CARS has been used to analyze the rovibronic state distribution of H2 after collision with Na(3 2P). New lines, which do not correspond to H2 lines, are observed in the CARS spectrum. The experiments point to the formation of a complex of Na(3 2P)H2 in A 2B2 symmetry. Ab initio calculations of the A 2B2 potential were performed. On this surface the vibrational spectrum of the exciplex is evaluated. The observed lines can be attributed to vibrational transitions in the complex, in which combinational modes are involved. The connection of experimental and theoretical results indicates that a collisionally stabilized exciplex molecule is formed during the quenching process. (orig.)

  6. Ligation of the intersphincteric fistula tract (LIFT): a minimally invasive procedure for complex anal fistula: two-year results of a prospective multicentric study.

    Science.gov (United States)

    Sileri, Pierpaolo; Giarratano, Gabriella; Franceschilli, Luana; Limura, Elsa; Perrone, Federico; Stazi, Alessandro; Toscana, Claudio; Gaspari, Achille Lucio

    2014-10-01

    The surgical management of anal fistulas is still a matter of discussion and no clear recommendations exist. The present study analyses the results of the ligation of the intersphincteric fistula tract (LIFT) technique in treating complex anal fistulas, in particular healing, fecal continence, and recurrence. Between October 2010 and February 2012, a total of 26 consecutive patients underwent LIFT. All patients had a primary complex anal fistula and preoperatively all underwent clinical examination, proctoscopy, transanal ultrasonography/magnetic resonance imaging, and were treated with the LIFT procedure. For the purpose of this study, fistulas were classified as complex if any of the following conditions were present: tract crossing more than 30% of the external sphincter, anterior fistula in a woman, recurrent fistula, or preexisting incontinence. Patient's postoperative complications, healing time, recurrence rate, and postoperative continence were recorded during follow-up. The minimum follow-up was 16 months. Five patients required delayed LIFT after previous seton. There were no surgical complications. Primary healing was achieved in 19 patients (73%). Seven patients (27%) had recurrence presenting between 4 and 8 weeks postoperatively and required further surgical treatment. Two of them (29%) had previous insertion of a seton. No patients reported any incontinence postoperatively and we did not observe postoperative continence worsening. In our experience, LIFT appears easy to perform, is safe with no surgical complication, has no risk of incontinence, and has a low recurrence rate. These results suggest that LIFT as a minimally invasive technique should be routinely considered for patients affected by complex anal fistula. © The Author(s) 2013.

7. Development of a VLSI integrated circuit used for time measurement and selective readout in the front-end electronics of the DIRC for the BaBar experiment at SLAC; Developpement d'un circuit integre VLSI assurant mesure de temps et lecture selective dans l'electronique frontale du compteur DIRC de l'experience babar a slac

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1999-07-01

This thesis deals with the design, development and testing of a VLSI integrated circuit providing selective readout and time measurement for 16 channels. This circuit has been developed for a particle physics experiment, BABAR, taking place at SLAC (Stanford Linear Accelerator Center). The first part describes the physics motivation of the experiment, the electronics architecture and the place of the developed circuit in the research program. The second part presents the technical design of the circuit, the prototypes leading to the final design and the validation tests. (A.L.B.)

  8. Reconstructive surgery for complex midface trauma using titanium miniplates: Le Fort I fracture of the maxilla, zygomatico-maxillary complex fracture and nasomaxillary complex fracture, resulting from a motor vehicle accident.

    Science.gov (United States)

    Nicholoff, T J; Del Castillo, C B; Velmonte, M X

    Maxillofacial injuries resulting from trauma can be a challenge to the Maxillo-Facial Surgeon. Frequent causes of these injuries are attributed to automobile accidents, physical altercations, gunshot wounds, home accidents, athletic injuries, work injuries and other injuries. Motor vehicle accidents tend to be the primary cause of most midface fractures and lacerations due to the face hitting the dashboard, windshield and steering wheel or the back of the front seat for passengers in the rear. Seatbelts have been shown to drastically reduce the incidence and severity of these injuries. In the United States seatbelt laws have been enacted in several states thus markedly impacting on the reduction of such trauma. In the Philippines rare is the individual who wears seat belts. Metro city traffic, however, has played a major role in reducing daytime MVA related trauma, as usually there is insufficient speed in traffic areas to cause severe impact damage, the same however cannot be said for night driving, or for driving outside of the city proper where it is not uncommon for drivers to zip into the lane of on-coming traffic in order to overtake the car in front ... often at high speeds. Thus, the potential for severe maxillofacial injuries and other trauma related injuries increases in these circumstances. It is however unfortunate that outside of Metro Manila or other major cities there is no ready access to trauma or tertiary care centers, thus these injuries can be catastrophic if not addressed adequately. With the exception of Le Fort II and III craniofacial fractures, most maxillofacial injuries are not life threatening by themselves, and therefore treatment can be delayed until more serious cerebral or visceral, potentially life threatening injuries are addressed first. Our patient was involved in an MVA in Zambales, seen and stabilized in a provincial primary care center initially, then referred to a provincial secondary care center for further stabilization

  9. Communication complexity and information complexity

    Science.gov (United States)

    Pankratov, Denis

    Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity. In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form theta( n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result does not only strengthen the lower bound on communication complexity of disjointness by making it more exact, but it also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems. In the second contribution, we use self-reduction methods to prove strong lower bounds on the information
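
    The record refers repeatedly to the information complexity of protocols and functions without stating a definition. For reference, the standard internal information cost definitions are given below; this is background added here, not text quoted from the thesis.

```latex
% Internal information cost of a protocol \pi with transcript \Pi on inputs (X,Y) ~ \mu,
% and the resulting information complexity of a function f (standard definitions).
\[
  \mathrm{IC}_\mu(\pi) \;=\; I(\Pi ; X \mid Y) \;+\; I(\Pi ; Y \mid X),
  \qquad
  \mathrm{IC}_\mu(f,\varepsilon) \;=\;
  \inf_{\pi \text{ computing } f \text{ with error} \le \varepsilon} \mathrm{IC}_\mu(\pi).
\]
```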

  10. Genetic Spot Optimization for Peak Power Estimation in Large VLSI Circuits

    Directory of Open Access Journals (Sweden)

    Michael S. Hsiao

    2002-01-01

Full Text Available Estimating peak power involves optimization of the circuit's switching function. The switching of a given gate is not only dependent on the output capacitance of the node, but also heavily dependent on the gate delays in the circuit, since multiple switching events can result from uneven delay paths in the circuit. Genetic spot expansion and optimization are proposed in this paper to estimate tight peak power bounds for large sequential circuits. The optimization spot shifts and expands dynamically based on the maximum power potential (MPP) of the nodes under optimization. Four genetic spot optimization heuristics are studied for sequential circuits. Experimental results showed that, on average, 70.7% tighter peak power bounds were achieved for large sequential benchmark circuits in short execution times.
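
    As an illustration of the kind of search involved, the sketch below runs a tiny genetic optimization of consecutive input-vector pairs to maximize zero-delay node toggles in an invented five-gate netlist. The netlist, the toggle-count fitness proxy and the GA settings are assumptions for illustration, not the paper's spot-expansion heuristics or benchmarks.

```python
import random

# Toy combinational netlist: each gate is (kind, fan-in node names), in topological order.
NETLIST = {
    "n1": ("AND", ["a", "b"]), "n2": ("OR", ["b", "c"]),
    "n3": ("XOR", ["n1", "n2"]), "n4": ("AND", ["n3", "d"]),
    "out": ("OR", ["n3", "n4"]),
}
INPUTS = ["a", "b", "c", "d"]

def evaluate(vec):
    """Zero-delay logic evaluation of the netlist for one input vector."""
    val = dict(zip(INPUTS, vec))
    for node, (kind, fanin) in NETLIST.items():
        x = [val[f] for f in fanin]
        val[node] = {"AND": all(x), "OR": any(x), "XOR": sum(x) % 2 == 1}[kind]
    return val

def toggles(v1, v2):
    """Fitness: number of internal nodes that switch between two consecutive vectors
    (a crude zero-delay proxy for switching power)."""
    a, b = evaluate(v1), evaluate(v2)
    return sum(a[n] != b[n] for n in NETLIST)

def ga_peak_toggles(pop_size=30, generations=40, p_mut=0.1):
    rng = random.Random(0)
    rand_pair = lambda: ([rng.random() < 0.5 for _ in INPUTS],
                         [rng.random() < 0.5 for _ in INPUTS])
    pop = [rand_pair() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: toggles(*p), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            m, f = rng.sample(survivors, 2)
            child = ([rng.choice(bits) for bits in zip(m[0], f[0])],   # uniform crossover
                     [rng.choice(bits) for bits in zip(m[1], f[1])])
            child = tuple([b ^ (rng.random() < p_mut) for b in half] for half in child)
            children.append(child)
        pop = survivors + children
    best = max(pop, key=lambda p: toggles(*p))
    return best, toggles(*best)

if __name__ == "__main__":
    (v1, v2), t = ga_peak_toggles()
    print("best vector pair:", v1, v2, "-> node toggles:", t)
```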

  11. A Program of Research on Microfabrication Techniques for VLSI Magnetic Devices.

    Science.gov (United States)

    1981-10-01

directions. The result is the perpendicular lattice expansion of a nonconstrained film. The mismatch is calculated using the following equation ... electrons from Mo to the 3d band of Co, thereby lowering its moment (Hasegawa, et al., 1975). The exchange constant, A, which depends on the individual ... encouraging comments. J. J. Fernandez de Castro also thanks the Mexican Government (CONSEJO NACIONAL DE CIENCIA Y TECNOLOGIA) for the funds that it

  12. VLSI implementation of a bio-inspired olfactory spiking neural network.

    Science.gov (United States)

    Hsieh, Hung-Yi; Tang, Kea-Tiong

    2012-07-01

This paper presents a low-power, neuromorphic spiking neural network (SNN) chip that can be integrated in an electronic nose system to classify odor. The proposed SNN takes advantage of sub-threshold oscillation and onset-latency representation to reduce power consumption and chip area, providing a more distinct output for each odor input. The synaptic weights between the mitral and cortical cells are modified according to a spike-timing-dependent plasticity (STDP) learning rule. During the experiment, the odor data are sampled by a commercial electronic nose (Cyranose 320) and are normalized before training and testing to ensure that the classification result is only caused by learning. Measurement results show that the circuit consumed an average power of only approximately 3.6 μW with a 1-V power supply to discriminate odor data. The SNN has either a high or low output response for a given input odor, making it easy to determine whether the circuit has made the correct decision. Measurement results of the SNN chip are compared with some well-known algorithms (support vector machine and the K-nearest neighbor program) to demonstrate the classification performance of the proposed SNN chip. The mean testing accuracy is 87.59% for the data used in this paper.
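
    The STDP rule itself is not given in the record; the sketch below shows the canonical pair-based exponential STDP weight change, with illustrative time constants and amplitudes rather than the chip's actual parameters.

```python
import numpy as np

def stdp_weight_change(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate when a pre-spike precedes a post-spike (dt > 0),
    depress when the post-spike comes first (dt < 0). Times are in milliseconds.
    Constants here are illustrative, not taken from the chip."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau_plus)     # causal pair -> potentiation
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau_minus)    # anti-causal pair -> depression
    return dw

if __name__ == "__main__":
    pre = [10.0, 50.0, 90.0]
    post_causal = [15.0, 55.0, 95.0]      # post fires 5 ms after each pre spike
    post_anticausal = [5.0, 45.0, 85.0]   # post fires 5 ms before each pre spike
    print("causal pairing, dw  = %+.4f" % stdp_weight_change(pre, post_causal))
    print("anti-causal pairing = %+.4f" % stdp_weight_change(pre, post_anticausal))
```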

  13. The VLSI design of the sub-band filterbank in MP3 decoding

    Science.gov (United States)

    Liu, Jia-Xin; Luo, Li

    2018-03-01

The sub-band filterbank is one of the most important modules in MP3 decoding and accounts for the largest share of the computation. In order to save CPU resources and integrate the sub-band filterbank into an MP3 IP core, the hardware circuit of the sub-band filterbank module is designed in this paper. A fast algorithm suited to hardware implementation is proposed and realized on an FPGA development board. The results show that the sub-band filterbank functions correctly while using very few registers, and that the amount of computation and the ROM resources are greatly reduced.

  14. A VLSI Implementation of Four-Phase Lift Controller Using Verilog HDL

    Science.gov (United States)

    Kumar, Manish; Singh, Priyanka; Singh, Shesha

    2017-08-01

With the advent of a staggering range of new technologies providing ease of mobility and transportation, elevators have become an essential component of all high-rise buildings. An elevator is a type of vertical transportation that moves people between the floors of a high-rise building. A four-phase lift controller modeled in Verilog HDL using a finite state machine (FSM) is presented in this paper. Verilog HDL enables automated analysis and simulation of the lift controller circuit. The design is based on synchronous inputs and operates at a fixed frequency. The lift motion is controlled by accepting the destination floor level as input and generating control signals as output. In the proposed design, Verilog RTL code is developed and verified. Xilinx Project Navigator was used as the coding platform, and results were simulated using the ModelSim 5.4a simulator. This paper discusses the overall evolution of the design as well as the simulation results.
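
    The Verilog source is not reproduced in the record; as a behavioural illustration of such a controller, the following Python sketch models a comparable four-state lift FSM. The state names, outputs and floor count are my assumptions, not the paper's design.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    MOVING_UP = auto()
    MOVING_DOWN = auto()
    DOOR_OPEN = auto()

class LiftController:
    """Behavioural sketch of a four-state lift FSM: compare current and requested
    floors, drive the motor up or down, open the door on arrival, return to idle."""
    def __init__(self, floors=8):
        self.floors = floors
        self.state = State.IDLE
        self.current = 0
        self.target = 0

    def request(self, floor):
        if 0 <= floor < self.floors:
            self.target = floor

    def clock(self):
        """One synchronous step; returns the control outputs (motor_up, motor_down, door)."""
        if self.state == State.IDLE:
            if self.target > self.current:
                self.state = State.MOVING_UP
            elif self.target < self.current:
                self.state = State.MOVING_DOWN
        elif self.state == State.MOVING_UP:
            self.current += 1
            if self.current == self.target:
                self.state = State.DOOR_OPEN
        elif self.state == State.MOVING_DOWN:
            self.current -= 1
            if self.current == self.target:
                self.state = State.DOOR_OPEN
        elif self.state == State.DOOR_OPEN:
            self.state = State.IDLE
        return (self.state == State.MOVING_UP,
                self.state == State.MOVING_DOWN,
                self.state == State.DOOR_OPEN)

if __name__ == "__main__":
    lift = LiftController()
    lift.request(3)
    for cycle in range(6):
        up, down, door = lift.clock()
        print(f"cycle {cycle}: floor={lift.current} state={lift.state.name} "
              f"up={up} down={down} door={door}")
```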

  15. Relation between film character and wafer alignment: critical alignment issues on HV device for VLSI manufacturing

    Science.gov (United States)

    Lo, Yi-Chuan; Lee, Chih-Hsiung; Lin, Hsun-Peng; Peng, Chiou-Shian

    1998-06-01

Several splits of wafer alignment target topography conditions were applied to improve epitaxy film alignment. The alignment evaluation covers splits of former-layer pad oxide thickness (250-500 angstrom), drive oxide thickness (6000-10000 angstrom), nitride film thickness (600-1500 angstrom) and initial oxide etch (fully wet etch, fully dry etch, and dry plus wet etch). In addition, various epitaxy deposition recipes, such as epitaxy source (SiHCl2 or SiCHCl3) and growth rate (1.3 to approximately 2.0 micrometer/min), are used to optimize the process window for alignment. The reflectance signals and cross-section photographs of the alignment targets during the NIKON stepper alignment process are examined. Experimental results show that the epitaxy recipe plays an important role in wafer alignment. A low growth rate with good conformity leads to alignment targets that avoid washout, pattern shift and distortion. All the results (signal monitoring and film character), combined with NIKON's stepper standard laser scanning alignment system, are discussed in this paper.

  16. VLSI implementation of RSA encryption system using ancient Indian Vedic mathematics

    Science.gov (United States)

    Thapliyal, Himanshu; Srinivas, M. B.

    2005-06-01

    This paper proposes the hardware implementation of RSA encryption/decryption algorithm using the algorithms of Ancient Indian Vedic Mathematics that have been modified to improve performance. The recently proposed hierarchical overlay multiplier architecture is used in the RSA circuitry for multiplication operation. The most significant aspect of the paper is the development of a division architecture based on Straight Division algorithm of Ancient Indian Vedic Mathematics and embedding it in RSA encryption/decryption circuitry for improved efficiency. The coding is done in Verilog HDL and the FPGA synthesis is done using Xilinx Spartan library. The results show that RSA circuitry implemented using Vedic division and multiplication is efficient in terms of area/speed compared to its implementation using conventional multiplication and division architectures.
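
    The record's focus is the multiplier and divider architecture; the surrounding RSA arithmetic is standard modular exponentiation, sketched below with toy textbook parameters. Nothing Vedic-specific is modeled here; this only illustrates the operation the datapath has to perform.

```python
def modexp(base, exp, mod):
    """Right-to-left square-and-multiply modular exponentiation,
    the core operation of RSA encryption and decryption."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                 # multiply step when the current exponent bit is 1
            result = (result * base) % mod
        base = (base * base) % mod  # square step every iteration
        exp >>= 1
    return result

if __name__ == "__main__":
    # Tiny textbook RSA example (toy primes, for illustration only).
    p, q = 61, 53
    n = p * q                       # modulus, 3233
    phi = (p - 1) * (q - 1)         # 3120
    e = 17                          # public exponent, coprime with phi
    d = pow(e, -1, phi)             # private exponent (2753), Python 3.8+ modular inverse
    message = 65
    cipher = modexp(message, e, n)
    plain = modexp(cipher, d, n)
    print(f"n={n}, e={e}, d={d}, cipher={cipher}, decrypted={plain}")
    assert plain == message
```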

  17. A silicon pixel detector with routing for external VLSI read-out

    International Nuclear Information System (INIS)

    Thomas, S.L.; Seller, P.

    1988-07-01

    A silicon pixel detector with an array of 32 by 16 hexagonal pixels has been designed and is being built on high resistivity silicon. The detector elements are reverse biased diodes consisting of p-implants in an n-type substrate and are fully depleted from the front to the back of the wafer. They are intended to measure high energy ionising particles traversing the detector. The detailed design of the pixels, their layout and method of read-out are discussed. A number of test structures have been incorporated onto the wafer to enable measurements to be made on individual pixels together with a variety of active devices. The results will give a better understanding of the operation of the pixel array, and will allow testing of computer simulations of more elaborate structures for the future. (author)

  18. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Science.gov (United States)

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo

    2015-10-01

    Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
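
    A compact software analogue of the attractor-based associative memory described above is a Hopfield network with Hebbian weights; the sketch below is the standard textbook model, not the spiking neuromorphic implementation on the chip.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule with zeroed diagonal; patterns are +/-1 vectors."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0.0)
    return w / n

def relax(w, state, sweeps=10):
    """Asynchronous updates until the state settles into an attractor."""
    state = state.copy()
    rng = np.random.default_rng(0)
    for _ in range(sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    prototypes = np.where(rng.random((3, 64)) < 0.5, -1, 1)   # three stored "stimuli"
    w = train_hopfield(prototypes)
    probe = prototypes[1].copy()
    flip = rng.choice(64, size=10, replace=False)             # corrupt 10 of 64 bits
    probe[flip] *= -1
    recalled = relax(w, probe)
    overlaps = prototypes @ recalled / 64
    print("overlap with each stored pattern:", np.round(overlaps, 2))
```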

  19. Formation of shallow junctions for VLSI by ion implantation and rapid thermal annealing

    International Nuclear Information System (INIS)

    Oeztuerk, M.C.

    1988-01-01

    In this work, several techniques were studied to form shallow junctions in silicon by ion implantation. These include ion implantation through thin layers of silicon dioxide and ion implantation through a thick polycrystalline silicon layer. These techniques can be used to reduce the junction depth. Their main disadvantage is dopant loss in the surface layer. As an alternative, preamorphization of the Si substrate prior to boron implantation to reduce boron channeling was investigated. The disadvantage of preamorphization is the radiation damage introduced into the Si substrate using the implant. Preamorphization by silicon self-implantation has been studied before. The goal of this study was to test Ge as an alternative amorphizing agent. It was found that good-quality p + -n junctions can be formed by both boron and BF 2 ion implantation into Ge-preamorphized Si provided that the preamorphization conditions are optimized. If the amorphous crystalline interface is sufficiently close to the surface, it is possible to completely remove the end-of-range damage. If these defects are not removed and are left in the depletion region, they can result in poor-quality, leaky junctions

  20. Forest, Trees, Dynamics: Results from a novel Wisconsin Card Sorting Test variant Protocol for Studying Global-Local Attention and Complex Cognitive Processes

    Directory of Open Access Journals (Sweden)

    Benjamin eCowley

    2016-02-01

Full Text Available Background: Recognition of objects and their context relies heavily on the integrated functioning of global and local visual processing. In a realistic setting such as work, this processing becomes a sustained activity, implying a consequent interaction with executive functions. Motivation: There have been many studies of either global-local attention or executive functions; however it is relatively novel to combine these processes to study a more ecological form of attention. We aim to explore the phenomenon of global-local processing during a task requiring sustained attention and working memory. Methods: We develop and test a novel protocol for global-local dissociation, with task structure including phases of divided ('rule search') and selective ('rule found') attention, based on the Wisconsin Card Sorting Task. We test it in a laboratory study with 25 participants, and report on behaviour measures (physiological data was also gathered, but not reported here). We develop novel stimuli with more naturalistic levels of information and noise, based primarily on face photographs, with consequently more ecological validity. Results: We report behavioural results indicating that sustained difficulty when participants test their hypotheses impacts matching-task performance, and diminishes the global precedence effect. Results also show a dissociation between subjectively experienced difficulty and objective dimension of performance, and establish the internal validity of the protocol. Contribution: We contribute an advance in the state of the art for testing global-local attention processes in concert with complex cognition. With three results we establish a connection between global-local dissociation and aspects of complex cognition. Our protocol also improves ecological validity and opens options for testing additional interactions in future work.

  1. Transformational VLSI Design

    DEFF Research Database (Denmark)

    Rasmussen, Ole Steen

    constructed. It contains a semantical embedding of Ruby in Zermelo-Fraenkel set theory (ZF) implemented in the Isabelle theorem prover. A small subset of Ruby, called Pure Ruby, is embedded as a conservative extension of ZF and characterised by an inductive definition. Many useful structures used...

  2. Princeton VLSI Project.

    Science.gov (United States)

    1983-01-01

for otherwise, since sc = xs2, we would ELIE system. This algorithm also applies to SL) systems have been able to compute zec without looking at block...Prof. Peter R. Cappello of the Computer Science Department, University of California, Santa Barbara, California. Some of the work...multiple processors will not be as simple as the MMM ones. Acknowledgments. Several useful ideas and suggestions were made by Jim Gray, Peter Honeyman

  3. A modeling study of contaminant transport resulting from flooding of Pit 9 at the Radioactive Waste Management Complex, Idaho National Engineering Laboratory

    International Nuclear Information System (INIS)

    Magnuson, S.O.; Sondrup, A.J.

    1992-09-01

    A simulation study was conducted to determine if dissolved-phase transport due to flooding is a viable mechanism for explaining the presence of radionuclides in sedimentary interbeds below the Radioactive Waste Management Complex. In particular, the study focused on 241 Am migration due to flooding of Pit 9 in 1969. A kinetically-controlled source term model was used to estimate the mass of 241 Am that leached as a function of a variable surface infiltration rate. This mass release rate was then used in a numerical simulation of unsaturated flow and transport to estimate the advance due to flooding of the 241 Am front down towards the 110 ft interbed. The simulation included the effect of fractures by superimposing them onto elements that represented the basalt matrix. For the base case, hydraulic and transport parameters were assigned using the best available data. The advance of the 241 Am front due to flooding for this case was minimal, on the order of a few meters. This was due to the strong tendency for 241 Am to sorb onto both basalts and sediments. In addition to the base case simulation, a parametric sensitivity study was conducted which tested the effect of sorption in the fractures, in the kinetic source term, and in the basalt matrix. Of these, the only case which resulted in significant transport was when there was no sorption in the basalt matrix. The indication being that other processes such as transport by radiocolloids or organic complexation may have contributed. However, caution is advised in interpreting these results due to approximations in the numerical method that was used incorporate fractures into the simulation. The approximations are a result of fracture apertures being significantly smaller than the elements over which they are superimposed. The sensitivity of the 241 Am advance to the assumed hydraulic conductivity for the fractures was also tested

  4. Retinal nerve fiber layer and ganglion cell complex thickness assessment in patients with Alzheimer disease and mild cognitive impairment. Preliminary results

    Directory of Open Access Journals (Sweden)

    A. S. Tiganov

    2014-07-01

Full Text Available Purpose: To investigate the retinal nerve fiber layer (RNFL) and the macular ganglion cell complex (GCC) in patients with Alzheimer's disease and mild cognitive impairment. Methods: This study included 10 patients (20 eyes) with Alzheimer's disease, 10 patients with mild cognitive impairment and 10 age- and sex-matched healthy controls that had no history of dementia. All the subjects underwent psychiatric examination, including the Mini-Mental State Examination (MMSE), and complete ophthalmological examination, comprising optical coherence tomography and scanning laser polarimetry. Results: There was a significant decrease in GCC thickness in patients with Alzheimer's disease compared to the control group, and the global loss volume of ganglion cells was higher than in the control group. There was no significant difference among the groups in terms of RNFL thickness. A weak positive correlation between GCC thickness and MMSE results was observed. Conclusion: Our data confirm retinal involvement in Alzheimer's disease, as reflected by loss of ganglion cells. Further studies will clarify the role and contribution of dementia in the pathogenesis of optic neuropathy.

  5. Lecithin Complex

    African Journals Online (AJOL)

    1Department of Food Science and Engineering, Xinyang College of Agriculture and ... Results: The UV and IR spectra of the complex showed an additive effect of polydatin-lecithin, in which .... Monochromatic Cu Ka radiation (wavelength =.

6. Results of complex studies of the radiation state of temporary radioactive waste localization sites in the Chernobyl Exclusion Zone; Rezul`taty kompleksnykh issledovanij radiatsionnogo sostoyaniya punktov vremennoj lokalizatsii radioaktivnykh otkhodov v Zone otchuzhdeniya ChAEhS.

    Energy Technology Data Exchange (ETDEWEB)

    Ledenev, A I; Ovcharov, P A; Mishunina, I B; Antropov, V M [Naukovo-Tekhnyichnij Tsentr z dezaktivatsyiyi ta kompleksnogo povodzhennya z radyioaktivnimi vyidkhodami, Zhovtyi Vodi (Ukraine)

    1994-12-31

The paper describes complex studies of the radiation state of temporary radioactive waste localization sites in the near zone of the Chernobyl NPP and provides the results of these studies, as well as the results of inspections of radioactive waste hidden in 1990-1994.

  7. Radioactivity of heavy minerals and geochemistry of trace elements and radon, resulting from the weathering of the ophiolitic complex, Northwest of Syria

    International Nuclear Information System (INIS)

    Kattaa, B.; Al-Hilal, M.; Jubeli, Y.

    1999-06-01

A geochemical and radiometric survey of stream sediments resulting from the weathering of the ophiolitic complex in the Al-Basit area was carried out. Several exploration techniques have been used to evaluate the radioactive elements and minerals in the area, and to estimate the significance of the radioactivity of their source rocks. Determination of radioelements and some trace elements in stream sediments, in addition to gamma-ray spectrometry and radon gas measurements in the water of springs and wells, was carried out in the study area. The results show that the high values of radioactive elements and radon concentration are related to the occurrence of nepheline syenite and plagiogranite, which are characterized by a higher content of these elements compared to the mafic and ultramafic rocks. Mineralogical study of the heavy minerals shows that the abundant minerals are pyroxene and amphibole, while the less abundant minerals are iron oxides (limonite and hematite), chromite, ilmenite, olivine and magnetite. Rare minerals are zircon, apatite, barite, tourmaline and sphene. The absence of monazite, xenotime and thorite in the collected samples is mainly attributed to the very limited occurrence of their source rocks; however, this result is restricted to the collected samples. A high concentration of magnetite and ilmenite was noted in some samples, in addition to the presence of a mineral called galaxite, which was not reported previously. A gold flake was also found in one of the samples. The study of grain morphology suggests that the heavy minerals were transported only a short distance from their source rocks. Grain size analysis of the heavy minerals reveals that the concentration of economic minerals such as chromite and ilmenite increases as the grain size decreases. (author)

  8. A high performance cost-effective digital complex correlator for an X-band polarimetry survey.

    Science.gov (United States)

    Bergano, Miguel; Rocha, Armando; Cupido, Luís; Barbosa, Domingos; Villela, Thyrso; Boas, José Vilas; Rocha, Graça; Smoot, George F

    2016-01-01

    The detailed knowledge of the Milky Way radio emission is important to characterize galactic foregrounds masking extragalactic and cosmological signals. The update of the global sky models describing radio emissions over a very large spectral band requires high sensitivity experiments capable of observing large sky areas with long integration times. Here, we present the design of a new 10 GHz (X-band) polarimeter digital back-end to map the polarization components of the galactic synchrotron radiation field of the Northern Hemisphere sky. The design follows the digital processing trends in radio astronomy and implements a large bandwidth (1 GHz) digital complex cross-correlator to extract the Stokes parameters of the incoming synchrotron radiation field. The paper covers the hardware constraints, the implemented VLSI hardware description language code and preliminary results. The implementation is based on the simultaneous digitized acquisition of the Cartesian components of the two linear receiver polarization channels. The design strategy involves a double data rate acquisition of the ADC interleaved parallel bus, and field programmable gate array device programming at the register transfer level. The digital core of the back-end is capable of processing 32 Gbps and is built around an Altera field programmable gate array clocked at 250 MHz, 1 GSps analog to digital converters and a clock generator. The control of the field programmable gate array internal signal delays and a convenient use of its phase locked loops provide the timing requirements to achieve the target bandwidths and sensitivity. This solution is convenient for radio astronomy experiments requiring large bandwidth, high functionality, high volume availability and low cost. Of particular interest, this correlator was developed for the Galactic Emission Mapping project and is suitable for large sky area polarization continuum surveys. The solutions may also be adapted to be used at signal processing
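
    The core digital operation described above, the complex cross-correlation of the two linear polarization channels used to form the Stokes parameters, can be illustrated numerically. The Python sketch below is only an illustration of the arithmetic under a common Stokes convention; it is not the FPGA implementation, and the function name, channel labels and the sign of V are assumptions.

        import numpy as np

        def stokes_from_channels(ex, ey):
            """Illustrative complex cross-correlation of two linear polarization
            channels; ex, ey are complex baseband samples (I + jQ)."""
            exx = np.mean(ex * np.conj(ex)).real   # auto-correlation <Ex Ex*>
            eyy = np.mean(ey * np.conj(ey)).real   # auto-correlation <Ey Ey*>
            exy = np.mean(ex * np.conj(ey))        # complex cross-correlation <Ex Ey*>
            stokes_i = exx + eyy
            stokes_q = exx - eyy
            stokes_u = 2.0 * exy.real
            stokes_v = 2.0 * exy.imag              # sign convention is an assumption
            return stokes_i, stokes_q, stokes_u, stokes_v

        # toy usage with random noise standing in for digitized samples
        rng = np.random.default_rng(0)
        ex = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
        ey = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
        print(stokes_from_channels(ex, ey))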

  9. Results of Hydraulic Tests in Miocene Tuffaceous Rocks at the C-Hole Complex, 1995 to 1997, Yucca Mountain, Nye County, Nevada

    Science.gov (United States)

    Geldon, Arthur L.; Umari, Amjad M.A.; Fahy, Michael F.; Earle, John D.; Gemmell, James M.; Darnell, Jon

    2002-01-01

    ; and storativity ranges from 0.00002 to 0.002. Transmissivity in the Miocene tuffaceous rocks decreases from 2,600 to 700 meters squared per day northwesterly across the 21-square-kilometer area affected by hydraulic tests at the C-hole complex. The average transmissivity of the tuffaceous rocks in this area, as determined from plots of drawdown in most or all observation wells as functions of time or distance from the pumping well, is 2,100 to 2,600 meters squared per day. Average storativity determined from these plot ranges is 0.0005 to 0.002. Hydraulic conductivity ranges from less than 2 to more than 10 meters per day; it is largest where prominent northerly trending faults are closely spaced or intersected by northwesterly trending faults. During hydraulic tests, the Miocene tuffaceous rocks functioned as a single aquifer. Drawdown occurred in all monitored intervals of the C-holes and other observation wells, regardless of the hydrogeologic interval being pumped. This hydraulic connection across geologic and lithostratigraphic contacts is believed to result from interconnected faults, fractures, and intervals with large matrix permeability. Samples of UE-25 c #3 water, analyzed from 1995 to 1997, seem to indicate that changes in the quality of the water pumped from that well are probably due solely to lateral variations in water quality within the tuffaceous rocks.
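
    The record states that transmissivity and storativity were determined from plots of drawdown versus time or distance from the pumping well, but it does not name the analysis method; as a hedged reference point, the widely used Cooper-Jacob straight-line approximation for a time-drawdown plot reads

        T = \frac{2.3\,Q}{4\pi\,\Delta s}, \qquad S = \frac{2.25\,T\,t_{0}}{r^{2}},

    where Q is the pumping rate, \Delta s the drawdown change per log cycle of time, t_{0} the zero-drawdown intercept time and r the distance from the pumping well. The report's own method may differ.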

  10. Pelota interacts with HAX1, EIF3G and SRPX and the resulting protein complexes are associated with the actin cytoskeleton

    Directory of Open Access Journals (Sweden)

    Hoyer-Fender Sigrid

    2010-04-01

    Full Text Available Abstract Background Pelota (PELO) is an evolutionarily conserved protein, which has been reported to be involved in the regulation of cell proliferation and stem cell self-renewal. Recent studies revealed the essential role of PELO in No-Go mRNA decay, by which mRNAs on which translation stalls are endonucleolytically cleaved and degraded. Further, PELO-deficient mice die early during gastrulation due to defects in cell proliferation and/or differentiation. Results We show here that PELO is associated with actin microfilaments of mammalian cells. Overexpression of human PELO in Hep2G cells had a prominent effect on cell growth, cytoskeleton organization and cell spreading. To find proteins interacting with PELO, full-length human PELO cDNA was used as bait in a yeast two-hybrid screening assay. Partial sequences of the HAX1, EIF3G and SRPX proteins were identified as PELO-interacting partners from the screening. The interactions between PELO and HAX1, EIF3G and SRPX were confirmed in vitro by GST pull-down assays and in vivo by co-immunoprecipitation. Furthermore, the PELO interaction domain was mapped to residues 268-385, containing the C-terminal and acidic tail domain. By bimolecular fluorescence complementation assay (BiFC), we found that protein complexes resulting from the interactions between PELO and either HAX1, EIF3G or SRPX were mainly localized to cytoskeletal filaments. Conclusion We could show that PELO is subcellularly localized at the actin cytoskeleton, interacts with the HAX1, EIF3G and SRPX proteins and that this interaction occurs at the cytoskeleton. Binding of PELO to cytoskeleton-associated proteins may enable PELO to detect and degrade aberrant mRNAs on which the ribosome has stalled during translation.

  11. Scoping-level Probabilistic Safety Assessment of a complex experimental facility: Challenges and first results from the application to a neutron source facility (MEGAPIE)

    International Nuclear Information System (INIS)

    Podofillini, L.; Dang, V.N.; Thomsen, K.

    2008-01-01

    This paper presents a scoping-level application of Probabilistic Safety Assessment (PSA) to selected systems of a complex experimental facility. In performing a PSA for this type of facility, a number of challenges arise, mainly due to the extensive use of electronic and programmable components and of one-of-a-kind components. The experimental facility is the Megawatt Pilot Target Experiment (MEGAPIE), which was hosted at the Paul Scherrer Institut (PSI). MEGAPIE demonstrated the feasibility of a liquid lead-bismuth target for spallation facilities at a proton beam power level of 1 MW. Given the challenges to estimate initiating event frequencies and failure event probabilities, emphasis is placed on the qualitative results obtainable from the PSA. Even though this does not allow a complete and appropriate characterization of the risk profile, some level of importance/significance evaluation was feasible, and practical and detailed recommendations on potential system improvements were derived. The second part of the work reports on a preliminary quantification of the facility risk. This provides more information on risk significance, which allows prioritizing the insights and recommendations obtained from the PSA. At the present stage, the limited knowledge on initiating and failure events is reflected in the uncertainties in their probabilities as well as in inputs quantified with bounding values. Detailed analyses to improve the quantification of these inputs, many of which turn out to be important contributors, were out of the scope of this study. Consequently, the reported results should be primarily considered as a demonstration of how quantification of the facility risk by a PSA can support risk-informed decisions, rather than precise figures of the facility risk

  12. A lecithin phosphatidylserine and phosphatidic acid complex (PAS) reduces symptoms of the premenstrual syndrome (PMS): Results of a randomized, placebo-controlled, double-blind clinical trial.

    Science.gov (United States)

    Schmidt, Katja; Weber, Nicole; Steiner, Meir; Meyer, Nadin; Dubberke, Anne; Rutenberg, David; Hellhammer, Juliane

    2018-04-01

    Many women experience emotional and physical symptoms around the time of ovulation, and more so before menstruation, that interfere with their normal daily life; this is known as premenstrual syndrome (PMS). Recent observational data suggest that supplementation with Lipogen's phosphatidylserine (PS) and phosphatidic acid (PA) complex (PAS) alleviates these PMS symptoms. The aim of this study was to confirm these observations on the effects of PAS on PMS symptom severity within a controlled clinical trial setting. Forty women aged 18-45 years with a diagnosis of PMS were assigned to take either PAS (containing 400 mg PS & 400 mg PA per day) or a matching placebo. The study comprised 5 on-site visits including 1 baseline menstrual cycle followed by 3 treatment cycles. Treatment intake was controlled by using an electronic device, the Medication Event Monitoring System (MEMS®). The primary outcome of the study was PMS symptom severity as assessed by the Daily Record of Severity of Problems (DRSP). Further, the SIPS questionnaire (a German version of the Premenstrual Symptoms Screening Tool (PSST)), salivary hormone levels (cortisol awakening response (CAR) and evening cortisol levels) as well as serum levels (cortisol, estradiol, progesterone and corticosteroid binding globulin (CBG)) were assessed. PMS symptoms as assessed by the DRSP Total score showed a significantly greater improvement (p = 0.001) over 3 cycles of PAS intake as compared to placebo. In addition, PAS-treated women reported a greater improvement in physical (p = 0.002) and depressive symptoms (p = 0.068). They also reported a smaller reduction in productivity (p = 0.052) and a stronger decrease in interference with relationships with others (p = 0.099) compared to the placebo group. No other DRSP scale or item showed significant results. Likewise, the reduction in the number of subjects fulfilling PMS or premenstrual dysphoric disorder (PMDD) criteria as classified by the SIPS did not

  13. A familial Cri-du-Chat/5p deletion syndrome resulting from rare maternal complex chromosomal rearrangements (CCRs) and/or possible chromosome 5p chromothripsis.

    Directory of Open Access Journals (Sweden)

    Heng Gu

    Full Text Available Cri-du-Chat syndrome (MIM 123450) is a chromosomal syndrome characterized by distinctive features, including a cat-like cry and chromosome 5p deletions. We report a family with five individuals showing chromosomal rearrangements involving 5p, resulting from rare maternal complex chromosomal rearrangements (CCRs), diagnosed post- and pre-natally by comprehensive molecular and cytogenetic analyses. Two probands, a 4½-year-old brother and his 2½-year-old sister, showed no diagnostic cat cry during infancy, but presented with developmental delay, dysmorphic and autistic features. Both patients had an interstitial deletion del(5)(p13.3p15.33) spanning ≈ 26.22 Mb. The phenotypically normal mother had de novo CCRs involving 11 breakpoints and three chromosomes: ins(11;5)(q23;p14.1p15.31), ins(21;5)(q21;p13.3p14.1), ins(21;5)(q21;p15.31p15.33), inv(7)(p22q32)dn. In addition to these two children, she had three first-trimester miscarriages, two terminations due to the identification of the 5p deletion and one delivery of a phenotypically normal daughter. The unaffected daughter had the maternal ins(11;5) identified prenatally and an identical maternal allele haplotype of 5p. Array CGH did not detect any copy number changes in the mother, and revealed three interstitial deletions within 5p15.33-p13.3 in the unaffected daughter, likely products of the maternal insertions ins(21;5). Chromothripsis has recently been reported as a mechanism that drives germline CCRs in pediatric patients with congenital defects. We postulate that the unique CCRs in the phenotypically normal mother could have resulted from chromosome 5p chromothripsis, which further resulted in the interstitial 5p deletions in the unaffected daughter. Further high-resolution sequencing-based analysis is needed to determine whether chromothripsis is also present as a germline structural variation in phenotypically normal individuals in this family.

  14. Surface complexation modeling of groundwater arsenic mobility: Results of a forced gradient experiment in a Red River flood plain aquifer, Vietnam

    Science.gov (United States)

    Jessen, Søren; Postma, Dieke; Larsen, Flemming; Nhan, Pham Quy; Hoa, Le Quynh; Trang, Pham Thi Kim; Long, Tran Vu; Viet, Pham Hung; Jakobsen, Rasmus

    2012-12-01

    Three surface complexation models (SCMs), developed for, respectively, ferrihydrite, goethite and sorption data for a Pleistocene oxidized aquifer sediment from Bangladesh, were used to explore the effect of multicomponent adsorption processes on As mobility in a reduced Holocene floodplain aquifer along the Red River, Vietnam. The SCMs for ferrihydrite and goethite yielded very different results. The ferrihydrite SCM favors As(III) over As(V) and has carbonate and silica species as the main competitors for surface sites. In contrast, the goethite SCM has a greater affinity for As(V) over As(III), while PO₄³⁻ and Fe(II) form the predominant surface species. The SCM for the Pleistocene aquifer sediment most resembles the goethite SCM but shows more Si sorption. Compiled As(III) adsorption data for Holocene sediment were also well described by the SCM determined for the Pleistocene aquifer sediment, suggesting a comparable As(III) affinity of Holocene and Pleistocene aquifer sediments. A forced gradient field experiment was conducted in a bank aquifer adjacent to a tributary channel of the Red River, and the passage in the aquifer of mixed groundwater containing up to 74% channel water was observed. The concentrations of As … The ferrihydrite SCM correctly predicts desorption for As(III), but for Si and PO₄³⁻ it predicts increased adsorption instead of desorption. The goethite SCM correctly predicts desorption of both As(III) and PO₄³⁻ but fails in the prediction of Si desorption. These results indicate that the prediction of As mobility using SCMs for synthetic Fe-oxides will be strongly dependent on the model chosen. The SCM based on the Pleistocene aquifer sediment predicts the desorption of As(III), PO₄³⁻ and Si markedly better than the SCMs for ferrihydrite and goethite, even though Si desorption is still somewhat under-predicted. The observation that an SCM calibrated on a different sediment can predict our field results so well suggests that sediment-based SCMs may be a

  15. Fractional Brownian motions via random walk in the complex plane and via fractional derivative. Comparison and further results on their Fokker-Planck equations

    International Nuclear Information System (INIS)

    Jumarie, Guy

    2004-01-01

    There are presently two different models of fractional Brownian motion available in the literature: the Riemann-Liouville fractional derivative of white noise on the one hand, and the complex-valued Brownian motion of order n defined by using a random walk in the complex plane, on the other hand. The paper provides a comparison between these two approaches and, in addition, takes this opportunity to contribute some complements. These two models are more or less equivalent from a theoretical standpoint for fractional orders between 0 and 1/2, but their practical significances are quite different. However, for orders larger than 1/2, the fractional derivative model has no counterpart in the complex plane. These differences are illustrated by an example drawn from mathematical finance. Taylor expansion of fractional order provides the expression of the fractional difference in terms of finite differences, and this allows us to improve the derivation of the Fokker-Planck equation and the Kramers-Moyal expansion, and to gain more insight into their relation with stochastic differential equations of fractional order. In the case of multi-fractal systems, the Fokker-Planck equation can be solved by using path integrals, and the fractional dynamic equations of the state moments of the stochastic system can be easily obtained. By combining the fractional derivative and complex white noise of order n, one obtains a family of complex-valued fractional Brownian motions which exhibits long-range dependence. The conclusion outlines suggestions for further research, mainly regarding Lorentz transformation of fractional noises
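
    For concreteness, the Riemann-Liouville construction mentioned above defines a fractional Brownian motion as a fractional integral of ordinary Brownian motion; a standard form, written here with a Hurst-type index H (the paper's "order" parameter may be defined differently), is

        B_{H}(t) = \frac{1}{\Gamma\!\left(H + \tfrac{1}{2}\right)} \int_{0}^{t} (t - s)^{H - \frac{1}{2}}\, \mathrm{d}W(s),

    where W is standard Brownian motion and \Gamma is the gamma function.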

  16. First results of U-Pb dating of metamorphic rocks of the Greater Antilles arc: age of the Mabujina complex (Cuba)

    International Nuclear Information System (INIS)

    Bibikova, E.V.; Somin, M.L.; Gracheva, T.V.; Makarov, V.A.; Mil'yan, G.; Shukolyukov, Yu.A.; AN SSSR, Moscow

    1988-01-01

    U-Pb dating of zircons from the metamorphic rocks of the Mabujina complex was carried out in order to address the question of the place of metamorphic complexes in the structure and tectonic evolution of the Greater Antilles arc. The accuracy of the uranium and lead determinations was ±1%, and the accuracy of the lead isotopic ratio determinations, using a TSN-206A mass-spectrometer, was ±0.15%. The isotope data show that all examined zircons crystallized about 100 million years ago

  17. Test beam results from the prototype L3 silicon microvertex detector

    International Nuclear Information System (INIS)

    Adam, A.; Adriani, O.; Ahlen, S.

    1993-11-01

    We report test beam results on the overall system performance of two modules of the L3 Silicon Microvertex Detector exposed to a 50 GeV pion beam. Each module consists of two AC coupled double-sided silicon strip detectors equipped with VLSI readout electronics. The associated data acquisition system comprises an 8 bit FADC, an optical data transmission circuit, a specialized data reduction processor and a synchronization module. A spatial resolution of 7.5 μm and 14 μm for the two coordinates and a detection efficiency in excess of 99% are measured. (orig.)

  18. Comparison between results of detailed tectonic studies on borehole core vs microresistivity images of borehole wall from gas-bearing shale complexes, Baltic Basin, Poland.

    Science.gov (United States)

    Bobek, Kinga; Jarosiński, Marek; Pachytel, Radomir

    2017-04-01

    , cemented with calcite, were clearly visible in the scanner image. We have also observed a significantly lower density of veins in core than in the XRMI images, which occurs systematically in one formation enriched with carbonate and dolomite. In this case, the veins are not fractured in the core and are obscured to the naked eye by dolomitization, but are still contrastive with respect to electrical resistivity. The calculated density of bedding planes per meter reveals a systematically higher density of fractures observed in core than in the XRMI images (picked automatically by the interpretation program). This difference may come from additional fracturing due to relaxation of the borehole core during recovery. Comparison of vertical joint fracture density with the thickness of mechanical beds shows either a lack of significant trends or a negative correlation (a greater density of bedding fractures corresponds to a lower density of joints). This result, obtained for shale complexes, contradicts that derived for sandstones or limestones. Boundaries between CLUs are visible on both the joint and the bedding fracture density profiles. Considering small-scale faults and slickensides, we have obtained good agreement between the results of core and scanner interpretation. This study, carried out in the frame of the ShaleMech Project funded by the Polish Committee for Scientific Research, is in progress and the results are preliminary.

  19. Absence of Non-histone Protein Complexes at Natural Chromosomal Pause Sites Results in Reduced Replication Pausing in Aging Yeast Cells

    Directory of Open Access Journals (Sweden)

    Marleny Cabral

    2016-11-01

    Full Text Available There is substantial evidence that genomic instability increases during aging. Replication pausing (and stalling) at difficult-to-replicate chromosomal sites may induce genomic instability. Interestingly, in aging yeast cells, we observed reduced replication pausing at various natural replication pause sites (RPSs) in ribosomal DNA (rDNA) and non-rDNA locations (e.g., silent replication origins and tRNA genes). The reduced pausing occurs independently of the DNA helicase Rrm3p, which facilitates replication past these non-histone protein-complex-bound RPSs, and is independent of the deacetylase Sir2p. Conditions of caloric restriction (CR), which extend life span, also cause reduced replication pausing at the 5S rDNA and at tRNA genes. In aged and CR cells, the RPSs are less occupied by their specific non-histone protein complexes (e.g., the preinitiation complex TFIIIC), likely because members of these complexes have primarily cytosolic localization. These conditions may lead to reduced replication pausing and may lower replication stress at these sites during aging.

  20. Dealing with Complex and Ill-Structured Problems: Results of a Plan-Do-Check-Act Experiment in a Business Engineering Semester

    Science.gov (United States)

    Riis, Jens Ove; Achenbach, Marlies; Israelsen, Poul; Kyvsgaard Hansen, Poul; Johansen, John; Deuse, Jochen

    2017-01-01

    Challenged by increased globalisation and fast technological development, we carried out an experiment in the third semester of a global business engineering programme aimed at identifying conditions for training students in dealing with complex and ill-structured problems of forming a new business. As this includes a fuzzy front end, learning…

  1. A shift in nuclear state as the result of natural interspecific hybridization between two North American taxa of the basidiomycete complex Heterobasidion

    Science.gov (United States)

    Matteo Garbelotto; Paolo Gonthier; Rachel Linzer; Giovanni Nicolotti; William Otrosina

    2004-01-01

    A natural first generation hybrid fungus shows interspecific heterozygosity. The nuclear condition of a rare natural hybrid between two taxa of the Heterobasidion complex is investigated. Heterobasidion species are known to be either homokaryotic (haploid) or heterokaryotic (n + n), but heterokaryons are made up of both...

  2. Complex analysis and geometry

    CERN Document Server

    Silva, Alessandro

    1993-01-01

    The papers in this wide-ranging collection report on the results of investigations from a number of linked disciplines, including complex algebraic geometry, complex analytic geometry of manifolds and spaces, and complex differential geometry.

  3. Asymmetric Diels–Alder reaction with the >C=P– functionality of the 2-phosphaindolizine-η1-P-aluminium(O-menthoxy) dichloride complex: experimental and theoretical results

    Directory of Open Access Journals (Sweden)

    Rajendra K. Jangid

    2013-02-01

    Full Text Available The Diels–Alder reaction of the 2-phosphaindolizine-η1-P-aluminium(O-menthoxy) dichloride complex with 2,3-dimethylbutadiene was investigated experimentally and computationally. The >C=P– functionality of the complex reacts with 2,3-dimethylbutadiene with complete diastereoselectivity to afford [2 + 4] cycloadducts. Calculation of the model substrate, 3-methoxycarbonyl-1-methyl-2-phosphaindolizine-P-aluminium(O-menthoxy) dichloride (7a), at the DFT (B3LYP/6-31+G*) level reveals that the O-menthoxy moiety blocks the Re face of the >C=P– functionality, due to which the activation barrier of the Diels–Alder reaction of 7a with 1,3-butadiene, involving attack from the Si face, is lower. It is found that in this case the exo approach of the diene is slightly preferred over the endo approach.

  4. Complex manifolds

    CERN Document Server

    Morrow, James

    2006-01-01

    This book, a revision and organization of lectures given by Kodaira at Stanford University in 1965-66, is an excellent, well-written introduction to the study of abstract complex (analytic) manifolds-a subject that began in the late 1940's and early 1950's. It is largely self-contained, except for some standard results about elliptic partial differential equations, for which complete references are given. -D. C. Spencer, MathSciNet The book under review is the faithful reprint of the original edition of one of the most influential textbooks in modern complex analysis and geometry. The classic

  5. Nonperiodic activity of the human anaphase-promoting complex-Cdh1 ubiquitin ligase results in continuous DNA synthesis uncoupled from mitosis

    DEFF Research Database (Denmark)

    Lukas, C; Kramer, E R; Peters, J M

    2000-01-01

    Ubiquitin-proteasome-mediated destruction of rate-limiting proteins is required for timely progression through the main cell cycle transitions. The anaphase-promoting complex (APC), periodically activated by the Cdh1 subunit, represents one of the major cellular ubiquitin ligases which, in Saccharomyces cerevisiae and Drosophila spp., triggers exit from mitosis and during G(1) prevents unscheduled DNA replication. In this study we investigated the importance of periodic oscillation of the APC-Cdh1 activity for the cell cycle progression in human cells. We show that conditional interference... transition and lowered the rate of DNA synthesis during S phase, some of the activities essential for DNA replication became markedly amplified, mainly due to a progressive increase of E2F-dependent cyclin E transcription and a rapid turnover of the p27(Kip1) cyclin-dependent kinase inhibitor. Consequently...

  6. Dealing with complex and ill-structured problems: results of a Plan-Do-Check-Act experiment in a business engineering semester

    Science.gov (United States)

    Riis, Jens Ove; Achenbach, Marlies; Israelsen, Poul; Kyvsgaard Hansen, Poul; Johansen, John; Deuse, Jochen

    2017-07-01

    Challenged by increased globalisation and fast technological development, we carried out an experiment in the third semester of a global business engineering programme aimed at identifying conditions for training students in dealing with complex and ill-structured problems of forming a new business. As this includes a fuzzy front end, learning cannot be measured in traditional, quantitative terms; therefore, we have explored the use of reflection to convert tacit knowledge to explicit knowledge. The experiment adopted a Plan-Do-Check-Act approach and concluded with developing a plan for new learning initiatives in the subsequent year's semester. The findings conclude that (1) problem-based learning develops more competencies than are ordinarily measured at the examination; in particular, social/communication and personal competencies are developed; (2) students are capable of dealing with a complex and ambiguous problem, if properly guided. Four conditions were identified; (3) most students are not conscious of their learning, but are able to reflect if properly encouraged; and (4) improving engineering education should be considered as an organisational learning process.

  7. Demonstration of the Safety and Feasibility of Robotically Assisted Percutaneous Coronary Intervention in Complex Coronary Lesions: Results of the CORA-PCI Study (Complex Robotically Assisted Percutaneous Coronary Intervention).

    Science.gov (United States)

    Mahmud, Ehtisham; Naghi, Jesse; Ang, Lawrence; Harrison, Jonathan; Behnamfar, Omid; Pourdjabbar, Ali; Reeves, Ryan; Patel, Mitul

    2017-07-10

    The aims of this study were to evaluate the feasibility and technical success of robotically assisted percutaneous coronary intervention (R-PCI) for the treatment of coronary artery disease (CAD) in clinical practice, especially in complex lesions, and to determine the safety and clinical success of R-PCI compared with manual percutaneous coronary intervention (M-PCI). R-PCI is safe and feasible for simple coronary lesions. The utility of R-PCI for complex coronary lesions is unknown. All consecutive PCI procedures performed robotically (study group) or manually (control group) over 18 months were included. R-PCI technical success, defined as the completion of the procedure robotically or with partial manual assistance and without a major adverse cardiovascular event, was determined. Procedures ineligible for R-PCI (i.e., atherectomy, planned 2-stent strategy for bifurcation lesion, chronic total occlusion requiring hybrid approach) were excluded for analysis from the M-PCI group. Clinical success, defined as completion of the PCI procedure without a major adverse cardiovascular event, procedure time, stent use, and fluoroscopy time were compared between groups. A total of 315 patients (mean age 67.7 ± 11.8 years; 78% men) underwent 334 PCI procedures (108 R-PCIs, 157 lesions, 78.3% type B2/C; 226 M-PCIs, 336 lesions, 68.8% type B2/C). Technical success with R-PCI was 91.7% (rate of manual assistance 11.1%, rate of manual conversion 7.4%, rate of major adverse cardiovascular events 0.93%). Clinical success (99.1% with R-PCI vs. 99.1% with M-PCI; p = 1.00), stent use (stents per procedure 1.59 ± 0.79 with R-PCI vs. 1.54 ± 0.75 with M-PCI; p = 0.73), and fluoroscopy time (18.2 ± 10.4 min with R-PCI vs. 19.2 ± 11.4 min with M-PCI; p = 0.39) were similar between the groups, although procedure time was longer in the R-PCI group (44:30 ± 26:04 min:s vs. 36:34 ± 23:03 min:s; p = 0.002). Propensity-matched analysis confirmed that procedure time was longer

  8. Complex analysis

    CERN Document Server

    Freitag, Eberhard

    2005-01-01

    The guiding principle of this presentation of "Classical Complex Analysis" is to proceed as quickly as possible to the central results while using a small number of notions and concepts from other fields. Thus the prerequisites for understanding this book are minimal; only elementary facts of calculus and algebra are required. The first four chapters cover the essential core of complex analysis: - differentiation in C (including elementary facts about conformal mappings) - integration in C (including complex line integrals, Cauchy's Integral Theorem, and the Integral Formulas) - sequences and series of analytic functions, (isolated) singularities, Laurent series, calculus of residues - construction of analytic functions: the gamma function, Weierstrass' Factorization Theorem, Mittag-Leffler Partial Fraction Decomposition, and - as a particular highlight - the Riemann Mapping Theorem, which characterizes the simply connected domains in C. Further topics included are: - the theory of elliptic functions based on...

  9. Surface complexation modeling of groundwater arsenic mobility: Results of a forced gradient experiment in a Red River flood plain aquifer, Vietnam

    DEFF Research Database (Denmark)

    Jessen, Søren; Postma, Dieke; Larsen, Flemming

    2012-01-01

    , suggesting a comparable As(III) affinity of Holocene and Pleistocene aquifer sediments. A forced gradient field experiment was conducted in a bank aquifer adjacent to a tributary channel to the Red River, and the passage in the aquifer of mixed groundwater containing up to 74% channel water was observed......Three surface complexation models (SCMs) developed for, respectively, ferrihydrite, goethite and sorption data for a Pleistocene oxidized aquifer sediment from Bangladesh were used to explore the effect of multicomponent adsorption processes on As mobility in a reduced Holocene floodplain aquifer......(III) while PO43− and Fe(II) form the predominant surface species. The SCM for Pleistocene aquifer sediment resembles most the goethite SCM but shows more Si sorption. Compiled As(III) adsorption data for Holocene sediment was also well described by the SCM determined for Pleistocene aquifer sediment...

  10. Long-term clinical and angiographic results of Sirolimus-Eluting Stent in Complex Coronary Chronic Total Occlusion Revascularization: the SECTOR registry.

    Science.gov (United States)

    Galassi, Alfredo R; Tomasello, Salvatore D; Costanzo, Luca; Campisano, Maria B; Barrano, Giombattista; Tamburino, Corrado

    2011-10-01

    Drug-eluting stents have shown better angiographic and clinical outcomes than bare metal stents in percutaneous revascularization of chronic total occlusions (CTOs); however, great concerns still remain regarding the rates of restenosis and reocclusion in comparison with nonocclusive lesions. The aim was to evaluate angiographic and clinical outcomes after sirolimus-eluting stent (SES) implantation in the setting of a "real world" series of complex CTOs. From January 2006 to December 2008, 172 consecutive patients with 179 CTO lesions were enrolled into the registry. Among these, successful recanalization was obtained in 144 lesions (80.4%), with exclusive SES implantation in 104 lesions. Angiographic follow-up at 9-12 months was performed in 85.5% of lesions, with evidence of angiographic binary restenosis in 16.8% of lesions. Total stent length and number of stents implanted were recognized as independent predictors of restenosis (odds ratio [OR] 4.7, 95% confidence interval [CI] 1.28-107.09, P = 0.02, and OR 5.8, 95% CI 1.39-23.55, P = 0.01, respectively). The 2-year clinical follow-up showed rates of target lesion revascularization, non-Q wave myocardial infarction, and total major adverse cardiovascular events (MACEs) of 11.1%, 2%, and 13.1%, respectively. Cox proportional-hazards analysis identified diabetes as an independent predictor of MACEs (hazard ratio [HR] 4.832; 95% CI, 0.730-0.861; P = 0.028). Data from this registry demonstrate the long-term efficacy and safety of SES implantation after recanalization of complex CTOs. ©2011, Wiley Periodicals, Inc.

  11. Hydrous ferric oxide: evaluation of Cd-HFO surface complexation models combining Cd(K) EXAFS data, potentiometric titration results, and surface site structures identified from mineralogical knowledge.

    Science.gov (United States)

    Spadini, Lorenzo; Schindler, Paul W; Charlet, Laurent; Manceau, Alain; Vala Ragnarsdottir, K

    2003-10-01

    The surface properties of ferrihydrite were studied by combining wet chemical data, Cd(K) EXAFS data, and a surface structure and protonation model of the ferrihydrite surface. Acid-base titration experiments and Cd(II)-ferrihydrite sorption experiments were performed at pH values above 3. The titration data could be adequately modeled by the reaction ≡Fe-OH2(+1/2) ⇌ ≡Fe-OH(-1/2) + H+, log k(int) = -8.29, assuming the existence of a unique intrinsic microscopic constant, log k(int), and consequently the existence of a single significant type of acid-base reactive functional group. The surface structure model indicates that these groups are terminal water groups. The Cd(II) data were modeled assuming the existence of a single reactive site. The model fits the data set at low Cd(II) concentration and up to 50% surface coverage. At high coverage more Cd(II) ions than predicted are adsorbed, which is indicative of the existence of a second type of site of lower affinity. This agrees with the surface structure and protonation model developed, which indicates comparable concentrations of high- and low-affinity sites. The model further shows that for each class of low- and high-affinity sites there exists a variety of corresponding Cd surface complex structures, depending on the model crystal faces on which the complexes develop. Generally, high-affinity surface structures have surface coordinations of 3 and 4, as compared to 1 and 2 for low-affinity surface structures.
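
    Written out in conventional 1-pK surface-complexation notation (reconstructed here; the exact electrostatic sub-model used in the paper is not specified in this record), the single protonation reaction and intrinsic constant quoted above read

        \equiv\!\mathrm{FeOH_2^{+1/2}} \;\rightleftharpoons\; \equiv\!\mathrm{FeOH^{-1/2}} + \mathrm{H^+}, \qquad
        k^{\mathrm{int}} = \frac{\{\equiv\!\mathrm{FeOH^{-1/2}}\}\, a_{\mathrm{H^+}} \exp\!\left(-F\psi_0/RT\right)}{\{\equiv\!\mathrm{FeOH_2^{+1/2}}\}}, \qquad
        \log k^{\mathrm{int}} = -8.29,

    where \psi_0 is the surface potential and the exponential is the usual coulombic correction converting the bulk proton activity to its value at the charged surface.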

  12. Sub-Scale Orion Parachute Test Results from the National Full-Scale Aerodynamics Complex 80- By 120-ft Wind Tunnel

    Science.gov (United States)

    Anderson, Brian P.; Greathouse, James S.; Powell, Jessica M.; Ross, James C.; Schairer, Edward T.; Kushner, Laura; Porter, Barry J.; Goulding, Patrick W., II; Zwicker, Matthew L.; Mollmann, Catherine

    2017-01-01

    A two-week test campaign was conducted in the National Full-Scale Aerodynamics Complex 80 x 120-ft Wind Tunnel in support of Orion parachute pendulum mitigation activities. The test gathered static aerodynamic data using an instrumented, 3-tether system attached to the parachute vent in combination with an instrumented parachute riser. Dynamic data was also gathered by releasing the tether system and measuring canopy performance using photogrammetry. Several canopy configurations were tested and compared against the current Orion parachute design to understand changes in drag performance and aerodynamic stability. These configurations included canopies with varying levels and locations of geometric porosity as well as sails with increased levels of fullness. In total, 37 runs were completed for a total of 392 data points. Immediately after the end of the testing campaign a down-select decision was made based on preliminary data to support follow-on sub-scale air drop testing. A summary of a more rigorous analysis of the test data is also presented.

  13. Breast irradiation causes pallor in the nipple-areolar complex in women with Celtic skin type (result from the St. George and Wollongong randomised breast boost trial)

    International Nuclear Information System (INIS)

    Lee, Yoo Young Dominique; Hau, Eric; Browne, Lois H.; Chin, Yaw; Lee, Jessica; Szwajcer, Alison; Cail, Stacy; Nolan, David N.; Graham, Peter H.

    2014-01-01

    The nipple-areolar complex (NAC) has special histological properties, with a higher melanocyte concentration than breast skin. To date, there are no data describing the late effects on the NAC following breast-conserving therapy (BCT). This study evaluated colour changes in the NAC in patients treated with breast-conserving surgery and adjuvant radiotherapy after 5 years. Digital photographs obtained at 5 years following breast irradiation from the St. George and Wollongong (SGW) trial (NCT00138814) were evaluated by five experts using an iPad® (Apple Inc., Cupertino, CA, USA) application specifically created for this study. The SGW trial randomised 688 patients with Tis-2, N0-1, M0 carcinoma to a control arm of 50 Gy in 25 fractions and a boost arm of 45 Gy in 25 fractions plus a 16 Gy in 8 fractions electron boost. A total of 141/372 (38%) patients had an altered NAC (86% lighter, 10% darker). Patients with Celtic skin type had an increased likelihood of having an altered NAC (odds ratio (OR) 1.75; CI 1.1–2.7; P=0.011). On subgroup analysis, those with Celtic skin type receiving a biologically equivalent dose (BED) Gy3 ≥ 80 Gy had an OR of 3.03 (95% CI 1.2–7.5, P=0.016) for altered colour. There was a dose response, with more profound changes seen in the NAC compared with irradiated breast skin if BED Gy3 ≥ 80 Gy, with an OR of 2.42 (95% CI 1.1–5.6, P=0.036). In this Caucasian BCT population, over 30% of patients developed a lighter NAC, more commonly in women with Celtic skin type. The degree of this effect increased with higher radiation dose.
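
    For reference, the BED Gy3 values quoted above follow from the standard linear-quadratic expression with \alpha/\beta = 3 Gy; a worked value for the control-arm prescription is shown below as an illustration only (the trial's NAC-specific dose reconstruction may differ).

        \mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right), \qquad
        \mathrm{BED_3}(50~\mathrm{Gy\ in\ 25\ fractions}) = 50\left(1 + \frac{2}{3}\right) \approx 83~\mathrm{Gy_3}.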

  14. Determination of crystal and molecular structures of two complexes resulting from the reaction between bis (diethyl muconate) monocarbonyliron and monodentate nitrogenated heterocyclic ligands, by X-ray diffractometry

    International Nuclear Information System (INIS)

    Inumaru, A.T.

    1983-01-01

    The crystal structures of (diethyl muconate)(quinoline)dicarbonyliron and (diethyl muconate)(pyrazine)dicarbonyliron have been determined from diffractometric X-ray data using the heavy-atom method. (Diethyl muconate)(quinoline)dicarbonyliron, C21H21O6NFe. Crystal system: triclinic; space group P-1; a=7.766(2), b=9.664(2), c=14.917(2) Å; α=84.12(2), β=74.99(2), γ=76.54(2)°; V=1050.6(5) Å³; Z=2; Dc=1.382 Mg m⁻³; λ(Mo Kα)=0.71073 Å; μ(Mo Kα)=0.78 mm⁻¹. The final R-factor was 0.058 for 1589 reflections with I>3σ(I). (Diethyl muconate)(pyrazine)dicarbonyliron, C16H18O6N2Fe. Crystal system: monoclinic; space group P2₁/c; a=10.390(2), b=19.754(4), c=9.051(2) Å; β=108.27(2)°; V=1764(1) Å³; Z=4; Dc=1.469 Mg m⁻³; λ(Mo Kα)=0.71073 Å; μ(Mo Kα)=0.98 mm⁻¹. The final R-factor was 0.066 for 967 reflections with I>3σ(I). In both compounds the Fe(0) atom is pentacoordinated in the form of a square pyramid, with the nitrogen atom occupying the apical position in the pyrazine complex and one of the basal positions in the quinoline case. (Author) [pt
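
    The reported triclinic cell volume can be checked directly from the quoted cell parameters with the general cell-volume formula (a worked check added here, not part of the original abstract):

        V = abc\sqrt{1 - \cos^2\alpha - \cos^2\beta - \cos^2\gamma + 2\cos\alpha\cos\beta\cos\gamma};

    with a = 7.766 Å, b = 9.664 Å, c = 14.917 Å, α = 84.12°, β = 74.99° and γ = 76.54° this gives V ≈ 1050.6 Å³, in agreement with the quoted value.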

  15. Radiological results for samples collected on paired glass- and cellulose-fiber filters at the Sandia complex, Tonopah Test Range, Nevada

    International Nuclear Information System (INIS)

    Mizell, Steve A.; Shadel, Craig A.

    2016-01-01

    Airborne particulates are collected at U.S. Department of Energy sites that exhibit radiological contamination on the soil surface to help assess the potential for wind to transport radionuclides from the contamination sites. Collecting these samples was originally accomplished by drawing air through a cellulose-fiber filter. These filters were replaced with glass-fiber filters in March 2011. Airborne particulates were collected side by side on the two filter materials between May 2013 and May 2014. Comparisons of the sample mass and the radioactivity determinations for the side-by-side samples were undertaken to determine whether the change in the filter medium produced significantly different results. The differences in the results obtained using the two filter types were assessed visually by evaluating the time series and correlation plots and statistically by conducting a nonparametric matched-pair sign test. Generally, the glass-fiber filters collect larger samples of particulates and produce higher radioactivity values for the gross alpha, gross beta, and gamma spectroscopy analyses. However, the correlation between the radioanalytical results for the glass-fiber filters and the cellulose-fiber filters was not strong enough to generate a linear regression function to estimate the glass-fiber filter sample results from the cellulose-fiber filter sample results.

  16. Developing Software to “Track and Catch” Missed Follow-up of Abnormal Test Results in a Complex Sociotechnical Environment

    Science.gov (United States)

    Smith, M.; Murphy, D.; Laxmisan, A.; Sittig, D.; Reis, B.; Esquivel, A.; Singh, H.

    2013-01-01

    Summary Background Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider’s prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. Objectives The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. Methods We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA’s EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Results Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility’s “test” EHR system, thus demonstrating technical compatibility. Conclusion To address the factors involved in missed

  17. Managing Complexity

    Energy Technology Data Exchange (ETDEWEB)

    Chassin, David P.; Posse, Christian; Malard, Joel M.

    2004-08-01

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today’s most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically-based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This paper explores the state of the art in the use of physical analogs for understanding the behavior of some econophysical systems and in deriving stable and robust control strategies for them. In particular, we review and discuss applications of some analytic methods based on the thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood.

  18. Complexity explained

    CERN Document Server

    Erdi, Peter

    2008-01-01

    This book explains why complex systems research is important in understanding the structure, function and dynamics of complex natural and social phenomena. Readers will learn the basic concepts and methods of complex system research.

  19. Application of ASAR-ENVISAT Data for Monitoring Andean Volcanic Activity : Results From Lastarria-Azufre Volcanic Complex (Chile-Argentina)

    Science.gov (United States)

    Froger, J.; Remy, D.; Bonvalot, S.; Franco Guerra, M.

    2005-12-01

    Since the pioneering study on Mount Etna by Massonnet et al. in 1995, several works have illustrated the promising potential of Synthetic Aperture Radar Interferometry (INSAR) for the monitoring of volcanoes. In the case of wide, remote or hazardous volcanic areas in particular, INSAR represents a safer and more economical way to acquire measurements than ground-based geodetic networks. Here we present the preliminary results of an interferometric survey made with ASAR-ENVISAT data on a selection of South American volcanoes where deformation signals had previously been evidenced or are expected. An interesting result is the detection of present-day active ground deformation in the Azufre-Lastarria area (Chile-Argentina), indicating that the process identified during 1998-2000 by Pritchard and Simmons (2004) from ERS data is still active. The phase signal visible on the ASAR interferograms (03/2003-06/2005) is roughly elliptical with a 45 km NNE-SSW major axis. Its amplitude increases as a function of time and is compatible with ground uplift in the line of sight of the satellite. The ASAR time series (up to 840 days, 7 ASAR images) indicates a variable deformation rate, which might confirm the hypothesis of a non-uniform deformation process. We investigated the origin and the significance of the deformation using various source modelling strategies (analytical and numerical). The observed deformation can be explained by the infilling of an elliptical magmatic reservoir lying between 7 and 10 km depth. The deformation could represent the first stage of the formation of a new caldera, as it is correlated with a large, although subtle, topographic depression surrounded by a crown of monogenetic centers. A short-wavelength inflation has also been detected on Lastarria volcano. It could result from the on-going infilling of a small subsurface magmatic reservoir, possibly supplied by the deeper one. All these observations point out the need for closer monitoring of this area in

  20. Complex tibial plateau fractures treated by hybrid external fixation system: A correlation of followup computed tomography derived quality of reduction with clinical results

    Directory of Open Access Journals (Sweden)

    Konstantinos Kateros

    2018-01-01

    Full Text Available Background: Tibial plateau fractures are common due to high energy injuries. The principles of treatment include respect for the soft tissues, restoring the congruity of the articular surface and restoration of the anatomic alignment of the lower limb to enable early movement of the knee joint. There are various surgical fixation methods that can achieve these principles of treatment. Recognition of the particular fracture pattern is important, as this guides the surgical approach required in order to adequately stabilize the fracture. This study evaluates the results of the combined treatment with an external fixator and limited internal fixation, along with the advantages of using a postoperative computed tomography (CT) scan after implant removal. Materials and Methods: 55 patients with a mean age of 42 years (range 17–65 years) with tibial plateau fracture were managed in our institution between October 2010 and September 2013. Twenty fractures were classified as Schatzker VI and 35 as Schatzker V. There were 8 open fractures (2 Gustilo Anderson 3A and 6 Gustilo Anderson 2). All fractures were treated with closed reduction and hybrid external fixation (n = 21; 38.2%) or with minimal open reduction internal fixation and a hybrid system (n = 34; 61.8%). After removal of the fixators, a CT scan was planned for all cases, for correlation with the results. At final followup, the American Knee Society Score (AKSS) was administered. Results: All patients were evaluated with a minimum of 12 months of followup (range 12–21 months). Average time to union was 15.5 weeks (range 13–19 weeks). The postoperative joint congruity as evaluated in the postoperative CT scan was 5° in 19 cases (35%). Patients with residual joint depression of more than 4.5 mm displayed a 100% chance of getting poor-fair scores in both the AKSS knee and AKSS function scores. The association of a postoperative mechanical axis within 5° of the contralateral limb and improved knee scores was statistically

  1. Complex chemistry

    International Nuclear Information System (INIS)

    Kim, Bong Gon; Kim, Jae Sang; Kim, Jin Eun; Lee, Boo Yeon

    2006-06-01

    This book introduces complex chemistry in ten chapters, covering: the historical development of complex chemistry, coordination theory and Werner's coordination theory, and new developments in complex chemistry; nomenclature of complexes, with concepts and definitions; chemical formulas of coordination compounds; symbols for stereochemistry, stereo structure and isomerism; electronic structure and bonding theory of complexes; structural methods for complexes such as NMR and XAFS; equilibria and reactions in solution; organometallic chemistry; bioinorganic chemistry; materials chemistry of complexes; and complex design and computational chemistry.

  2. A complex multilevel attack on Pseudomonas aeruginosa algT/U expression and AlgT/U activity results in the loss of alginate production

    DEFF Research Database (Denmark)

    Sautter, Robert; Ramos, Damaris; Schneper, Lisa

    2012-01-01

    Infection by the opportunistic pathogen Pseudomonas aeruginosa is a leading cause of morbidity and mortality seen in cystic fibrosis (CF) patients. This is mainly due to the genotypic and phenotypic changes of the bacteria that cause conversion from a typical nonmucoid to a mucoid form in the CF...... lung. Mucoid conversion is indicative of overproduction of a capsule-like polysaccharide called alginate. The alginate-overproducing (Alg+) mucoid phenotype seen in the CF isolates is extremely unstable. Low oxygen tension growth of mucoid variants readily selects for nonmucoid variants. The switching...... complementation and sequencing of one Group B sap mutant, sap22, revealed that the nonmucoid phenotype was due to the presence of a mutation in PA3257. PA3257 encodes a putative periplasmic protease. Mutation of PA3257 resulted in decreased algT/U expression. Thus, inhibition of algT/U is a primary mechanism...

  3. Blind pre-analysis of the main building complex WWER-440/213 Paks for comparison of analytical and experimental results obtained by explosive testing (task 7a of workplan 95/96)

    International Nuclear Information System (INIS)

    1999-01-01

    Within the research programme on Benchmark studies of seismic analysis of WWER type reactors, a blind pre-analysis had to be prepared for the main building complex of Paks NPP, based on given excitations derived from explosion tests. The aim of the investigation was to validate different idealization concepts (mathematical models for the idealization of the structures and the soil) as well as investigation procedures (time domain and frequency domain analysis) and finally the software tools, by comparing dynamic properties (eigenfrequencies, eigenmodes, modal values) and structural response results (time histories and response spectra). This report contains the results of the blind pre-analysis performed using a three-dimensional idealization of the main building complex (reactor building, turbine house, galleries) by means of time domain and frequency domain calculation procedures

  4. Development of a CMOS time memory cell VLSI and CAMAC module with 0.5 ns resolution

    International Nuclear Information System (INIS)

    Arai, Y.; Ikeno, M.; Matsumura, T.

    1992-01-01

    A CMOS time-to-digital converter chip, the Time Memory Cell (TMC), for high-rate wire chamber application has been developed. The chip has a timing resolution of 0.52 ns, dissipates only 7 mW/channel, and contains 4 channels in a chip. Each channel has 1024 memory locations which act as a buffer 1μs deep. The chip was fabricated in a 0.8 μm CMOS process and is 5.0 mm by 5.6 mm. Using the TMC chip, a CAMAC module with 32 input channels was developed. This module is designed to operate in both 'Common Start' and 'Common Stop' modes. The circuit of the module and test results are described in this paper
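
    As a hedged illustration of how latched memory addresses map back to hit times (the actual TMC readout format and conventions are not described in this record), the conversion is just a bin-count times bin-width operation, optionally referenced backwards from a common stop:

        # Illustrative TMC address-to-time conversion; conventions are assumptions.
        BIN_NS = 0.52          # least-significant-bit timing resolution, ns
        DEPTH = 1024           # memory locations per channel (~1 us deep buffer)

        def hit_times_ns(hit_bins, common_stop_bin=None):
            """hit_bins: memory addresses (0..1023) where hits were latched.
            In 'Common Stop' mode, times are measured backwards from the stop bin."""
            if common_stop_bin is None:                     # 'Common Start' mode
                return [b * BIN_NS for b in hit_bins]
            return [((common_stop_bin - b) % DEPTH) * BIN_NS for b in hit_bins]

        print(hit_times_ns([10, 250, 1000]))                          # common start
        print(hit_times_ns([10, 250, 1000], common_stop_bin=1023))    # common stop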

  5. VLSI implementation of MIMO detection for 802.11n using a novel adaptive tree search algorithm

    International Nuclear Information System (INIS)

    Yao Heng; Jian Haifang; Zhou Liguo; Shi Yin

    2013-01-01

    A 4×4 64-QAM multiple-input multiple-output (MIMO) detector is presented for application in an IEEE 802.11n wireless local area network. The detector is an implementation of a novel adaptive tree search (ATS) algorithm, and multiple ATS cores need to be instantiated to achieve the wideband requirement of the 802.11n standard. Both the ATS algorithm and the architectural considerations are explained. The latency of the detector is 0.75 μs, and the detector has a gate count of 848 k with a total of 19 parallel ATS cores. Each ATS core runs at 67 MHz. Measurement results show that, compared with the floating-point ATS algorithm, the fixed-point implementation incurs a loss of 0.9 dB at a BER of 10⁻³. (semiconductor integrated circuits)
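
    The ATS algorithm itself is not spelled out in this record. As a generic reference point for the family of detectors it belongs to, the sketch below implements a plain depth-first tree search (sphere-decoder style) over a small constellation; all names, the constellation size and the pruning rule are illustrative and are not the ATS algorithm.

        import numpy as np

        def tree_search_detect(H, y, symbols):
            """Generic depth-first tree-search MIMO detector (illustrative only).
            H: Nr x Nt channel matrix, y: received vector, symbols: candidate points."""
            Q, R = np.linalg.qr(H)              # QR decomposition -> triangular search tree
            z = Q.conj().T @ y
            nt = H.shape[1]
            best = {"metric": np.inf, "s": None}

            def search(level, partial, metric):
                if metric >= best["metric"]:    # prune branches worse than the best leaf
                    return
                if level < 0:                   # reached a leaf: full candidate vector
                    best["metric"], best["s"] = metric, partial.copy()
                    return
                for s in symbols:               # expand children of this node
                    partial[level] = s
                    inc = abs(z[level] - R[level, level:] @ partial[level:]) ** 2
                    search(level - 1, partial, metric + inc)

            search(nt - 1, np.zeros(nt, dtype=complex), 0.0)
            return best["s"]

        # toy 2x2 QPSK example
        rng = np.random.default_rng(1)
        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
        s_true = qpsk[rng.integers(0, 4, 2)]
        y = H @ s_true + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
        print(np.allclose(tree_search_detect(H, y, qpsk), s_true))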

  6. Low-power low-noise mixed-mode VLSI ASIC for infinite dynamic range imaging applications

    Science.gov (United States)

    Turchetta, Renato; Hu, Y.; Zinzius, Y.; Colledani, C.; Loge, A.

    1998-11-01

    -offset compensation, iii) 15-bit pseudo-random counter. The power consumption is 255 μW/channel for a peaking time of 300 ns and an equivalent noise charge of 185 + 97*Cd electrons rms. Simulation and experimental results, as well as imaging results, will be presented.
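
    Taking the quoted noise figure at face value, the equivalent noise charge scales linearly with the detector capacitance Cd; a worked evaluation, assuming Cd is expressed in pF (the record does not state the unit), is

        \mathrm{ENC}(C_d) = 185 + 97\,C_d \ \ [\mathrm{e^-\ rms}], \qquad \mathrm{ENC}(C_d = 2\ \mathrm{pF}) \approx 379\ \mathrm{e^-\ rms}.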

  7. Spiking Neural Classifier with Lumped Dendritic Nonlinearity and Binary Synapses: A Current Mode VLSI Implementation and Analysis.

    Science.gov (United States)

    Bhaduri, Aritra; Banerjee, Amitava; Roy, Subhrajit; Kar, Sougata; Basu, Arindam

    2018-03-01

    We present a neuromorphic current mode implementation of a spiking neural classifier with lumped square-law dendritic nonlinearity. It has been shown previously in software simulations that such a system with binary synapses can be trained with structural plasticity algorithms to achieve comparable classification accuracy with fewer synaptic resources than conventional algorithms. We show that even in real analog systems with manufacturing imperfections (CV of 23.5% and 14.4% for dendritic branch gains and leaks, respectively), this network is able to produce comparable results with fewer synaptic resources. The chip fabricated in [Formula: see text]m complementary metal oxide semiconductor has eight dendrites per cell and uses two opposing cells per class to cancel common-mode inputs. The chip can operate down to a [Formula: see text] V and dissipates 19 nW of static power per neuronal cell and [Formula: see text] 125 pJ/spike. For two-class classification problems of high-dimensional rate encoded binary patterns, the hardware achieves performance comparable to a software implementation of the same network, with only about a 0.5% reduction in accuracy. On two UCI data sets, the integrated circuit has classification accuracy comparable to standard machine learners like support vector machines and extreme learning machines while using two to five times fewer binary synapses. We also show that the system can operate on mean rate encoded spike patterns, as well as short bursts of spikes. To the best of our knowledge, this is the first attempt in hardware to perform classification exploiting dendritic properties and binary synapses.
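
    The "lumped square-law dendritic nonlinearity with binary synapses" can be summarized in a few lines: each dendritic branch linearly sums its binary-weighted inputs, the branch sums are squared, and the soma adds the squared branch outputs, with two opposing cells per class to cancel common-mode input. The Python sketch below follows that description; the connectivity, sizes and decision rule are assumptions for illustration, not taken from the chip.

        import numpy as np

        def cell_response(x, conn):
            """x: binary input vector; conn: (n_dendrites, n_inputs) 0/1 matrix of
            binary synapses. A square-law nonlinearity is applied per dendrite."""
            branch_sums = conn @ x           # linear sum on each dendritic branch
            return np.sum(branch_sums ** 2)  # lumped square-law, then somatic sum

        def classify(x, conn_plus, conn_minus):
            """Two opposing cells per class: decide by which cell responds more,
            which also cancels common-mode input, as in the chip's differential scheme."""
            return 1 if cell_response(x, conn_plus) > cell_response(x, conn_minus) else 0

        # toy example: 8 dendrites per cell, 16 binary inputs, random binary connectivity
        rng = np.random.default_rng(2)
        conn_p = rng.integers(0, 2, (8, 16))
        conn_m = rng.integers(0, 2, (8, 16))
        x = rng.integers(0, 2, 16)
        print(classify(x, conn_p, conn_m))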

  8. NASA/DOD Aerospace Knowledge Diffusion Research Project. Report 15: Technical uncertainty and project complexity as correlates of information use by US industry-affiliated aerospace engineers and scientists: Results of an exploratory investigation

    Science.gov (United States)

    Pinelli, Thomas E.; Glassman, Nanci A.; Affelder, Linda O.; Hecht, Laura M.; Kennedy, John M.; Barclay, Rebecca O.

    1993-01-01

    An exploratory study was conducted that investigated the influence of technical uncertainty and project complexity on information use by U.S. industry-affiliated aerospace engineers and scientists. The study utilized survey research in the form of a self-administered mail questionnaire. U.S. aerospace engineers and scientists on the Society of Automotive Engineers (SAE) mailing list served as the study population. The adjusted response rate was 67 percent. The survey instrument is appendix C to this report. Statistically significant relationships were found to exist between technical uncertainty, project complexity, and information use. Statistically significant relationships were found to exist between technical uncertainty, project complexity, and the use of federally funded aerospace R&D. The results of this investigation are relevant to researchers investigating information-seeking behavior of aerospace engineers. They are also relevant to R&D managers and policy planners concerned with transferring the results of federally funded aerospace R&D to the U.S. aerospace industry.

  9. How complex can integrated optical circuits become?

    NARCIS (Netherlands)

    Smit, M.K.; Hill, M.T.; Baets, R.G.F.; Bente, E.A.J.M.; Dorren, H.J.S.; Karouta, F.; Koenraad, P.M.; Koonen, A.M.J.; Leijtens, X.J.M.; Nötzel, R.; Oei, Y.S.; Waardt, de H.; Tol, van der J.J.G.M.; Khoe, G.D.

    2007-01-01

    The integration scale in Photonic Integrated Circuits will be pushed to VLSI-level in the coming decade. This will bring major changes in both application and manufacturing. In this paper developments in Photonic Integration are reviewed and the limits for reduction of device dimensions are discussed.

  10. Recognition of VLSI Module Isomorphism

    Science.gov (United States)

    1990-03-01


  11. VLSI Based Multiprocessor Communications Networks.

    Science.gov (United States)

    1982-09-01

    Networks". The contract began on September 1,1980 and was approved on scientific /technical grounds for a duration of three years. Incremental funding was...values for the individual delays will vary from comunicating modules (ij) are shown in Figure 4 module to module due to processing and fabrication

  12. Complex dynamics

    CERN Document Server

    Carleson, Lennart

    1993-01-01

    Complex dynamics is today very much a focus of interest. Though several fine expository articles were available, by P. Blanchard and by M. Yu. Lyubich in particular, until recently there was no single source where students could find the material with proofs. For anyone in our position, gathering and organizing the material required a great deal of work going through preprints and papers and in some cases even finding a proof. We hope that the results of our efforts will be of help to others who plan to learn about complex dynamics and perhaps even lecture. Meanwhile books in the field are beginning to appear. The Stony Brook course notes of J. Milnor were particularly welcome and useful. Still we hope that our special emphasis on the analytic side will satisfy a need. This book is a revised and expanded version of notes based on lectures of the first author at UCLA over several Winter Quarters, particularly 1986 and 1990. We owe Chris Bishop a great deal of gratitude for supervising the production of cour...

  13. (II) complexes

    African Journals Online (AJOL)

    activities of Schiff base tin (II) complexes. Neelofar1 ... Conclusion: All synthesized Schiff bases and their Tin (II) complexes showed high antimicrobial and ...... Singh HL. Synthesis and characterization of tin (II) complexes of fluorinated Schiff bases derived from amino acids. Spectrochim Acta Part A: Molec Biomolec.

  14. Long-term outcome in patients treated with sirolimus-eluting stents in complex coronary artery lesions: 3-year results of the SCANDSTENT (Stenting Coronary Arteries in Non-Stress/Benestent Disease) trial

    DEFF Research Database (Denmark)

    Kelbaek, H.; Klovgaard, L.; Helqvist, S.

    2008-01-01

    data of the long-term outcome of patients with complex coronary artery lesions. METHODS: We randomly assigned 322 patients with total coronary occlusions or lesions located in bifurcations, ostial, or angulated segments of the coronary arteries to have SES or BMS implanted. RESULTS: At 3 years, major...... performed between 1 and 3 years after the index treatment (p = NS). According to revised definitions, stent thrombosis occurred in 5 patients (3.1%) in the SES group and in 7 patients (4.4%) in the BMS group (p = NS); very late stent thrombosis was observed in 4 versus 1 patient. CONCLUSIONS: A continued...

  15. Transition Complexity of Incomplete DFAs

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2010-08-01

    Full Text Available In this paper, we consider the transition complexity of regular languages based on the incomplete deterministic finite automata. A number of results on Boolean operations have been obtained. It is shown that the transition complexity results for union and complementation are very different from the state complexity results for the same operations. However, for intersection, the transition complexity result is similar to that of state complexity.
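
    For readers unfamiliar with the measure, the transition complexity of an incomplete deterministic finite automaton is the number of transitions that are actually defined, in contrast to state complexity, which counts states. A minimal sketch, assuming a dictionary-based DFA representation:

    # Transition complexity of an incomplete DFA: the number of defined transitions.
    # The DFA is represented as {state: {symbol: next_state}}; missing entries are
    # simply undefined (the automaton is incomplete).
    def transition_complexity(delta):
        return sum(len(moves) for moves in delta.values())

    def state_complexity(delta):
        return len(delta)

    # A small incomplete DFA over the alphabet {a, b}.
    delta = {
        0: {"a": 1},            # no transition on "b" from state 0
        1: {"a": 1, "b": 2},
        2: {"a": 2, "b": 2},
    }
    print(state_complexity(delta), transition_complexity(delta))   # 3 5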

  16. Complexity Plots

    KAUST Repository

    Thiyagalingam, Jeyarajan

    2013-06-01

    In this paper, we present a novel visualization technique for assisting the observation and analysis of algorithmic complexity. In comparison with conventional line graphs, this new technique is not sensitive to the units of measurement, allowing multivariate data series of different physical qualities (e.g., time, space and energy) to be juxtaposed together conveniently and consistently. It supports multivariate visualization as well as uncertainty visualization. It enables users to focus on algorithm categorization by complexity classes, while reducing visual impact caused by constants and algorithmic components that are insignificant to complexity analysis. It provides an effective means for observing the algorithmic complexity of programs with a mixture of algorithms and black-box software through visualization. Through two case studies, we demonstrate the effectiveness of complexity plots in complexity analysis in research, education and application. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  17. Cosmic Complexity

    Science.gov (United States)

    Mather, John C.

    2012-01-01

    What explains the extraordinary complexity of the observed universe, on all scales from quarks to the accelerating universe? My favorite explanation (which I certainly did not invent) is that the fundamental laws of physics produce natural instability, energy flows, and chaos. Some call the result the Life Force, some note that the Earth is a living system itself (Gaia, a "tough bitch" according to Margulis), and some conclude that the observed complexity requires a supernatural explanation (of which we have many). But my dad was a statistician (of dairy cows) and he told me about cells and genes and evolution and chance when I was very small. So a scientist must look for the explanation of how nature's laws and statistics brought us into conscious existence. And how is it that seemingly improbable events are actually happening all the time? Well, the physicists have countless examples of natural instability, in which energy is released to power change from simplicity to complexity. One of the most common to see is that cooling water vapor below the freezing point produces snowflakes, no two alike, and all complex and beautiful. We see it often so we are not amazed. But physicists have observed so many kinds of these changes from one structure to another (we call them phase transitions) that the Nobel Prize in 1992 could be awarded for understanding the mathematics of their common features. Now for a few examples of how the laws of nature produce the instabilities that lead to our own existence. First, the Big Bang (what an insufficient name!) apparently came from an instability, in which the "false vacuum" eventually decayed into the ordinary vacuum we have today, plus the most fundamental particles we know, the quarks and leptons. So the universe as a whole started with an instability. Then, a great expansion and cooling happened, and the loose quarks, finding themselves unstable too, bound themselves together into today's less elementary particles like protons and

  18. A new molecular phylogeny of the Laurencia complex (Rhodophyta, Rhodomelaceae) and a review of key morphological characters result in a new genus, Coronaphycus, and a description of C. novus.

    Science.gov (United States)

    Metti, Yola; Millar, Alan J K; Steinberg, Peter

    2015-10-01

    Within the Laurencia complex (Rhodophyta, Rhodomelaceae), six genera have been recognized based on both molecular analyses and morphology: Laurencia, Osmundea, Chondrophycus, Palisada, Yuzurua, and Laurenciella. Recently, new material from Australia has been collected and included in the current molecular phylogeny, resulting in a new clade. This study examined the generic delineations using a combination of morphological comparisons and phylogenetic analysis of chloroplast (rbcL) nucleotide sequence. The molecular phylogeny recovered eight (rather than six) clades; Yuzurua, Laurenciella, Palisada, and Chondrophycus showed as monophyletic clades each with strong support. However, the genera Osmundea and Laurencia were polyphyletic. Consequently, the new genus Coronaphycus is proposed, resulting in the new combination Coronaphycus elatus and a description of the new species C. novus. © 2015 Phycological Society of America.

  19. Effectiveness of quantitative MAA SPECT/CT for the definition of vascularized hepatic volume and dosimetric approach: phantom validation and clinical preliminary results in patients with complex hepatic vascularization treated with yttrium-90-labeled microspheres.

    Science.gov (United States)

    Garin, Etienne; Lenoir, Laurence; Rolland, Yan; Laffont, Sophie; Pracht, Marc; Mesbah, Habiba; Porée, Philippe; Ardisson, Valérie; Bourguet, Patrick; Clement, Bruno; Boucher, Eveline

    2011-12-01

    The goal of this study was to assess the use of quantitative single-photon emission computed tomography/computed tomography (SPECT/CT) analysis for vascularized volume measurements in the use of the yttrium-90-radiolabeled microspheres (TheraSphere). A phantom study was conducted for the validation of SPECT/CT volume measurement. SPECT/CT quantitative analysis was used for the measurement of the volume of distribution of the albumin macroaggregates (MAA; i.e., the vascularized volume) in the liver and the tumor, and the total activity contained in the liver and the tumor in four consecutive patients presenting with a complex liver vascularization referred for a treatment with TheraSphere. SPECT/CT volume measurement proved to be accurate (mean error <7%) and reproducible (interobserver concordance 0.99). For eight treatments, in cases of complex hepatic vascularization, the hepatic volumes based on angiography and CT led to a relative overestimation or underestimation of the vascularized hepatic volume by 43.2 ± 32.7% (5-87%) compared with SPECT/CT analyses. The vascularized liver volume taken into account calculated from SPECT/CT data, instead of angiography and CT data, results in modifying the activity injected for three treatments of eight. Moreover, quantitative analysis of SPECT/CT allows us to calculate the absorbed dose in the tumor and in the healthy liver, leading to doubling of the injected activity for one treatment of eight. MAA SPECT/CT is accurate for volume measurements. It provides a valuable contribution to the therapeutic planning of patients presenting with complex hepatic vascularization, in particular for calculating the vascularized liver volume, the activity to be injected and the absorbed doses. Studies should be conducted to assess the role of quantitative MAA/SPECT CT in therapeutic planning.

  20. pH-sensitive polymeric cisplatin-ion complex with styrene-maleic acid copolymer exhibits tumor-selective drug delivery and antitumor activity as a result of the enhanced permeability and retention effect.

    Science.gov (United States)

    Saisyo, Atsuyuki; Nakamura, Hideaki; Fang, Jun; Tsukigawa, Kenji; Greish, Khaled; Furukawa, Hiroyuki; Maeda, Hiroshi

    2016-02-01

    Cisplatin (CDDP) is widely used to treat various cancers. However, its distribution to normal tissues causes serious adverse effects. For this study, we synthesized a complex of styrene-maleic acid copolymer (SMA) and CDDP (SMA-CDDP), which formed polymeric micelles, to achieve tumor-selective drug delivery based on the enhanced permeability and retention (EPR) effect. SMA-CDDP is obtained by regulating the pH of the reaction solution of SMA and CDDP. The mean SMA-CDDP particle size was 102.5 nm in PBS according to electrophoretic light scattering, and the CDDP content was 20.1% (w/w). The release rate of free CDDP derivatives from the SMA-CDDP complex at physiological pH was quite slow (0.75%/day), whereas it was much faster at pH 5.5 (4.4%/day). SMA-CDDP thus had weaker in vitro toxicity at pH 7.4 but higher cytotoxicity at pH 5.5. In vivo pharmacokinetic studies showed a 5-fold higher tumor concentration of SMA-CDDP than of free CDDP. SMA-CDDP had more effective antitumor potential but lower toxicity than did free CDDP in mice after i.v. administration. Administration of parental free CDDP at 4 mg/kg×3 caused a weight loss of more than 5%; SMA-CDDP at 60 mg/kg (CDDP equivalent)×3 caused no significant weight change but markedly suppressed S-180 tumor growth. These findings together suggested using micelles of the SMA-CDDP complex as a cancer chemotherapeutic agent because of beneficial properties-tumor-selective accumulation and relatively rapid drug release at the acidic pH of the tumor-which resulted in superior antitumor effects and fewer side effects compared with free CDDP. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Complexity Theory

    Science.gov (United States)

    Lee, William H K.

    2016-01-01

    A complex system consists of many interacting parts, generates new collective behavior through self organization, and adaptively evolves through time. Many theories have been developed to study complex systems, including chaos, fractals, cellular automata, self organization, stochastic processes, turbulence, and genetic algorithms.

  2. FY 2000 report on the results of the model project on facilities for the effective utilization of industrial waste from industrial complex. Separate Volume 3; 2000 nendo Kogyo danchi sangyo haikibutsu yuko riyo setsubi model jigyo. Dai 3 Bunsatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-03-01

    For the purpose of promoting the effective utilization of industrial waste as petroleum substitution energy resource and reducing the consumption of fossil fuels in Thailand, a model project on facilities for the effective utilization of industrial waste from industrial complex was worked on, and the FY 2000 results were reported. In Separate Volume 3, drawings of the following were included: furnace, free board spray nozzle, dispersion air nozzle, secondary burner, sand make-up conveyor, sand discharge gate, boiler, silencer for boiler safety valve, steam header, steam accumulator, gas cooling tower, refuse drainage storage tank, small sized drainage pump, pressure tank, flue gas duct, air damper, incombustible conveyor 2, sand circulation system bag filter, weighing bridge, fan starter panel, control panel, control panel, local switch box, distributed control system, field instrument, flue gas analyzer. (NEDO)

  3. Managing Complexity

    DEFF Research Database (Denmark)

    Maylath, Bruce; Vandepitte, Sonia; Minacori, Patricia

    2013-01-01

    and into French. The complexity of the undertaking proved to be a central element in the students' learning, as the collaboration closely resembles the complexity of international documentation workplaces of language service providers. © Association of Teachers of Technical Writing.......This article discusses the largest and most complex international learning-by-doing project to date- a project involving translation from Danish and Dutch into English and editing into American English alongside a project involving writing, usability testing, and translation from English into Dutch...

  4. Complex variables

    CERN Document Server

    Fisher, Stephen D

    1999-01-01

    The most important topics in the theory and application of complex variables receive a thorough, coherent treatment in this introductory text. Intended for undergraduates or graduate students in science, mathematics, and engineering, this volume features hundreds of solved examples, exercises, and applications designed to foster a complete understanding of complex variables as well as an appreciation of their mathematical beauty and elegance. Prerequisites are minimal; a three-semester course in calculus will suffice to prepare students for discussions of these topics: the complex plane, basic

  5. Cobalt(III) complex

    Indian Academy of Sciences (India)

    Administrator

    e, 40 µM complex, 10 hrs after dissolution; f, 40 µM complex, after irradiation dose 15 Gy. ... and H-atoms result in reduction of Co(III) to Co(II). It is interesting to see in a complex containing multiple ligands what is the fate of the electron adduct species formed by electron addition. Reduction to Co(II) and intramolecular transfer ...

  6. Laboratory and test beam results from a large-area silicon drift detector

    CERN Document Server

    Bonvicini, V; Giubellino, P; Gregorio, A; Idzik, M; Kolojvari, A A; Montaño-Zetina, L M; Nouais, D; Petta, C; Rashevsky, A; Randazzo, N; Reito, S; Tosello, F; Vacchi, A; Vinogradov, L I; Zampa, N

    2000-01-01

    A very large-area (6.75 × 8 cm²) silicon drift detector with integrated high-voltage divider has been designed, produced and fully characterised in the laboratory by means of ad hoc designed MOS injection electrodes. The detector is of the "butterfly" type, the sensitive area being subdivided into two regions with a maximum drift length of 3.3 cm. The device was also tested in a pion beam (at the CERN PS) tagged by means of a microstrip detector telescope. Bipolar VLSI front-end cells featuring a noise of 250 e⁻ RMS at 0 pF with a slope of 40 e⁻/pF have been used to read out the signals. The detector showed an excellent stability and featured the expected characteristics. Some preliminary results will be presented. (12 refs).

  7. Softball Complex

    Science.gov (United States)

    Ellis, Jim

    1977-01-01

    The Parks and Recreation Department of Montgomery, Alabama, has developed a five-field softball complex as part of a growing community park with facilities for camping, golf, aquatics, tennis, and picnicking. (MJB)

  8. Complex Neutrosophic Subsemigroups and Ideals

    Directory of Open Access Journals (Sweden)

    Muhammad Gulistan

    2018-01-01

    Full Text Available In this article we study the idea of complex neutrosophic subsemigroups. We define the Cartesian product of complex neutrosophic subsemigroups, give some examples and study some of its related results. We also define complex neutrosophic (left, right, interior) ideals in semigroups. Furthermore, we introduce the concept of the characteristic function of complex neutrosophic sets and the direct product of complex neutrosophic sets, and prove some related results.

  9. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented

  10. Blind pre-analysis of the main building complex WWER-1000 Kozloduy. Comparison of analytical and experimental results obtained by explosive testing (task 8a of workplan 96/97)

    International Nuclear Information System (INIS)

    Krutzik, N.J.

    1999-01-01

    In accordance with the 96/97 workplan of the Research Programme on 'Benchmark Studies for Seismic Analysis and Testing of WWER-Type Nuclear Power Plants', blind pre analyses were prepared for the main building complex of the WWER-1000 based on given excitations derived from explosive tests. The investigations were performed by several institutions based on various mathematical models and procedures for consideration of soil-structure interaction effects, but on the same explosive test input data recently obtained. The methods of calculation and software tools used will also be different. The aim of this investigation is to validate different idealization concepts (mathematical models for the idealization of the structures and the soil) as well as investigation procedures (time domain and frequency domain analysis) and finally the software tools by comparing structural response results (time histories and response spectra). This report contains the results of the blind pre analysis performed by Siemens using an equivalent beam model of the main building of the WWER 1000. The calculations were performed by means of a frequency domain calculation

  11. Subgroup complexes

    CERN Document Server

    Smith, Stephen D

    2011-01-01

    This book is intended as an overview of a research area that combines geometries for groups (such as Tits buildings and generalizations), topological aspects of simplicial complexes from p-subgroups of a group (in the spirit of Brown, Quillen, and Webb), and combinatorics of partially ordered sets. The material is intended to serve as an advanced graduate-level text and partly as a general reference on the research area. The treatment offers optional tracks for the reader interested in buildings, geometries for sporadic simple groups, and G-equivariant equivalences and homology for subgroup complexes.

  12. Complexity for Artificial Substrates (

    NARCIS (Netherlands)

    Loke, L.H.L.; Jachowski, N.R.; Bouma, T.J.; Ladle, R.J.; Todd, P.A.

    2014-01-01

    Physical habitat complexity regulates the structure and function of biological communities, although the mechanisms underlying this relationship remain unclear. Urbanisation, pollution, unsustainable resource exploitation and climate change have resulted in the widespread simplification (and loss)

  13. Complex Networks

    CERN Document Server

    Evsukoff, Alexandre; González, Marta

    2013-01-01

    In the last decade we have seen the emergence of a new inter-disciplinary field focusing on the understanding of networks which are dynamic, large, open, and have a structure sometimes called random-biased. The field of Complex Networks is helping us better understand many complex phenomena such as the spread of diseases, protein interactions, social relationships, to name but a few. Studies in Complex Networks are gaining attention due to some major scientific breakthroughs proposed by network scientists helping us understand and model interactions contained in large datasets. In fact, if we could point to one event leading to the widespread use of complex network analysis, it would be the availability of online databases. Theories of Random Graphs from Erdös and Rényi from the late 1950s led us to believe that most networks had random characteristics. The work on large online datasets told us otherwise. Starting with the work of Barabási and Albert as well as Watts and Strogatz in the late 1990s, we now know th...

  14. Electrospun complexes - functionalised nanofibres

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, T.; Wolf, M.; Dreyer, B.; Unruh, D.; Krüger, C.; Menze, M. [Leibniz University Hannover, Institute of Inorganic Chemistry (Germany); Sindelar, R. [University of Applied Science Hannover, Faculty II (Germany); Klingelhöfer, G. [Gutenberg-University, Institute of Inorganic and Analytic Chemistry (Germany); Renz, F., E-mail: renz@acd.uni-hannover.de [Leibniz University Hannover, Institute of Inorganic Chemistry (Germany)

    2016-12-15

    Here we present a new approach of using iron-complexes in electro-spun fibres. We modify poly(methyl methacrylate) (PMMA) by replacing the methoxy group with diaminopropane or ethylenediamine. The complex is bound covalently via an imine bridge or an amide. The resulting polymer can be used in the electrospinning process without any further modification of the method, either as the pure reagent or mixed with small amounts of non-functionalised polymer, resulting in fibres of different qualities (Fig. 1).

  15. Complex variables

    CERN Document Server

    Flanigan, Francis J

    2010-01-01

    A caution to mathematics professors: Complex Variables does not follow conventional outlines of course material. One reviewer noting its originality wrote: ""A standard text is often preferred [to a superior text like this] because the professor knows the order of topics and the problems, and doesn't really have to pay attention to the text. He can go to class without preparation."" Not so here-Dr. Flanigan treats this most important field of contemporary mathematics in a most unusual way. While all the material for an advanced undergraduate or first-year graduate course is covered, discussion

  16. Shapes of interacting RNA complexes

    DEFF Research Database (Denmark)

    Fu, Benjamin Mingming; Reidys, Christian

    2014-01-01

    Shapes of interacting RNA complexes are studied using a filtration via their topological genus. A shape of an RNA complex is obtained by (iteratively) collapsing stacks and eliminating hairpin loops. This shape-projection preserves the topological core of the RNA complex and for fixed topological...... genus there are only finitely many such shapes. Our main result is a new bijection that relates the shapes of RNA complexes with shapes of RNA structures. This allows us to compute the shape polynomial of RNA complexes via the shape polynomial of RNA structures. We furthermore present a linear time uniform...... sampling algorithm for shapes of RNA complexes of fixed topological genus....

  17. Cooperativity of complex salt bridges

    OpenAIRE

    Gvritishvili, Anzor G.; Gribenko, Alexey V.; Makhatadze, George I.

    2008-01-01

    The energetic contribution of complex salt bridges, in which one charged residue (anchor residue) forms salt bridges with two or more residues simultaneously, has been suggested to have importance for protein stability. Detailed analysis of the net energetics of complex salt bridge formation using double- and triple-mutant cycle analysis revealed conflicting results. In two cases, it was shown that complex salt bridge formation is cooperative, i.e., the net strength of the complex salt bridge...

  18. FY 1999 report on the results of the project on the industrial science technology R and D. R and D of complex carbonhydrate production/utilization technology (Development of CO2 fixation/effective utilization technology by applying complex carbonhydrate); 1999 nendo sangyo kagaku gijutsu kenkyu kaihatsu jigyo. Fukugo toushitsu seisan riyo gijutsu no kenkyu kaihatsu (Fukugo toushitsu oyo nisankatanso koteika yuko riyo gijutsu kaihatsu) seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    For the purpose of making an industrial use of complex carbonhydrate performing an important function in substance recognition, etc., which cannot be realized by only nucleic acid, protein and lipid, the R and D of the basic technology related to the production/utilization were conducted, and the FY 1999 results were reported. In the study of synthesis/utilization/remodeling technology of complex carbonhydrate using the chemical synthesis method, the R and D were made not only on the synthesis of sugar chain but for the synthesis method of the peptide added with sugar chain, that is, glycopeptide. Further, as to the reaction with the aim of the application, there was hope for large-quantity synthesis of milk oligosaccharide by the oxygen method. In the study of the design technology of complex carbonhydrate molecules, the systematical analysis using the model glycopeptide was conducted of effects of the sugar chain addition part and sugar chain structure on the 3D structure and physiological activity. Concerning the synthesis/utilization/remodeling technology of complex carbonhydrate using the biological method, study was made of the animal cell utilization, microorganism utilization, technology of structural analysis of complex carbonhydrate, etc. (NEDO)

  19. Resident and young physician experience with complex cataract surgery and new cataract and refractive technology: Results of the ASCRS 2016 Young Eye Surgeons survey.

    Science.gov (United States)

    Schallhorn, Julie M; Ciralsky, Jessica B; Yeu, Elizabeth

    2017-05-01

    A survey was offered to attendees of the 2016 annual meeting of the American Society of Cataract and Refractive Surgery (ASCRS) as well as online to ASCRS members. Of the 429 self-identified surgeons in training or those with fewer than 5 years in practice, 83% had performed complex cataract surgery using iris expansion devices or capsular tension rings (63%) and 70% had implanted a toric intraocular lens (IOL). A minority of respondents had performed laser-assisted cataract surgery (27%) or implanted presbyopia-correcting IOLs (39%), and only half (50%) had performed laser vision correction (LVC). Comfort with complex cataract and IOL procedures improved with increasing number of cases performed until greater than 10 cases. From this we can conclude that young surgeons have adequate exposure to complex cataracts but lack experience in refractive surgery and new IOL technology. Reported surgeon confidence improved with increased experience and exposure. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  20. Theories of computational complexity

    CERN Document Server

    Calude, C

    1988-01-01

    This volume presents four machine-independent theories of computational complexity, which have been chosen for their intrinsic importance and practical relevance. The book includes a wealth of results - classical, recent, and others which have not been published before. In developing the mathematics underlying the size, dynamic and structural complexity measures, various connections with mathematical logic, constructive topology, probability and programming theories are established. The facts are presented in detail. Extensive examples are provided, to help clarify notions and constructions. The lists of exercises and problems include routine exercises, interesting results, as well as some open problems.

  1. MANAGEMENT OF SPORT COMPLEXES

    Directory of Open Access Journals (Sweden)

    Marian STAN

    2015-07-01

    Full Text Available The actuality of the investigated theme. Nowadays, human evolution, including intellectual development, shows that creative manpower and employment have been the means of fulfilling life's ambitions in society; in reality, man is the most important capital of society. The practice of sport also plays a significant role in an individual's life, which is why the initiation, launch and management of the activity of sports complexes reveal specific management features that we identify and explain in the current study. The aim of the research is to elaborate a theoretical basis for the management of sport complexes, to point out the factors that influence the efficient existence and functioning of a sport complex in our country, and to determine the responsibilities of a manager who successfully directs the activity of sport complexes. The investigation is based on theoretical methods, such as scientific documentation, analysis, synthesis and comparison, and on empirical research methods, such as study of the research literature and observation. The results of the research indicate that the profitability of a sport complex requires a particular structure to avoid the risk of bankruptcy, and that the administration of the activity of sport complexes must keep in view the reliable functions of contemporary management.

  2. Organotin complexes with phosphines

    International Nuclear Information System (INIS)

    Passos, B. de F.T.; Jesus Filho, M.F. de; Filgueiras, C.A.L.; Abras, A.

    1988-01-01

    A series of organotin complexes was prepared involving phosphines bonded to the organotin moiety. The series includes derivatives of SnClxPh4-x (where x varied from zero to four) with the phosphines Ph3P, (Ph2P)CH2, (Ph2P)2(CH2)2, cis-(Ph2P)CH2, and CH3C(CH2PPh2)3. A host of new complexes was obtained, showing different stoichiometries, bonding modes, and coordination numbers around the tin atom. These complexes were characterized by several different chemical and physical methods. The 119Sn Moessbauer parameters varied differently: whereas isomer shift values did not show great variation for each group of complexes with the same organotin parent (SnClxPh4-x), reflecting a small change in s charge distribution on the Sn atom upon complexation, quadrupole splitting results varied widely; however, when the parent organotin compound was wholly symmetric (SnCl4 and SnPh4), the complexes also tended to show quadrupole splitting values approaching zero. (author)

  3. Lukasiewicz-Moisil Many-Valued Logic Algebra of Highly-Complex Systems

    Directory of Open Access Journals (Sweden)

    James F. Glazebrook

    2010-06-01

    Full Text Available The fundamentals of Lukasiewicz-Moisil logic algebras and their applications to complex genetic network dynamics and highly complex systems are presented in the context of a categorical ontology theory of levels, Medical Bioinformatics and self-organizing, highly complex systems. Quantum Automata were defined in refs. [2] and [3] as generalized, probabilistic automata with quantum state spaces [1]. Their next-state functions operate through transitions between quantum states defined by the quantum equations of motion in the Schrödinger representation, with both initial and boundary conditions in space-time. A new theorem is proven which states that the category of quantum automata and automata-homomorphisms has both limits and colimits. Therefore, both categories of quantum automata and classical automata (sequential machines) are bicomplete. A second new theorem establishes that the standard automata category is a subcategory of the quantum automata category. The quantum automata category has a faithful representation in the category of Generalized (M,R)-Systems which are open, dynamic biosystem networks [4] with defined biological relations that represent physiological functions of primordial(s), single cells and the simpler organisms. A new category of quantum computers is also defined in terms of reversible quantum automata with quantum state spaces represented by topological groupoids that admit a local characterization through unique, quantum Lie algebroids. On the other hand, the category of n-Lukasiewicz algebras has a subcategory of centered n-Lukasiewicz algebras (as proven in ref. [2]) which can be employed to design and construct subcategories of quantum automata based on n-Lukasiewicz diagrams of existing VLSI. Furthermore, as shown in ref. [2], the category of centered n-Lukasiewicz algebras and the category of Boolean algebras are naturally equivalent. A `no-go' conjecture is also proposed which states that Generalized (M

  4. ERRAPRI Project: estimation of radiation risk to patients in interventional radiology, initial results and proposed levels of complexity; Proyecto ERRAPRI: estimacion del riesgo radiologico a los pacientes en radiologia intervencionista. Primeros resultados y propuestas de indices de complejidad

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz Cruces, R.; Vano, E.; Hernandez-Armas, J.; Carrera, F.; Diaz, F.; Gallego Beuther, J. F.; Ruiz Munoz-Canela, J. P.; Sanchez Casanueva, R.; Perez Martinez, M.; Fernandez Soto, J. M.; Munoz, V.; Moreno, F.; Moreno, C.; Martin-Palanca, A.

    2011-07-01

    The project ERRAPRI (2009-2012) will assess the most relevant aspects of the radiological risk associated with interventional radiology techniques (IR) guided by fluoroscopy in a sample of Spanish hospitals of three autonomous regions. Specific objectives include: assessing procedural protocols, especially the parameters related to radiation dose and the diagnostic information obtained, in order to establish cost (radiation risk)-benefit balances for the procedures evaluated, and proposing a complexity index for the procedures, with several levels, based on the difficulty of performing them and assessing its relationship with the radiation dose values.

  5. Uranyl complexes of glutathione

    Energy Technology Data Exchange (ETDEWEB)

    Marzotto, A [Consiglio Nazionale delle Ricerche, Padua (Italy). Lab. di Chimica e Tecnologia dei Radioelementi

    1977-01-01

    Dioxouranium(VI) complexes of the tripeptide glutathione having different molar ratios were prepared and studied by IR, PMR, electronic absorption and circular dichroism spectra. The results indicate that coordination occurs at the carboxylato groups, acting as monodentate ligands, whereas no significant interaction with the amino and sulfhydrylic groups takes place.

  6. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    Science.gov (United States)

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction.
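
    The sorter above is a specific hardware design; as a heavily simplified, generic illustration of the data-reduction idea mentioned in the abstract (transmitting only spike time stamps and template labels instead of raw samples), a threshold-plus-nearest-template stage might look like the following sketch, in which all thresholds, window lengths and template shapes are assumptions:

    # Generic spike detection and nearest-template labelling (illustrative only;
    # not the cited sorter). Emits (time stamp, template id) pairs instead of the
    # raw signal, which is the data-reduction idea mentioned in the abstract.
    import numpy as np

    def detect_and_label(signal, templates, threshold, snippet_len=32):
        events = []
        t = 0
        while t < len(signal) - snippet_len:
            if signal[t] < -threshold:                       # negative-going spike
                snippet = signal[t:t + snippet_len]
                dists = [np.sum((snippet - tpl) ** 2) for tpl in templates]
                events.append((t, int(np.argmin(dists))))    # time stamp + label
                t += snippet_len                             # skip the spike window
            else:
                t += 1
        return events

    # Toy usage: two synthetic templates embedded in noise.
    rng = np.random.default_rng(2)
    tpl_a = -np.hanning(32) * 5.0
    tpl_b = -np.hanning(32) * 2.5
    signal = rng.normal(0, 0.2, 2000)
    signal[500:532] += tpl_a
    signal[1200:1232] += tpl_b
    print(detect_and_label(signal, [tpl_a, tpl_b], threshold=1.0))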

  7. Modeling Complex Systems

    CERN Document Server

    Boccara, Nino

    2010-01-01

    Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes: . -recent research results and bibliographic references -extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field -new and improved worked-out examples to aid a student’s comprehension of the content -exercises to challenge the reader and complement the material Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).

  8. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    Full Text Available In this paper we analyze the complexity of time limits we can find especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given the unsatisfying results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, the PSD process modeling language is presented, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.

  9. Complexity of Economical Systems

    Directory of Open Access Journals (Sweden)

    G. P. Pavlos

    2015-01-01

    Full Text Available In this study new theoretical concepts are described concerning the interpretation of economical complex dynamics. In addition a summary of an extended algorithm of nonlinear time series analysis is provided which is applied not only to economical time series but also to other physical complex systems (e.g. [22, 24]). In general, Economy is a vast and complicated set of arrangements and actions wherein agents—consumers, firms, banks, investors, government agencies—buy and sell, speculate, trade, oversee, bring products into being, offer services, invest in companies, strategize, explore, forecast, compete, learn, innovate, and adapt. As a result the economic and financial variables such as foreign exchange rates, gross domestic product, interest rates, production, stock market prices and unemployment exhibit large-amplitude and aperiodic fluctuations evident in complex systems. Thus, Economics can be considered as a spatially distributed non-equilibrium complex system, for which new theoretical concepts, such as Tsallis non-extensive statistical mechanics and strange dynamics, percolation, non-Gaussian, multifractal and multiscale dynamics related to fractional Langevin equations, can be used for modeling and understanding of economical complexity locally or globally.

  10. Complexity of formation in holography

    International Nuclear Information System (INIS)

    Chapman, Shira; Marrochio, Hugo; Myers, Robert C.

    2017-01-01

    It was recently conjectured that the quantum complexity of a holographic boundary state can be computed by evaluating the gravitational action on a bulk region known as the Wheeler-DeWitt patch. We apply this complexity=action duality to evaluate the ‘complexity of formation’ (DOI: 10.1103/PhysRevLett.116.191301; 10.1103/PhysRevD.93.086006), i.e. the additional complexity arising in preparing the entangled thermofield double state with two copies of the boundary CFT compared to preparing the individual vacuum states of the two copies. We find that for boundary dimensions d>2, the difference in the complexities grows linearly with the thermal entropy at high temperatures. For the special case d=2, the complexity of formation is a fixed constant, independent of the temperature. We compare these results to those found using the complexity=volume duality.

  11. Complexity of formation in holography

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Shira [Perimeter Institute for Theoretical Physics,Waterloo, ON N2L 2Y5 (Canada); Marrochio, Hugo [Perimeter Institute for Theoretical Physics,Waterloo, ON N2L 2Y5 (Canada); Department of Physics & Astronomy and Guelph-Waterloo Physics Institute,University of Waterloo, Waterloo, ON N2L 3G1 (Canada); Myers, Robert C. [Perimeter Institute for Theoretical Physics,Waterloo, ON N2L 2Y5 (Canada)

    2017-01-16

    It was recently conjectured that the quantum complexity of a holographic boundary state can be computed by evaluating the gravitational action on a bulk region known as the Wheeler-DeWitt patch. We apply this complexity=action duality to evaluate the ‘complexity of formation’ (DOI: 10.1103/PhysRevLett.116.191301; 10.1103/PhysRevD.93.086006), i.e. the additional complexity arising in preparing the entangled thermofield double state with two copies of the boundary CFT compared to preparing the individual vacuum states of the two copies. We find that for boundary dimensions d>2, the difference in the complexities grows linearly with the thermal entropy at high temperatures. For the special case d=2, the complexity of formation is a fixed constant, independent of the temperature. We compare these results to those found using the complexity=volume duality.

  12. Hypoxia targeting copper complexes

    International Nuclear Information System (INIS)

    Dearling, J.L.

    1998-11-01

    selectivity increasing with decreasing reduction potential. A mechanism accounting for the observed results is suggested. A brief survey of the selectivities of other copper complexing ligands (dithiocarbamates, diphosphines and Schiff-bases) is presented, though neither normoxic nor hypoxic selectivity was found. In conclusion a structure-activity relationship exists within this series, and it is possible using these observations to design hypoxia selective copper complexes rationally. (author)

  13. Uranium metallogenesis of the peraluminous leucogranite from the Pontivy-Rostrenen magmatic complex (French Armorican Variscan belt): the result of long-term oxidized hydrothermal alteration during strike-slip deformation

    Science.gov (United States)

    Ballouard, C.; Poujol, M.; Mercadier, J.; Deloule, E.; Boulvais, P.; Baele, J. M.; Cuney, M.; Cathelineau, M.

    2018-06-01

    In the French Armorican Variscan belt, most of the economically significant hydrothermal U deposits are spatially associated with peraluminous leucogranites emplaced along the south Armorican shear zone (SASZ), a dextral lithospheric scale wrench fault that recorded ductile deformation from ca. 315 to 300 Ma. In the Pontivy-Rostrenen complex, a composite intrusion, the U mineralization is spatially associated with brittle structures related to deformation along the SASZ. In contrast to monzogranite and quartz monzodiorite (3 3), the leucogranite samples are characterized by highly variable U contents ( 3 to 27 ppm) and Th/U ratios ( 0.1 to 5) suggesting that the crystallization of magmatic uranium oxide in the more evolved facies was followed by uranium oxide leaching during hydrothermal alteration and/or surface weathering. U-Pb dating of uranium oxides from the deposits reveals that they mostly formed between ca. 300 and 270 Ma. In monzogranite and quartz monzodiorite, apatite grains display magmatic textures and provide U-Pb ages of ca. 315 Ma reflecting the time of emplacement of the intrusions. In contrast, apatite grains from the leucogranite display textural, geochemical, and geochronological evidences for interaction with U-rich oxidized hydrothermal fluids contemporaneously with U mineralizing events. From 300 to 270 Ma, infiltration of surface-derived oxidized fluids leached magmatic uranium oxide from fertile leucogranite and formed U deposits. This phenomenon was sustained by brittle deformation and by the persistence of thermal anomalies associated with U-rich granitic bodies.

  14. New results for the formation of a muoniated radical in the Mu + Br2 system: a van der Waals complex or evidence for vibrational bonding in Br-Mu-Br?

    Science.gov (United States)

    Fleming, Donald G; Cottrell, Stephen P; McKenzie, Iain; Macrae, Roderick M

    2012-08-21

    New evidence is presented for the observation of a muoniated radical in the Mu + Br(2) system, from μSR longitudinal field (LF) repolarisation studies in the gas phase, at Br(2) concentrations of 0.1 bar in a Br(2)/N(2) mixture at 300 K and at 10 bar total pressure. The LF repolarisation curve, up to a field of 4.5 kG, reveals two paramagnetic components, one for the Mu atom, formed promptly during the slowing-down process of the positive muon, with a known Mu hyperfine coupling constant (hfcc) of 4463 MHz, and one for a muoniated radical formed by fast Mu addition. From model fits to the Br(2)/N(2) data, the radical component is found to have an unusually high muon hfcc, assessed to be ∼3300 MHz with an overall error due to systematics expected to exceed 10%. This high muon hfcc is taken as evidence for the observation of either the Br-Mu-Br radical, and hence of vibrational bonding in this H-L-H system, or of a MuBr(2) van der Waals complex formed in the entrance channel. Preliminary ab initio electronic structure calculations suggest the latter is more likely but fully rigorous calculations of the effect of dynamics on the hfcc for either system have yet to be carried out.

  15. Real and complex analysis

    CERN Document Server

    Apelian, Christopher; Taft, Earl; Nashed, Zuhair

    2009-01-01

    The Spaces R, Rk, and C; The Real Numbers R; The Real Spaces Rk; The Complex Numbers C; Point-Set Topology; Bounded Sets; Classification of Points; Open and Closed Sets; Nested Intervals and the Bolzano-Weierstrass Theorem; Compactness and Connectedness; Limits and Convergence; Definitions and First Properties; Convergence Results for Sequences; Topological Results for Sequences; Properties of Infinite Series; Manipulations of Series in R; Functions: Definitions and Limits; Definitions; Functions as Mappings; Some Elementary Complex Functions; Limits of Functions; Functions: Continuity and Convergence; Continuity; Uniform Continuity; Sequences and Series of Functions; The Derivative; The Derivative for f: D1 → R; The Derivative for f: Dk → R; The Derivative for f: Dk → Rp; The Derivative for f: D → C; The Inverse and Implicit Function Theorems; Real Integration; The Integral of f: [a, b] → R; Properties of the Riemann Integral; Further Development of Integration Theory; Vector-Valued and Line Integrals; Complex Integration; Introduction to Complex Integrals Fu...

  16. Complexity measures of music

    Science.gov (United States)

    Pease, April; Mahmoodi, Korosh; West, Bruce J.

    2018-03-01

    We present a technique to search for the presence of crucial events in music, based on the analysis of the music volume. Earlier work on this issue was based on the assumption that crucial events correspond to the change of music notes, with the interesting result that the complexity index of the crucial events is mu ~ 2, which is the same inverse power-law index of the dynamics of the brain. The search technique analyzes music volume and confirms the results of the earlier work, thereby contributing to the explanation as to why the brain is sensitive to music, through the phenomenon of complexity matching. Complexity matching has recently been interpreted as the transfer of multifractality from one complex network to another. For this reason we also examine the multifractality of music, with the observation that the multifractal spectrum of a computer performance is significantly narrower than the multifractal spectrum of a human performance of the same musical score. We conjecture that although crucial events are demonstrably important for information transmission, they alone are not sufficient to define musicality, which is more adequately measured by the multifractality spectrum.
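
    As a small numerical illustration of estimating such an inverse power-law index from inter-event waiting times (a generic maximum-likelihood sketch, not the authors' event-detection procedure), the standard Hill-type estimator can be applied to synthetic data with a known index:

    # Maximum-likelihood (Hill-type) estimate of the inverse power-law index mu for
    # waiting times distributed as psi(t) ~ t^(-mu) for t >= t_min (illustrative
    # sketch only).
    import numpy as np

    def power_law_index(times, t_min):
        times = np.asarray(times, dtype=float)
        tail = times[times >= t_min]
        return 1.0 + len(tail) / np.sum(np.log(tail / t_min))

    # Synthetic check: draw waiting times with a known index mu = 2.0.
    rng = np.random.default_rng(3)
    mu_true, t_min = 2.0, 1.0
    u = rng.random(20000)
    samples = t_min * (1.0 - u) ** (-1.0 / (mu_true - 1.0))   # inverse-CDF sampling
    print(power_law_index(samples, t_min))                    # close to 2.0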

  17. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects of high-speed CMOS circuits for ramp inputs. Our metric is based on the Burr distribution function. The Burr distribution is used to characterize the normalized homogeneous portion of the step response. We used the PERI (Probability distribution function Extension for Ramp Inputs) technique that extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparing the results with those of SPICE simulations.
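
    As a rough illustration of the moment-based approach behind such metrics (a sketch under assumed parameters, not the paper's actual Burr-fitting procedure), the Elmore delay of an RC ladder can be computed exactly and a Burr Type XII CDF used to turn it into a 50% threshold-crossing estimate:

    # Sketch of a moment-based delay estimate for an RC ladder (illustrative only).
    # The Elmore delay (first moment of the impulse response) is computed exactly;
    # mapping it to a 50% threshold-crossing delay through a Burr Type XII CDF is
    # shown with assumed shape parameters, standing in for fitted ones.
    def elmore_delay(R, C):
        """Elmore delay at the far end of an RC ladder: sum_i R_i * (downstream C)."""
        delay, downstream = 0.0, sum(C)
        for r, c in zip(R, C):
            delay += r * downstream
            downstream -= c
        return delay

    def burr_cdf(t, c, k, scale):
        """Burr Type XII CDF: F(t) = 1 - (1 + (t/scale)**c)**(-k) for t >= 0."""
        return 1.0 - (1.0 + (t / scale) ** c) ** (-k)

    def delay_50(c, k, scale):
        """Median of the Burr distribution, used here as the 50% delay estimate."""
        return scale * ((2.0 ** (1.0 / k) - 1.0) ** (1.0 / c))

    # Toy 3-segment ladder: 100 ohm / 10 fF per segment.
    R = [100.0, 100.0, 100.0]
    C = [10e-15, 10e-15, 10e-15]
    m1 = elmore_delay(R, C)                    # first moment, in seconds
    c, k = 2.0, 1.5                            # assumed Burr shape parameters
    scale = m1                                 # assumed scale tied to the first moment
    print(m1, delay_50(c, k, scale), burr_cdf(delay_50(c, k, scale), c, k, scale))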

  18. On Measuring the Complexity of Networks: Kolmogorov Complexity versus Entropy

    Directory of Open Access Journals (Sweden)

    Mikołaj Morzy

    2017-01-01

    Full Text Available One of the most popular methods of estimating the complexity of networks is to measure the entropy of network invariants, such as adjacency matrices or degree sequences. Unfortunately, entropy and all entropy-based information-theoretic measures have several vulnerabilities. These measures are neither independent of a particular representation of the network nor able to capture the properties of the generative process which produces the network. Instead, we advocate the use of the algorithmic entropy as the basis for a complexity definition for networks. Algorithmic entropy (also known as Kolmogorov complexity or K-complexity for short) evaluates the complexity of the description required for a lossless recreation of the network. This measure is not affected by a particular choice of network features and it does not depend on the method of network representation. We perform experiments on Shannon entropy and K-complexity for gradually evolving networks. The results of these experiments point to K-complexity as the more robust and reliable measure of network complexity. The original contribution of the paper includes the introduction of several new entropy-deceiving networks and the empirical comparison of entropy and K-complexity as fundamental quantities for constructing complexity measures for networks.
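
    A minimal numerical contrast of the two quantities, under the common practical assumption that a general-purpose compressor gives an upper-bound proxy for the (uncomputable) Kolmogorov complexity, might look like this sketch:

    # Contrast of two network 'complexity' proxies (illustrative sketch):
    #  - Shannon entropy of the degree distribution (an entropy-based invariant),
    #  - compressed size of the adjacency matrix, a crude upper-bound proxy for
    #    Kolmogorov complexity (true K-complexity is uncomputable).
    import random
    import zlib
    from collections import Counter
    from math import log2

    def degree_entropy(adj):
        degrees = [sum(row) for row in adj]
        counts = Counter(degrees)
        n = len(degrees)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    def compressed_size(adj):
        bits = "".join(str(x) for row in adj for x in row)
        return len(zlib.compress(bits.encode()))

    # A highly regular ring lattice vs. a pseudo-random graph on 64 nodes.
    n = 64
    ring = [[1 if abs(i - j) in (1, n - 1) else 0 for j in range(n)] for i in range(n)]
    random.seed(0)
    rand = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            rand[i][j] = rand[j][i] = 1 if random.random() < 0.05 else 0

    for name, g in [("ring", ring), ("random", rand)]:
        print(name, degree_entropy(g), compressed_size(g))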

  19. COMPLEX TRAINING: A BRIEF REVIEW

    Directory of Open Access Journals (Sweden)

    William P. Ebben

    2002-06-01

    Full Text Available The effectiveness of plyometric training is well supported by research. Complex training has gained popularity as a training strategy combining weight training and plyometric training. Anecdotal reports recommend training in this fashion in order to improve muscular power and athletic performance. Recently, several studies have examined complex training. Despite the fact that questions remain about the potential effectiveness and implementation of this type of training, results of recent studies are useful in guiding practitioners in the development and implementation of complex training programs. In some cases, research suggests that complex training has an acute ergogenic effect on upper body power, and the results of acute and chronic complex training include improved jumping performance. Improved performance may require three to four minutes of rest between the weight training and plyometric sets and the use of heavy weight training loads.

  20. Complexity Metrics for Workflow Nets

    DEFF Research Database (Denmark)

    Lassen, Kristian Bisgaard; van der Aalst, Wil M.P.

    2009-01-01

    analysts have difficulties grasping the dynamics implied by a process model. Recent empirical studies show that people make numerous errors when modeling complex business processes, e.g., about 20 percent of the EPCs in the SAP reference model have design flaws resulting in potential deadlocks, livelocks, etc. It seems obvious that the complexity of the model contributes to design errors and a lack of understanding. It is not easy to measure complexity, however. This paper presents three complexity metrics that have been implemented in the process analysis tool ProM. The metrics are defined for a subclass of Petri nets named Workflow nets, but the results can easily be applied to other languages. To demonstrate the applicability of these metrics, we have applied our approach and tool to 262 relatively complex Protos models made in the context of various student projects. This allows us to validate...

  1. The medial patellofemoral complex.

    Science.gov (United States)

    Loeb, Alexander E; Tanaka, Miho J

    2018-06-01

    The purpose of this review is to describe the current understanding of the medial patellofemoral complex, including recent anatomic advances, evaluation of indications for reconstruction with concomitant pathology, and surgical reconstruction techniques. Recent advances in our understanding of MPFC anatomy have found that there are fibers that insert onto the deep quadriceps tendon as well as the patella, thus earning the name "medial patellofemoral complex" to allow for the variability in its anatomy. In MPFC reconstruction, anatomic origin and insertion points and appropriate graft length are critical to prevent overconstraint of the patellofemoral joint. The MPFC is a crucial soft tissue checkrein to lateral patellar translation, and its repair or reconstruction results in good restoration of patellofemoral stability. As our understanding of MPFC anatomy evolves, further studies are needed to apply its relevance in kinematics and surgical applications to its role in maintaining patellar stability.

  2. VLSI implementation of an AMDF pitch detector

    OpenAIRE

    Smith, Tony; Gittel, Falko; Schwarzbacher, Andreas; Hilt, E.; Timoney, Joseph

    2003-01-01

    Pitch detectors are used in a variety of speech processing applications such as speech recognition systems, where the pitch of the speaker is used as one parameter for identification purposes. Furthermore, pitch detectors are also used with adaptive filters to achieve high quality adaptive noise cancellation of speech signals. In voice conversion systems, pitch detection is an essential step since the pitch of the modified signal is altered to model the target voice. This paper describes a ...
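
    As a hedged illustration of the underlying algorithm (the paper's VLSI implementation is not reproduced), the Python sketch below computes the Average Magnitude Difference Function of a frame and reads the pitch off the lag at which it dips; the 50-500 Hz search band and the synthetic test tone are assumptions.

      import numpy as np

      def amdf_pitch(frame, fs, fmin=50.0, fmax=500.0):
          """Estimate the pitch of a voiced frame with the Average Magnitude
          Difference Function: the AMDF dips at lags equal to the pitch period."""
          lag_min, lag_max = int(fs / fmax), int(fs / fmin)
          amdf = np.array([np.mean(np.abs(frame[lag:] - frame[:-lag]))
                           for lag in range(lag_min, lag_max + 1)])
          best_lag = lag_min + int(np.argmin(amdf))
          return fs / best_lag

      # Usage on a synthetic 200 Hz tone sampled at 8 kHz; a pure tone can also
      # lock onto a subharmonic, so real detectors add tie-breaking heuristics.
      fs = 8000
      t = np.arange(0, 0.04, 1.0 / fs)
      print(amdf_pitch(np.sin(2 * np.pi * 200 * t), fs))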

  3. Custom VLSI circuits for high energy physics

    International Nuclear Information System (INIS)

    Parker, S.

    1998-06-01

    This article provides a brief guide to integrated circuits, including their design, fabrication, testing, radiation hardness, and packaging. It was requested by the Panel on Instrumentation, Innovation, and Development of the International Committee for Future Accelerators, as one of a series of articles on instrumentation for future experiments. Their original request emphasized a description of available custom circuits and a set of recommendations for future developments. That has been done, but while traps that stop charge in solid-state devices are well known, those that stop physicists trying to develop the devices are not. Several years spent dodging the former and developing the latter made clear the need for a beginner's guide through the maze, and that is the main purpose of this text.

  4. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.
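
    The sketch below illustrates the general idea of learning around a hardware constraint, not the specific scheme of the cited work: weights are clipped and quantized to a few bits after every update, so gradient descent has to find a solution that survives the reduced dynamic range. The bit width, toy task and learning rate are arbitrary assumptions.

      import numpy as np

      def quantize(w, bits=6, w_max=1.0):
          """Constrain weights to the limited dynamic range of a hypothetical
          analog synapse: clip to [-w_max, w_max] and round to 2**bits levels."""
          step = 2.0 * w_max / (2 ** bits - 1)
          return np.clip(np.round(w / step) * step, -w_max, w_max)

      # Single-layer logistic unit trained on a toy task, with quantization
      # applied after every update ("quantize in the loop").
      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 4))
      y = (X @ np.array([0.5, -0.3, 0.8, 0.1]) > 0).astype(float)
      w = quantize(rng.standard_normal(4) * 0.1)
      for _ in range(500):
          p = 1.0 / (1.0 + np.exp(-(X @ w)))              # sigmoid output
          w = quantize(w - 0.5 * X.T @ (p - y) / len(y))  # gradient step, then quantize
      print("training accuracy:", np.mean((p > 0.5) == (y == 1.0)))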

  5. Systolic automata for VLSI on balanced trees

    Energy Technology Data Exchange (ETDEWEB)

    Culik, K Ii; Gruska, J; Salomaa, A

    1983-01-01

    Systolic tree automata with a binary (or, more generally, balanced) underlying tree are investigated. The main emphasis is on input conditions, decidability, and characterization of acceptable languages. 4 references.

  6. Custom VLSI circuits for high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Parker, S. [Univ. of Hawaii, Honolulu, HI (United States)

    1998-06-01

    This article provides a brief guide to integrated circuits, including their design, fabrication, testing, radiation hardness, and packaging. It was requested by the Panel on Instrumentation, Innovation, and Development of the International Committee for Future Accelerators, as one of a series of articles on instrumentation for future experiments. Their original request emphasized a description of available custom circuits and a set of recommendations for future developments. That has been done, but while traps that stop charge in solid-state devices are well known, those that stop physicists trying to develop the devices are not. Several years spent dodging the former and developing the latter made clear the need for a beginner's guide through the maze, and that is the main purpose of this text.

  7. Area-Efficient Graph Layouts (for VLSI).

    Science.gov (United States)

    1980-08-13

    ...the short side, then no rectangle is ever generated whose aspect ratio is worse than ... The divide-and-conquer ... Sutherland and Donald Oestreicher, "How big should a printed circuit board be?," IEEE Transactions on Computers, Vol. C-22, May 1973, pp. 537-542.

  8. Complex Systems: An Introduction

    Indian Academy of Sciences (India)

    Complex Systems: An Introduction - Anthropic Principle, Terrestrial Complexity, Complex Materials. V K Wadhawan. General Article, Resonance – Journal of Science Education, Volume 14, Issue 9, September 2009, pp. 894-906.

  9. Innovation in a complex environment

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-11-01

    Objectives: The study objectives were, firstly, to establish the determinants of complexity and how these can be addressed from a design point of view in order to ensure innovation success and, secondly, to determine how this changes innovation forms and applications. Method: Two approaches were offered to deal with a complex environment – one allowing for complexity in organisational innovation and the other introducing reductionism to minimise complexity. These approaches were examined in a qualitative study involving case studies, open-ended interviews and content analysis between seven developing-economy (South African) organisations and seven developed-economy (US) organisations. Results: This study presented a proposed framework for (organisational) innovation in a complex environment versus a framework that minimises complexity. The comparative organisational analysis demonstrated the importance of initiating organisational innovation to address internal and external complexity, with the focus being on the leadership actions, their selected operating models and the resultant organisational innovation designs, rather than on technological innovations. Conclusion: This study cautioned against the preference for technological innovation within organisations and suggested that alternative innovation forms (such as organisational and management innovation) be used to remain competitive in a complex environment.

  10. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  11. [Complex posttraumatic stress disorder].

    Science.gov (United States)

    Green, Tamar; Kotler, Moshe

    2007-11-01

    The characteristic symptoms resulting from exposure to an extreme trauma include three clusters of symptoms: persistent experience of the traumatic event, persistent avoidance of stimuli associated with the trauma, and persistent symptoms of increased arousal. Beyond the accepted clusters of symptoms for posttraumatic stress disorder exists a formation of symptoms related to exposure to extreme or prolonged stress, e.g. childhood abuse, physical violence, rape, and confinement within a concentration camp. As evidence of these symptoms accumulated, an effort began to classify a more complex syndrome, which included, but was not confined to, the symptoms of posttraumatic stress disorder. This review addresses several subjects for study in complex posttraumatic stress disorder, which is a complicated and controversial topic. Firstly, the concept of complex posttraumatic stress disorder is presented. Secondly, the professional literature relevant to this disturbance is reviewed and, finally, the authors present the polemic being conducted between researchers of posttraumatic disturbances regarding validity, reliability and the need for a separate diagnosis for these symptoms.

  12. The Seis Lagos Carbonatite Complex

    International Nuclear Information System (INIS)

    Issler, R.S.; Silva, G.G. da.

    1980-01-01

    The Seis Lagos Carbonatite Complex, located about 840 km from Manaus, on the northwestern part of the Estado do Amazonas, Brazil, is described. Geological reconnaissance mapping by the Radam Project/DNPM of the southwestern portion of the Guiana Craton determined three circular features arranged in a north-south trend and outcropping as thick lateritic radioactive hills surrounded by gneisses and migmatites of the peneplained Guianense Complex. Results of analyses of core drilling samples from the Seis Lagos Carbonatite Complex are compared with some igneous rocks and limestones of the world on the basis of the abundance of their minor and trace elements. Log-log variation diagrams of strontium and barium in carbonatite and limestone, exemplified by South African and Angolan carbonatites, are compared with the Seis Lagos Carbonatite Complex. The Seis Lagos Carbonatite Complex belongs to the siderite-soevite type. (E.G.) [pt

  13. 1998 results

    International Nuclear Information System (INIS)

    Gadonneix, P.

    1998-01-01

    This document presents the financial and commercial results of the Gaz de France (GdF) company for 1998. The following points are presented successively: financial results (budget results, turnover, self-financing capacity, investments, debt situation), commercial results (some remarkable numbers and records, the tertiary and residential market, the industrial market, cogeneration and natural gas for vehicles), the strategy, 1998 achievements and perspectives (natural gas energy in the 21st century, the development of GdF, gas distribution and services (development of the French distribution system, export of know-how, development of services), the transportation and storage systems across Europe (densification of the pipeline network, the key position of France, the north-south equilibrium of the distribution network), natural gas production by GdF, the diversification of supplies, and the main subsidiaries abroad). (J.S.)

  14. Innovation in a complex environment

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-02-01

    Full Text Available Background: As our world becomes more global and competitive yet less predictable, the focus seems to be increasingly on looking to innovation activities to remain competitive. Although there is little doubt that a nation's competitiveness is embedded in its innovativeness, the complex environment should not be ignored. Complexity is not accounted for in balance sheets or reported in reports; it becomes entrenched in every activity in the organisation. Innovation takes many forms and comes in different shapes. Objectives: The study objectives were, firstly, to establish the determinants of complexity and how these can be addressed from a design point of view in order to ensure innovation success and, secondly, to determine how this changes innovation forms and applications. Method: Two approaches were offered to deal with a complex environment – one allowing for complexity in organisational innovation and the other introducing reductionism to minimise complexity. These approaches were examined in a qualitative study involving case studies, open-ended interviews and content analysis between seven developing-economy (South African) organisations and seven developed-economy (US) organisations. Results: This study presented a proposed framework for (organisational) innovation in a complex environment versus a framework that minimises complexity. The comparative organisational analysis demonstrated the importance of initiating organisational innovation to address internal and external complexity, with the focus being on the leadership actions, their selected operating models and the resultant organisational innovation designs, rather than on technological innovations. Conclusion: This study cautioned against the preference for technological innovation within organisations and suggested that alternative innovation forms (such as organisational and management innovation) be used to remain competitive in a complex environment.

  15. Complex adaptive systems ecology

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2003-01-01

    In the following, I will analyze two articles called Complex Adaptive Systems Ecology I & II (Molin & Molin, 1997 & 2000). The CASE articles are some of the more quirky articles that have come out of the Molecular Microbial Ecology Group - a group where I am currently making observational studies. They are the result of a cooperation between Søren Molin, professor in the group, and his brother, Jan Molin, professor at the Department of Organization and Industrial Sociology at Copenhagen Business School. The cooperation arises from the recognition that both microbial ecology and sociology/organization theory works...

  16. Complex Strategic Choices

    DEFF Research Database (Denmark)

    Leleur, Steen

    to strategic decision making, Complex Strategic Choices presents a methodology which is further illustrated by a number of case studies and example applications. Dr. Techn. Steen Leleur has adapted previously established research based on feedback and input from various conferences, journals and students, resulting in new material stemming from and focusing on practical application of a systemic approach. The outcome is a coherent and flexible approach named systemic planning. The inclusion of both the theoretical and practical aspects of systemic planning makes this book a key resource for researchers

  17. Complex performance in construction

    DEFF Research Database (Denmark)

    Bougrain, Frédéric; Forman, Marianne; Gottlieb, Stefan Christoffer

    To fulfil the expectations of demanding clients, new project-delivery mechanisms have been developed. Approaches focusing on performance-based building or new procurement processes such as new forms of public-private partnerships are considered as solutions improving the overall performance ... to the end users. This report summarises the results from work undertaken in the international collaborative project "Procuring and Operating Complex Products and Systems in Construction" (POCOPSC). POCOPSC was carried out in the period 2010-2014. The project was executed in collaboration between CSTB...

  18. 2012 Symposium on Chaos, Complexity and Leadership

    CERN Document Server

    Erçetin, Şefika

    2014-01-01

    These proceedings from the 2012 symposium on "Chaos, complexity and leadership" reflect current research results from all branches of Chaos, Complex Systems and their applications in Management. Included are the diverse results in the fields of applied nonlinear methods, modeling of data and simulations, as well as theoretical achievements of Chaos and Complex Systems. Also highlighted are Leadership and Management applications of Chaos and Complexity Theory.

  19. Mercury's complex exosphere: results from MESSENGER's third flyby.

    Science.gov (United States)

    Vervack, Ronald J; McClintock, William E; Killen, Rosemary M; Sprague, Ann L; Anderson, Brian J; Burger, Matthew H; Bradley, E Todd; Mouawad, Nelly; Solomon, Sean C; Izenberg, Noam R

    2010-08-06

    During MESSENGER's third flyby of Mercury, the Mercury Atmospheric and Surface Composition Spectrometer detected emission from ionized calcium concentrated 1 to 2 Mercury radii tailward of the planet. This measurement provides evidence for tailward magnetospheric convection of photoions produced inside the magnetosphere. Observations of neutral sodium, calcium, and magnesium above the planet's north and south poles reveal altitude distributions that are distinct for each species. A two-component sodium distribution and markedly different magnesium distributions above the two poles are direct indications that multiple processes control the distribution of even single species in Mercury's exosphere.

  20. Mercury's Complex Exosphere: Results from MESSENGER's Third Flyby

    Science.gov (United States)

    Vervack, Ronald J., Jr.; McClintock, William E.; Killen, Rosemary M.; Sprague, Ann L.; Anderson, Brian J.; Burger, Matthew H.; Bradley, E. Todd; Mouawad, Nelly; Solomon, Sean C.; Izenberg, Noam R.

    2010-01-01

    During MESSENGER's third flyby of Mercury, the Mercury Atmospheric and Surface Composition Spectrometer detected emission from ionized calcium concentrated 1 to 2 Mercury radii tailward of the planet. This measurement provides evidence for tailward magnetospheric convection of photoions produced inside the magnetosphere. Observations of neutral sodium, calcium, and magnesium above the planet's north and south poles reveal altitude distributions that are distinct for each species. A two-component sodium distribution and markedly different magnesium distributions above the two poles are direct indications that multiple processes control the distribution of even single species in Mercury's exosphere.

  1. Extraordinary results

    International Nuclear Information System (INIS)

    Cicova, V.

    2012-01-01

    For the first time in its history, Slovenske elektrarne became the winner of the new Business and Biodiversity category in a competition of European companies focused on environmental protection. These excellent results were achieved through long-term cooperation with the Tatras National Park, in particular in saving endangered animals.

  2. Ganil results

    International Nuclear Information System (INIS)

    Tamain, B.

    1992-06-01

    Recent Ganil results are presented: the properties of hot nuclei and multifragmentation, and the study of the flow change around the inversion energy. Meson and hard photon production is also briefly discussed. Correlations with studies carried out in the Saturne energy range, and the developments that can be foreseen in the future, are discussed.

  3. Interdisciplinary conflict and organizational complexity.

    Science.gov (United States)

    Guy, M E

    1986-01-01

    Most people think that conflict among the professional staff is inevitable and results from each profession's unique set of values. Each profession then defends itself by claiming its own turf. This article demonstrates that organizational complexity, not professional territorialism, influences the amount of intraorganizational conflict. In a comparison of two psychiatric hospitals, this study shows that there is not necessarily greater conflict across professions than within professions. However, there is a significantly greater amount of conflict among staff at a structurally more complex hospital than at a less-complex hospital, regardless of profession. Implications for management are discussed.

  4. FY 1999 report on the results of the contract project 'The model project for facilities for effective utilization of industrial waste at the industrial complex in Thailand.' Separate Volume 4 - FY 1999 project; 1999 nendo seika hokokusho. Tai ni okeru kogyo danchi sangyo haikibutsu yuko riyo setsubi moderu jigyo - 4

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    For the purpose of reducing the consumption of fossil fuel by recycling industrial waste for effective use as petroleum substituting energy in Thailand where the amount of industrial waste is expected to increase, a model project on facilities for effective use of industrial waste at the industrial complex was carried out, and the FY 1999 results were reported. Concretely, the industrial waste generated from each plant at the industrial complex owned by IEAT is to be incinerated in fluidized bed incinerator, and the process steam is to be generated by recovering waste heat by waste heat recovery boiler and to be supplied to plants within the complex. In this fiscal year, the first year of the project, the attachment to the agreement was prepared in terms of the allotment of the project work between Japan and Thailand, various kinds of gist, schedules, etc. and signed. After that, the following were conducted at the Japan side according to the attachment to the agreement: determination of the basic specifications for facilities, basic design, detailed design, manufacture of a part of the equipment, etc. Separate Volume 4 included the results of the inspection of the tank, pump, blower, etc. (NEDO)

  6. SAGE results

    International Nuclear Information System (INIS)

    Gavrin, V.N.

    1996-01-01

    The Russian-American Gallium solar neutrino Experiment (SAGE) is described. The solar neutrino flux measured by 31 extractions through October 1993 is presented. The result of 69 ± 10 (stat) +5/-7 (syst) SNU is to be compared with a Standard Solar Model prediction of 132 SNU. Initial results of a measurement of the experimental efficiencies, obtained by exposing the gallium target to neutrinos from an artificial source, are also discussed. The capture rate of neutrinos from this source is very close to that which is expected. The result can be expressed as the ratio of the measured capture rate to the rate anticipated from the source activity. This ratio is 0.93 +0.15/-0.17, where the systematic and statistical errors have been combined. To first order the experimental efficiencies are in agreement with those determined during the solar neutrino measurements and in previous auxiliary measurements. One must conclude that the discrepancy between the measured solar neutrino flux and that predicted by the solar models cannot arise from an experimental artifact. (author)
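
    The quoted ratio carries a combined uncertainty; a common convention (not necessarily SAGE's exact prescription) is to add statistical and systematic errors in quadrature, separately for the upward and downward directions, as in the sketch below, whose input numbers are purely hypothetical.

      import math

      def combine_asymmetric(stat_up, stat_dn, syst_up, syst_dn):
          """Combine statistical and systematic uncertainties in quadrature,
          separately for the upward and downward directions."""
          return math.hypot(stat_up, syst_up), math.hypot(stat_dn, syst_dn)

      # Purely hypothetical component uncertainties, for illustration only.
      print(combine_asymmetric(0.11, 0.11, 0.10, 0.13))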

  7. Symbolic Dynamics and Grammatical Complexity

    Science.gov (United States)

    Hao, Bai-Lin; Zheng, Wei-Mou

    The following sections are included: * Formal Languages and Their Complexity * Formal Language * Chomsky Hierarchy of Grammatical Complexity * The L-System * Regular Language and Finite Automaton * Finite Automaton * Regular Language * Stefan Matrix as Transfer Function for Automaton * Beyond Regular Languages * Feigenbaum and Generalized Feigenbaum Limiting Sets * Even and Odd Fibonacci Sequences * Odd Maximal Primitive Prefixes and Kneading Map * Even Maximal Primitive Prefixes and Distinct Excluded Blocks * Summary of Results

  8. Decentralized control of complex systems

    CERN Document Server

    Siljak, Dragoslav D

    2011-01-01

    Complex systems require fast control action in response to local input, and perturbations dictate the use of decentralized information and control structures. This much-cited reference book explores the approaches to synthesizing control laws under decentralized information structure constraints.Starting with a graph-theoretic framework for structural modeling of complex systems, the text presents results related to robust stabilization via decentralized state feedback. Subsequent chapters explore optimization, output feedback, the manipulative power of graphs, overlapping decompositions and t

  9. Determinants of Hospital Casemix Complexity

    Science.gov (United States)

    Becker, Edmund R.; Steinwald, Bruce

    1981-01-01

    Using the Commission on Professional and Hospital Activities' Resource Need Index as a measure of casemix complexity, this paper examines the relative contributions of teaching commitment and other hospital characteristics, hospital service and insurer distributions, and area characteristics to variations in casemix complexity. The empirical estimates indicate that all three types of independent variables have a substantial influence. These results are discussed in light of recent casemix research as well as current policy implications. PMID:6799430

  10. Complexity in language acquisition.

    Science.gov (United States)

    Clark, Alexander; Lappin, Shalom

    2013-01-01

    Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information-theoretic problems, in particular on the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form. Copyright © 2013 Cognitive Science Society, Inc.

  11. Latest results of SEE measurements obtained by the STRURED demonstrator ASIC

    Energy Technology Data Exchange (ETDEWEB)

    Candelori, A. [INFN, Section of Padova, Via Marzolo 8, c.a.p. 35131, Padova (Italy); De Robertis, G. [INFN Section of Bari, Via Orabona 4, c.a.p. 70126, Bari (Italy); Gabrielli, A. [Physics Department, University of Bologna, Viale Berti Pichat 6/2, c.a.p. 40127, Bologna (Italy); Mattiazzo, S.; Pantano, D. [INFN, Section of Padova, Via Marzolo 8, c.a.p. 35131, Padova (Italy); Ranieri, A., E-mail: antonio.ranieri@ba.infn.i [INFN Section of Bari, Via Orabona 4, c.a.p. 70126, Bari (Italy); Tessaro, M. [INFN, Section of Padova, Via Marzolo 8, c.a.p. 35131, Padova (Italy)

    2011-01-21

    With the perspective to develop a radiation-tolerant circuit for High Energy Physics (HEP) applications, a test digital ASIC VLSI chip, called STRURED, has been designed and fabricated using a standard-cell library of commercial 130 nm CMOS technology by implementing three different radiation-tolerant architectures (Hamming, Triple Modular Redundancy and Triple Time Redundancy) in order to correct circuit malfunctions induced by the occurrence of Soft Errors (SEs). SEs are one of the main reasons of failures affecting electronic digital circuits operating in harsh radiation environments, such as in experiments performed at HEP colliders or in apparatus to be operated in space. In this paper we present and discuss the latest results of SE cross-section measurements performed using the STRURED digital device, exposed to high energy heavy ions at the SIRAD irradiation facility of the INFN National Laboratories of Legnaro (Padova, Italy). In particular the different behaviors of the input part and the core of the three radiation-tolerant architectures are analyzed in detail.
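
    Of the three architectures, Triple Modular Redundancy is the simplest to illustrate: three copies of a register feed a bitwise majority voter, so a single-event upset in any one copy is masked. The Python model below is only a behavioural sketch of such a voter, not the STRURED netlist.

      def tmr_vote(a: int, b: int, c: int) -> int:
          """Bitwise majority vote over three redundant copies of a register:
          a single upset in any one copy does not reach the output."""
          return (a & b) | (b & c) | (a & c)

      # A bit flip in one copy (here bit 3 of `b`) is masked by the voter.
      a = 0b1011_0110
      b = a ^ 0b0000_1000   # single-event upset on one copy
      c = a
      assert tmr_vote(a, b, c) == a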

  12. 3D-FBK Pixel sensors: recent beam tests results with irradiated devices

    CERN Document Server

    Micelli, A; Sandaker, H; Stugu, B; Barbero, M; Hugging, F; Karagounis, M; Kostyukhin, V; Kruger, H; Tsung, J W; Wermes, N; Capua, M; Fazio, S; Mastroberardino, A; Susinno, G; Gallrapp, C; Di Girolamo, B; Dobos, D; La Rosa, A; Pernegger, H; Roe, S; Slavicek, T; Pospisil, S; Jakobs, K; Kohler, M; Parzefall, U; Darbo, G; Gariano, G; Gemme, C; Rovani, A; Ruscino, E; Butter, C; Bates, R; Oshea, V; Parker, S; Cavalli-Sforza, M; Grinstein, S; Korokolov, I; Pradilla, C; Einsweiler, K; Garcia-Sciveres, M; Borri, M; Da Via, C; Freestone, J; Kolya, S; Lai, C H; Nellist, C; Pater, J; Thompson, R; Watts, S J; Hoeferkamp, M; Seidel, S; Bolle, E; Gjersdal, H; Sjobaek, K N; Stapnes, S; Rohne, O; Su, D; Young, C; Hansson, P; Grenier, P; Hasi, J; Kenney, C; Kocian, M; Jackson, P; Silverstein, D; Davetak, H; DeWilde, B; Tsybychev, D; Dalla Betta, G F; Gabos, P; Povoli, M; Cobal, M; Giordani, M P; Selmi, L; Cristofoli, A; Esseni, D; Palestri, P; Fleta, C; Lozano, M; Pellegrini, G; Boscardin, M; Bagolini, A; Piemonte, C; Ronchin, S; Zorzi, N; Hansen, T E; Hansen, T; Kok, A; Lietaer, N; Kalliopuska, J; Oja, A

    2011-01-01

    The Pixel detector is the innermost part of the ATLAS experiment tracking device at the Large Hadron Collider (LHC), and plays a key role in the reconstruction of the primary and secondary vertices of short-lived particles. To cope with the high level of radiation produced during the collider operation, it is planned to add to the present three layers of silicon pixel sensors which constitute the Pixel Detector, an additional layer (Insertable B-Layer, or IBL) of sensors. 3D silicon sensors are one of the technologies which are under study for the IBL. 3D silicon technology is an innovative combination of very-large-scale integration (VLSI) and Micro-Electro-Mechanical-Systems (MEMS) where electrodes are fabricated inside the silicon bulk instead of being implanted on the wafer surfaces. 3D sensors, with electrodes fully or partially penetrating the silicon substrate, are currently fabricated at different processing facilities in Europe and USA. This paper reports on the 2010 June beam test results for irradi...

  13. Complex differential geometry

    CERN Document Server

    Zheng, Fangyang

    2002-01-01

    The theory of complex manifolds overlaps with several branches of mathematics, including differential geometry, algebraic geometry, several complex variables, global analysis, topology, algebraic number theory, and mathematical physics. Complex manifolds provide a rich class of geometric objects, for example the (common) zero locus of any generic set of complex polynomials is always a complex manifold. Yet complex manifolds behave differently than generic smooth manifolds; they are more coherent and fragile. The rich yet restrictive character of complex manifolds makes them a special and interesting object of study. This book is a self-contained graduate textbook that discusses the differential geometric aspects of complex manifolds. The first part contains standard materials from general topology, differentiable manifolds, and basic Riemannian geometry. The second part discusses complex manifolds and analytic varieties, sheaves and holomorphic vector bundles, and gives a brief account of the surface classifi...

  14. Recent results on Howard's algorithm

    DEFF Research Database (Denmark)

    Miltersen, P.B.

    2012-01-01

    is generally recognized as fast in practice, but until recently its worst-case time complexity was poorly understood. However, a surge of results since 2009 has led us to a much more satisfactory understanding of the worst-case time complexity of the algorithm in the various settings in which it applies...
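
    For context, Howard's algorithm is policy iteration: alternate exact policy evaluation with greedy policy improvement until the policy stops changing. The sketch below implements this for a discounted MDP; the two-state transition matrices and rewards are made-up illustration data, not taken from the cited work.

      import numpy as np

      def policy_iteration(P, R, gamma=0.95):
          """Howard's policy iteration for a discounted MDP.
          P[a, s, t] are transition probabilities, R[a, s] expected rewards."""
          n_actions, n_states, _ = P.shape
          policy = np.zeros(n_states, dtype=int)
          while True:
              # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
              P_pi = P[policy, np.arange(n_states)]
              r_pi = R[policy, np.arange(n_states)]
              v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
              # Greedy policy improvement.
              q = R + gamma * np.einsum("ast,t->as", P, v)
              new_policy = q.argmax(axis=0)
              if np.array_equal(new_policy, policy):
                  return policy, v
              policy = new_policy

      # Tiny two-state, two-action example with hypothetical numbers.
      P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                    [[0.1, 0.9], [0.8, 0.2]]])
      R = np.array([[1.0, 0.0],
                    [0.0, 2.0]])
      print(policy_iteration(P, R))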

  15. Complex and symplectic geometry

    CERN Document Server

    Medori, Costantino; Tomassini, Adriano

    2017-01-01

    This book arises from the INdAM Meeting "Complex and Symplectic Geometry", which was held in Cortona in June 2016. Several leading specialists, including young researchers, in the field of complex and symplectic geometry, present the state of the art of their research on topics such as the cohomology of complex manifolds; analytic techniques in Kähler and non-Kähler geometry; almost-complex and symplectic structures; special structures on complex manifolds; and deformations of complex objects. The work is intended for researchers in these areas.

  16. Mutagenicity of complex mixtures

    International Nuclear Information System (INIS)

    Pelroy, R.A.

    1985-01-01

    The effect of coal-derived complex chemical mixtures on the mutagenicity of 6-aminochrysene (6-AC) was determined with Salmonella typhimurium TA98. Previous results suggested that the mutagenic potency of 6-AC for TA98 in the standard microsomal activation (Ames) assay increased if it was presented to the cells mixed with high-boiling coal liquids (CL) from the solvent refined coal (SRC) process. In this year's work, the apparent mutational synergism of CL and 6-AC was independently verified in a fluctuation bioassay which allowed quantitation of mutation frequencies and cell viability. The results of this assay system were similar to those in the Ames assay. Moreover, the fluctuation assay revealed that mutagenesis and cellular toxicity induced by 6-AC were both strongly enhanced if 6-AC was presented to the cells mixed in a high-boiling CL. 4 figures

  17. OF AGROINDUSTRIAL COMPLEX MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Ruslan E. Mansurov

    2017-06-01

    Full Text Available The relevance of this work is determined, on the one hand, by the tightening of the foreign political situation and its possible negative impact on the food security of the country and, on the other hand, by the crisis of the domestic agricultural sector. These factors demand the development of new approaches to regional agroindustrial complex (AIC) management. The aim is to develop a methodology for assessing the level of food self-sufficiency in the main food areas of the Volgograd region. The author used statistical materials on the AIC of the Volgograd region for 2016. The analytical methods included mathematical analysis and comparison. The main results are as follows. Based on an analysis of the current situation regarding the food security of Russia, it was argued that it is now necessary to develop effective indicators of regions' self-sufficiency in basic foodstuffs. It was also revealed that at the moment this indicator is not monitored in the regional AIC management system. By generalizing existing approaches, the author's method for rating regions' level of self-sufficiency was proposed and tested in several districts of the Volgograd region. The proposed rating method can be used in the regional agroindustrial complex management system at the federal and local levels to rank areas in terms of their self-sufficiency in basic foodstuffs. This makes it possible to focus on the development of lagging agro-food areas and to make appropriate management decisions. The final rating value of 0.759, obtained from the analysis of the situation in the Volgograd region, means that the situation with regard to self-sufficiency in basic foodstuffs is, in general, good. However, one should aim at the maximum possible rating value of 1. In the application of the proposed
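
    The abstract does not give the rating formula; purely as a hypothetical illustration of how such a self-sufficiency rating could be bounded by 1, the sketch below caps each foodstuff's production-to-need ratio at 1 and averages the capped ratios over invented district data.

      def self_sufficiency_rating(production, need):
          """Hypothetical rating: for each basic foodstuff, cap the
          production/need ratio at 1 and average the capped ratios.
          This is not the paper's exact formula."""
          ratios = [min(production[k] / need[k], 1.0) for k in need]
          return sum(ratios) / len(ratios)

      # Hypothetical district data (thousand tonnes per year).
      production = {"grain": 120.0, "milk": 40.0, "meat": 9.0, "vegetables": 30.0}
      need = {"grain": 100.0, "milk": 55.0, "meat": 15.0, "vegetables": 35.0}
      print(round(self_sufficiency_rating(production, need), 3))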

  18. FY 1999 report on the results of the contract project 'The model project for facilities for effective utilization of industrial waste at the industrial complex in Thailand.' Separate Volume 5 - FY 1999 project; 1999 nendo seika hokokusho. Tai ni okeru kogyo danchi sangyo haikibutsu yuko riyo setsubi moderu jigyo - 5

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    For the purpose of reducing the consumption of fossil fuel by recycling industrial waste for effective use as petroleum substituting energy in Thailand where the amount of industrial waste is expected to increase, a model project on facilities for effective use of industrial waste at the industrial complex was carried out, and the FY 1999 results were reported. Concretely, the industrial waste generated from each plant at the industrial complex owned by IEAT is to be incinerated in fluidized bed incinerator, and the process steam is to be generated by recovering waste heat by waste heat recovery boiler and to be supplied to plants within the complex. In this fiscal year, the first year of the project, the attachment to the agreement was prepared in terms of the allotment of the project work between Japan and Thailand, various kinds of gist, schedules, etc. and signed. After that, the following were conducted at the Japan side according to the attachment to the agreement: determination of the basic specifications for facilities, basic design, detailed design, manufacture of a part of the equipment, etc. Separate Volume 5 included the drawing of the basic design, drawing of building/design, drawing of manufacturing equipment, etc. (NEDO)

  19. FY 1999 report on the results of the contract project 'The model project for facilities for effective utilization of industrial waste at the industrial complex in Thailand.' Separate Volume 6 - FY 1999 project; 1999 nendo seika hokokusho. Tai ni okeru kogyo danchi sangyo haikibutsu yuko riyo setsubi moderu jigyo - 6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    For the purpose of reducing the consumption of fossil fuel by recycling industrial waste for effective use as petroleum substituting energy in Thailand where the amount of industrial waste is expected to increase, a model project on facilities for effective use of industrial waste at the industrial complex was carried out, and the FY 1999 results were reported. Concretely, the industrial waste generated from each plant at the industrial complex owned by IEAT is to be incinerated in fluidized bed incinerator, and the process steam is to be generated by recovering waste heat by waste heat recovery boiler and to be supplied to plants within the complex. In this fiscal year, the first year of the project, the attachment to the agreement was prepared in terms of the allotment of the project work between Japan and Thailand, various kinds of gist, schedules, etc. and signed. After that, the following were conducted at the Japan side according to the attachment to the agreement: determination of the basic specifications for facilities, basic design, detailed design, manufacture of a part of the equipment, etc. Separate Volume 6 included drawings of assembling of the equipment such as crane, crusher and valve. (NEDO)

  1. Oligocyclopentadienyl transition metal complexes

    Energy Technology Data Exchange (ETDEWEB)

    de Azevedo, Cristina G.; Vollhardt, K. Peter C.

    2002-01-18

    Synthesis, characterization, and reactivity studies of oligocyclopentadienyl transition metal complexes, namely those of fulvalene, tercyclopentadienyl, quatercyclopentadienyl, and pentacyclopentadienyl(cyclopentadienyl) are the subject of this account. Thermal-, photo-, and redox chemistries of homo- and heteropolynuclear complexes are described.

  2. Photocytotoxic lanthanide complexes

    Indian Academy of Sciences (India)

    Among many applications of lanthanides, gadolinium complexes are used as magnetic resonance imaging (MRI) contrast agents in clinical radiology and luminescent lanthanides for bioanalysis, imaging and sensing. The chemistry of photoactive lanthanide complexes showing biological applications is of recent origin.

  3. Neurosurgical implications of Carney complex.

    Science.gov (United States)

    Watson, J C; Stratakis, C A; Bryant-Greenwood, P K; Koch, C A; Kirschner, L S; Nguyen, T; Carney, J A; Oldfield, E H

    2000-03-01

    The authors present their neurosurgical experience with Carney complex. Carney complex, characterized by spotty skin pigmentation, cardiac myxomas, primary pigmented nodular adrenocortical disease, pituitary tumors, and nerve sheath tumors (NSTs), is a recently described, rare, autosomal-dominant familial syndrome that is relatively unknown to neurosurgeons. Neurosurgery is required to treat pituitary adenomas and a rare NST, the psammomatous melanotic schwannoma (PMS), in patients with Carney complex. Cushing's syndrome, a common component of the complex, is caused by primary pigmented nodular adrenocortical disease and is not secondary to an adrenocorticotropic hormone-secreting pituitary adenoma. The authors reviewed 14 cases of Carney complex, five from the literature and nine from their own experience. Of the 14 pituitary adenomas recognized in association with Carney complex, 12 developed growth hormone (GH) hypersecretion (producing gigantism in two patients and acromegaly in 10), and results of immunohistochemical studies in one of the other two were positive for GH. The association of PMSs with Carney complex was established in 1990. Of the reported tumors, 28% were associated with spinal nerve sheaths. The spinal tumors occurred in adults (mean age 32 years, range 18-49 years) who presented with pain and radiculopathy. These NSTs may be malignant (10%) and, as with the cardiac myxomas, are associated with significant rates of morbidity and mortality. Because of the surgical comorbidity associated with cardiac myxoma and/or Cushing's syndrome, recognition of Carney complex has important implications for perisurgical patient management and family screening. Study of the genetics of Carney complex and of the biological abnormalities associated with the tumors may provide insight into the general pathobiological features of pituitary adenomas and NSTs.

  4. Complexity for survival of livings

    International Nuclear Information System (INIS)

    Zak, Michail

    2007-01-01

    A connection between the survivability of living systems and the complexity of their behavior is established. New physical paradigms (exchange of information via reflections, and a chain of abstractions) explaining and describing the progressive evolution of complexity in living (active) systems are introduced. A biological origin of these paradigms is associated with the recently discovered mirror neuron, which is able to learn by imitation. As a result, an active element possesses self-nonself images and interacts with them, creating the world of mental dynamics. Three fundamental types of complexity of mental dynamics that contribute to survivability are identified. The mathematical model of the corresponding active systems is described by coupled motor-mental dynamics represented by Langevin and Fokker-Planck equations, respectively, while the progressive evolution of complexity is provided by the nonlinear evolution of the probability density. Application of the proposed formalism to modeling the common-sense-based decision-making process is discussed
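
    The motor dynamics referred to above are of Langevin type; as a generic illustration (not the paper's coupled motor-mental model), the sketch below integrates a one-dimensional Langevin equation with the Euler-Maruyama scheme, using an arbitrary linear drift and noise amplitude.

      import numpy as np

      def euler_maruyama(drift, sigma, x0, dt, n_steps, rng):
          """Integrate dx = drift(x) dt + sigma dW with the Euler-Maruyama scheme."""
          x = np.empty(n_steps + 1)
          x[0] = x0
          for i in range(n_steps):
              dw = rng.standard_normal() * np.sqrt(dt)
              x[i + 1] = x[i] + drift(x[i]) * dt + sigma * dw
          return x

      rng = np.random.default_rng(0)
      path = euler_maruyama(drift=lambda x: -x, sigma=0.5, x0=2.0,
                            dt=0.01, n_steps=5000, rng=rng)
      print(path.mean(), path.std())   # relaxes toward the stationary distribution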

  5. Complexity for survival of livings

    Energy Technology Data Exchange (ETDEWEB)

    Zak, Michail [Jet Propulsion Laboratory, California Institute of Technology, Advance Computing Algorithms and IVHM Group, Pasadena, CA 91109 (United States)]. E-mail: Michail.Zak@jpl.nasa.gov

    2007-05-15

    A connection between the survivability of living systems and the complexity of their behavior is established. New physical paradigms (exchange of information via reflections, and a chain of abstractions) explaining and describing the progressive evolution of complexity in living (active) systems are introduced. A biological origin of these paradigms is associated with the recently discovered mirror neuron, which is able to learn by imitation. As a result, an active element possesses self-nonself images and interacts with them, creating the world of mental dynamics. Three fundamental types of complexity of mental dynamics that contribute to survivability are identified. The mathematical model of the corresponding active systems is described by coupled motor-mental dynamics represented by Langevin and Fokker-Planck equations, respectively, while the progressive evolution of complexity is provided by the nonlinear evolution of the probability density. Application of the proposed formalism to modeling the common-sense-based decision-making process is discussed.

  6. ComplexRec 2017

    DEFF Research Database (Denmark)

    a single step in the user's more complex background need. These background needs can often place a variety of constraints on which recommendations are interesting to the user and when they are appropriate. However, relatively little research has been done on these complex recommendation scenarios. The ComplexRec 2017 workshop addressed this by providing an interactive venue for discussing approaches to recommendation in complex scenarios that have no simple one-size-fits-all solution.

  7. Organization of complex networks

    Science.gov (United States)

    Kitsak, Maksim

    Many large complex systems can be successfully analyzed using the language of graphs and networks. Interactions between the objects in a network are treated as links connecting nodes. This approach to understanding the structure of networks is an important step toward understanding the way corresponding complex systems function. Using the tools of statistical physics, we analyze the structure of networks as they are found in complex systems such as the Internet, the World Wide Web, and numerous industrial and social networks. In the first chapter we apply the concept of self-similarity to the study of transport properties in complex networks. Self-similar or fractal networks, unlike non-fractal networks, exhibit similarity on a range of scales. We find that these fractal networks have transport properties that differ from those of non-fractal networks. In non-fractal networks, transport flows primarily through the hubs. In fractal networks, the self-similar structure requires any transport to also flow through nodes that have only a few connections. We also study, in models and in real networks, the crossover from fractal to non-fractal networks that occurs when a small number of random interactions are added by means of scaling techniques. In the second chapter we use k-core techniques to study dynamic processes in networks. The k-core of a network is the network's largest component that, within itself, exhibits all nodes with at least k connections. We use this k-core analysis to estimate the relative leadership positions of firms in the Life Science (LS) and Information and Communication Technology (ICT) sectors of industry. We study the differences in the k-core structure between the LS and the ICT sectors. We find that the lead segment (highest k-core) of the LS sector, unlike that of the ICT sector, is remarkably stable over time: once a particular firm enters the lead segment, it is likely to remain there for many years. In the third chapter we study how
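
    The k-core machinery used in the second chapter is easy to reproduce with standard tools; the sketch below computes core numbers and extracts the highest k-core (the "lead segment") of a synthetic scale-free graph, which stands in for the firm networks studied in the text.

      import networkx as nx

      # Build a scale-free test graph and extract its k-core decomposition.
      G = nx.barabasi_albert_graph(n=500, m=3, seed=42)
      core_numbers = nx.core_number(G)      # node -> largest k for which it stays in the k-core
      k_max = max(core_numbers.values())
      lead_segment = nx.k_core(G, k=k_max)  # the "highest k-core", i.e. the lead segment
      print(k_max, lead_segment.number_of_nodes())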

  8. Complex Correspondence Principle

    International Nuclear Information System (INIS)

    Bender, Carl M.; Meisinger, Peter N.; Hook, Daniel W.; Wang Qinghai

    2010-01-01

    Quantum mechanics and classical mechanics are distinctly different theories, but the correspondence principle states that quantum particles behave classically in the limit of high quantum number. In recent years much research has been done on extending both quantum and classical mechanics into the complex domain. These complex extensions continue to exhibit a correspondence, and this correspondence becomes more pronounced in the complex domain. The association between complex quantum mechanics and complex classical mechanics is subtle and demonstrating this relationship requires the use of asymptotics beyond all orders.

  9. Uranium thiolate complexes

    International Nuclear Information System (INIS)

    Leverd, Pascal C.

    1994-01-01

    This research thesis proposes a new approach to the chemistry of uranium thiolate complexes as these compounds are very promising for various uses (in bio-inorganic chemistry, in some industrial processes like oil desulphurization). It more particularly addresses the U-S bond or more generally bonds between polarizable materials and hard metals. The author thus reports the study of uranium organometallic thiolates (tricyclo-penta-dienic and mono-cyclo-octa-tetraenylic complexes), and of uranium homoleptic thiolates (tetra-thiolate complexes, hexa-thiolate complexes, reactivity of homoleptic thiolate complexes) [fr

  10. Complex Algebraic Varieties

    CERN Document Server

    Peternell, Thomas; Schneider, Michael; Schreyer, Frank-Olaf

    1992-01-01

    The Bayreuth meeting on "Complex Algebraic Varieties" focussed on the classification of algebraic varieties and topics such as vector bundles, Hodge theory and hermitian differential geometry. Most of the articles in this volume are closely related to talks given at the conference: all are original, fully refereed research articles. CONTENTS: A. Beauville: Annulation du H(1) pour les fibres en droites plats.- M. Beltrametti, A.J. Sommese, J.A. Wisniewski: Results on varieties with many lines and their applications to adjunction theory.- G. Bohnhorst, H. Spindler: The stability of certain vector bundles on P(n) .- F. Catanese, F. Tovena: Vector bundles, linear systems and extensions of (1).- O. Debarre: Vers uns stratification de l'espace des modules des varietes abeliennes principalement polarisees.- J.P. Demailly: Singular hermitian metrics on positive line bundles.- T. Fujita: On adjoint bundles of ample vector bundles.- Y. Kawamata: Moderate degenerations of algebraic surfaces.- U. Persson: Genus two fibra...

  11. Complexity in Evolutionary Processes

    International Nuclear Information System (INIS)

    Schuster, P.

    2010-01-01

    Darwin's principle of evolution by natural selection is readily cast into a mathematical formalism. Molecular biology revealed the mechanism of mutation and provides the basis for a kinetic theory of evolution that models correct reproduction and mutation as parallel chemical reaction channels. A result of the kinetic theory is the existence of a phase transition in evolution occurring at a critical mutation rate, which represents a localization threshold for the population in sequence space. The occurrence and nature of such phase transitions depend critically on fitness landscapes. The fitness landscape, being tantamount to a mapping from sequence or genotype space into phenotype space, is identified as the true source of complexity in evolution. Modeling evolution as a stochastic process is discussed, and neutrality with respect to selection is shown to provide a major challenge for understanding evolutionary processes (author)

  12. Turbulence in complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Mann, Jakob [Risoe National Lab., Wind Energy and Atmosheric Physics Dept., Roskilde (Denmark)

    1999-03-01

    The purpose of this work is to develop a model of the spectral velocity tensor in neutral flow over complex terrain. The resulting equations are implemented in a computer code that uses the mean flow generated by a linear mean flow model as input. It estimates the turbulence structure over hills (except on the lee side if recirculation is present) in the so-called outer layer and also models the changes in turbulence statistics in the vicinity of roughness changes. The generated turbulence fields are suitable as input for dynamic load calculations on wind turbines and other tall structures and are under implementation in the collection of programs called WAsP Engineering. (au) EFP-97; EU-JOULE-3. 15 refs.

  13. Evolution of complex dynamics

    Science.gov (United States)

    Wilds, Roy; Kauffman, Stuart A.; Glass, Leon

    2008-09-01

    We study the evolution of complex dynamics in a model of a genetic regulatory network. The fitness is associated with the topological entropy in a class of piecewise linear equations, and the mutations are associated with changes in the logical structure of the network. We compare hill climbing evolution, in which only mutations that increase the fitness are allowed, with neutral evolution, in which mutations that leave the fitness unchanged are also allowed. The simple structure of the fitness landscape enables us to estimate analytically the rates of hill climbing and neutral evolution. In this model, allowing neutral mutations accelerates the rate of evolutionary advancement for low mutation frequencies. These results are applicable to evolution in natural and technological systems.
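
    The two acceptance rules compared in the abstract can be sketched on a toy bit-string landscape (my own choice of landscape, not the paper's piecewise-linear network model): strict hill climbing accepts only fitness-increasing single-bit mutations, while the neutral rule also accepts fitness-preserving ones and can therefore drift across plateaus.

        # Hill climbing vs. neutral evolution on a plateau-rich toy landscape.
        import random

        def block_fitness(g, block=4):
            # number of complete all-ones blocks; most single flips leave this unchanged
            return sum(all(g[i:i + block]) for i in range(0, len(g), block))

        def evolve(genome, steps, allow_neutral, rng):
            for _ in range(steps):
                candidate = genome[:]
                candidate[rng.randrange(len(candidate))] ^= 1   # flip one bit
                delta = block_fitness(candidate) - block_fitness(genome)
                if delta > 0 or (allow_neutral and delta == 0):
                    genome = candidate
            return genome

        rng = random.Random(1)
        for neutral in (False, True):
            final = evolve([0] * 16, 5000, neutral, rng)
            print("neutral" if neutral else "hill-climb only", "->", block_fitness(final))

    With the hill-climbing rule the all-zero start is already a trap, since no single flip completes a block, whereas neutral drift eventually assembles complete blocks; this mirrors the abstract's conclusion that allowing neutral mutations can accelerate advancement.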

  14. Compact Interconnection Networks Based on Quantum Dots

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarress, Katayoon; Spotnitz, Matthew

    2003-01-01

    Architectures that would exploit the distinct characteristics of quantum-dot cellular automata (QCA) have been proposed for digital communication networks that connect advanced digital computing circuits. In comparison with networks of wires in conventional very-large-scale integrated (VLSI) circuitry, the networks according to the proposed architectures would be more compact. The proposed architectures would make it possible to implement complex interconnection schemes that are required for some advanced parallel-computing algorithms and that are difficult (and in many cases impractical) to implement in VLSI circuitry. The difficulty of implementation in VLSI and the major potential advantage afforded by QCA were described previously in Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 42. To recapitulate: Wherever two wires in a conventional VLSI circuit cross each other and are required not to be in electrical contact with each other, there must be a layer of electrical insulation between them. This, in turn, makes it necessary to resort to a noncoplanar and possibly a multilayer design, which can be complex, expensive, and even impractical. As a result, much of the cost of designing VLSI circuits is associated with minimization of data routing and assignment of layers to minimize crossing of wires. Heretofore, these considerations have impeded the development of VLSI circuitry to implement complex, advanced interconnection schemes. On the other hand, with suitable design and under suitable operating conditions, QCA-based signal paths can be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes. The proposed architectures require two advances in QCA-based circuitry beyond basic QCA-based binary

  15. Trauma to the nail complex

    Directory of Open Access Journals (Sweden)

    Jefferson Braga Silva

    2014-04-01

    OBJECTIVE: to analyze the results from surgical intervention to treat trauma of the nail complex. METHODS: we retrospectively reviewed a series of 94 consecutive patients with trauma of the nail complex who were treated between 2000 and 2009. In 42 patients, nail bed suturing was performed. In 27 patients, nail bed suturing was performed subsequent to osteosynthesis of the distal phalanx. In 15, immediate grafting was performed, and in 10, late-stage grafting of the nail bed. The growth, size and shape of the nail were evaluated in comparison with the contralateral finger. The results were obtained by summing scores and classifying them as good, fair or poor. RESULTS: the results were considered to be good particularly in the patients who underwent nail bed suturing or nail bed suturing with osteosynthesis of the distal phalanx. Patients who underwent immediate or late-stage nail grafting had poor results. CONCLUSION: trauma of the nail complex without loss of substance presented better results than did deferred treatment for reconstruction of the nail complex.

  16. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method, and compare the relevant conditions and the necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from the pseudo-random number generator (PRNG), because it calculates the linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, by contrast, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods work from the output sequence, the initial value of the PRNG influences the resulting linear complexity, which is therefore generally given only as an estimate. Because the linearization method works from the PRNG's algorithm itself, it can determine the lower bound of the linear complexity.
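
    For concreteness, a compact GF(2) implementation of the Berlekamp-Massey baseline mentioned above is sketched here (a generic textbook version, not the paper's linearization method); it returns the length L of the shortest LFSR that generates the given bits using O(N^2) operations.

        def berlekamp_massey(bits):
            """Linear complexity of a binary sequence over GF(2)."""
            n = len(bits)
            c, b = [0] * n, [0] * n        # current and previous connection polynomials
            c[0] = b[0] = 1
            L, m = 0, -1
            for i in range(n):
                # discrepancy between the sequence and the current LFSR's prediction
                d = bits[i]
                for j in range(1, L + 1):
                    d ^= c[j] & bits[i - j]
                if d:
                    t, shift = c[:], i - m
                    for j in range(n - shift):
                        c[j + shift] ^= b[j]
                    if 2 * L <= i:
                        L, m, b = i + 1 - L, i, t
            return L

        # Bits obeying s[i] = s[i-3] XOR s[i-4] (characteristic polynomial x^4 + x + 1,
        # which is primitive over GF(2)), so the reported linear complexity should be 4.
        s = [1, 0, 0, 1]
        for i in range(4, 24):
            s.append(s[i - 3] ^ s[i - 4])
        print(berlekamp_massey(s))   # -> 4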

  17. Analysis of nanoparticle biomolecule complexes.

    Science.gov (United States)

    Gunnarsson, Stefán B; Bernfur, Katja; Mikkelsen, Anders; Cedervall, Tommy

    2018-03-01

    Nanoparticles exposed to biological fluids adsorb biomolecules on their surface forming a biomolecular corona. This corona determines, on a molecular level, the interactions and impact the newly formed complex has on cells and organisms. The corona formation as well as the physiological and toxicological relevance are commonly investigated. However, an acknowledged but rarely addressed problem in many fields of nanobiotechnology is aggregation and broadened size distribution of nanoparticles following their interactions with the molecules of biological fluids. In blood serum, TiO2 nanoparticles form complexes with a size distribution from 30 nm to more than 500 nm. In this study we have separated these complexes, with good resolution, using preparative centrifugation in a sucrose gradient. Two main apparent size populations were obtained, a fast sedimenting population of complexes that formed a pellet in the preparative centrifugation tube, and a slow sedimenting complex population still suspended in the gradient after centrifugation. Concentration and surface area dependent differences are found in the biomolecular corona between the slow and fast sedimenting fractions. There are more immunoglobulins, lipid binding proteins, and lipid-rich complexes at higher serum concentrations. Sedimentation rate and the biomolecular corona are important factors for evaluating any experiment including nanoparticle exposure. Our results show that traditional description of nanoparticles in biological fluids is an oversimplification and that more thorough characterisations are needed.

  18. On power consumption issues in FIR filters with application to communication receivers: complexity, word length, and switching activity

    Energy Technology Data Exchange (ETDEWEB)

    Havashki, Asghar

    2009-10-15

    Power consumption in CMOS VLSI circuits has in recent years become a major design constraint. This is particularly important for wireless networks, due to the limited lifetime of the batteries that wireless nodes operate on. Orthogonal Frequency Division Multiplexing (OFDM) is one example of a technique which in recent years has become widely applied in wireless communication systems. However, the performance of OFDM and other spectrally efficient schemes depends, to a large extent, on advanced digital signal processing (DSP) and on the use of efficient and possibly adaptive resource allocation and transmission techniques. These in turn require that accurate estimates of the channel be available in the receiver and transmitter. However, accurate channel estimation of a time- and frequency-dispersive wireless fading channel calls for complex estimators, which can lead to significant power dissipation in such devices. Therefore, characterizing and analyzing the power consumed by such devices under different channel conditions, and optimizing for power, is important for reducing the overall power consumption of the system. This thesis considers a particular class of estimators, namely linear estimators based on finite impulse response (FIR) filters, and the power-related challenges they pose. The power consumed by such estimators depends, in part, on the complexity of the estimator, i.e., the length of the FIR filter, and the filter length is also one of the factors affecting the estimation accuracy. An analysis of the relation between the performance of such estimators and the required complexity under different channel conditions, i.e., in the presence of noise, is performed in this thesis. In this study we show that a small increase in this noise can lead to a considerable increase in the required estimator complexity if a given Normalized Mean Square Error (NMSE) performance for the
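
    A rough illustration of the complexity-versus-accuracy trade-off discussed above (a toy setup of my own, not the thesis' channel estimator): moving-average FIR filters of increasing length N smooth noisy observations of a slowly varying channel, and the normalized mean square error is reported for each length.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(2000)
        channel = np.cos(2 * np.pi * t / 400)            # slowly varying "true" channel
        observed = channel + 0.5 * rng.standard_normal(t.size)

        for N in (4, 16, 64):                            # estimator complexity = FIR length
            h = np.ones(N) / N                           # moving-average FIR filter
            estimate = np.convolve(observed, h, mode="same")
            nmse = np.mean((estimate - channel) ** 2) / np.mean(channel ** 2)
            print(f"FIR length {N:3d}: NMSE = {nmse:.3f}")

    Longer filters suppress more noise at the cost of more multiply-accumulate operations per sample, which is the power/complexity tension the thesis analyses; for lengths approaching the channel's variation scale they also begin to smear the channel, so the NMSE does not improve indefinitely.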

  19. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method for computing image complexity based on color information. The theoretical analysis first divides complexity, at the subjective level, into three classes: low, medium and high complexity. Image features are then extracted, and finally a function is established between the complexity value and the color characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from its features, and that the values obtained are in good agreement with human visual perception of complexity, so the color-based image complexity measure has a certain reference value.
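
    The abstract does not give the exact feature model, so the sketch below is only a hedged illustration of the general idea of mapping colour statistics to a scalar complexity score: here the Shannon entropy of a coarse RGB histogram, so images with richer colour content score higher.

        import numpy as np

        def colour_complexity(image, bins_per_channel=8):
            """image: H x W x 3 uint8 array; returns the entropy (bits) of its RGB histogram."""
            q = (image.astype(np.uint32) * bins_per_channel) // 256       # quantise channels
            codes = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
            counts = np.bincount(codes.ravel(), minlength=bins_per_channel ** 3)
            p = counts[counts > 0] / counts.sum()
            return float(-(p * np.log2(p)).sum())

        flat = np.full((64, 64, 3), 128, dtype=np.uint8)                  # low complexity
        noisy = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
        print(colour_complexity(flat), colour_complexity(noisy))          # ~0 vs ~9 bits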

  20. Simplicial complexes of graphs

    CERN Document Server

    Jonsson, Jakob

    2008-01-01

    A graph complex is a finite family of graphs closed under deletion of edges. Graph complexes show up naturally in many different areas of mathematics, including commutative algebra, geometry, and knot theory. Identifying each graph with its edge set, one may view a graph complex as a simplicial complex and hence interpret it as a geometric object. This volume examines topological properties of graph complexes, focusing on homotopy type and homology. Many of the proofs are based on Robin Forman's discrete version of Morse theory. As a byproduct, this volume also provides a loosely defined toolbox for attacking problems in topological combinatorics via discrete Morse theory. In terms of simplicity and power, arguably the most efficient tool is Forman's divide and conquer approach via decision trees; it is successfully applied to a large number of graph and digraph complexes.

  1. On Complex Random Variables

    Directory of Open Access Journals (Sweden)

    Anwer Khurshid

    2012-07-01

    In this paper, it is shown that a complex multivariate random variable  is a complex multivariate normal random variable of dimensionality if and only if all nondegenerate complex linear combinations of  have a complex univariate normal distribution. The characteristic function of  has been derived, and simpler forms of some theorems have been given using this characterization theorem without assuming that the variance-covariance matrix of the vector  is Hermitian positive definite. Marginal distributions of  have been given. In addition, a complex multivariate t-distribution has been defined and the density derived. A characterization of the complex multivariate t-distribution is given. A few possible uses of this distribution have been suggested.

  2. Complex wounds Feridas complexas

    Directory of Open Access Journals (Sweden)

    Marcus Castro Ferreira

    2006-01-01

    Complex wound is the term used more recently to group those well-known difficult wounds, either chronic or acute, that challenge medical and nursing teams. They defy cure using conventional and simple "dressings" therapy and currently have a major socioeconomic impact. The purpose of this review is to bring these wounds to the attention of the health-care community, suggesting that they should be treated by multidisciplinary teams in specialized hospital centers. In most cases, surgical treatment is unavoidable, because the extent of skin and subcutaneous tissue loss requires reconstruction with grafts and flaps. New technologies, such as the negative pressure device, should be introduced. A brief review is provided of the major groups of complex wounds: diabetic wounds, pressure sores, chronic venous ulcers, post-infection soft-tissue gangrenes, and ulcers resulting from vasculitis. Complex wound is a new term used to identify those well-known chronic, and some acute, wounds that challenge medical and nursing teams. They are difficult to resolve using conventional treatments and simple dressings and currently have a major socioeconomic impact. This review seeks to draw the attention of the health-care community to these wounds, suggesting that they should be treated by a multidisciplinary team in a specialized hospital center. In most cases surgical treatment is indicated, since the loss of skin and subcutaneous tissue is extensive and requires reconstruction with grafts and flaps. New technology, such as negative-pressure therapy, has been introduced. Brief comments are made on the main groups of complex wounds: diabetic foot, pressure ulcers, venous ulcers, Fournier's syndrome, and vasculitis.

  3. Complex sleep apnea syndrome

    Directory of Open Access Journals (Sweden)

    Wang J

    2013-07-01

    Juan Wang (1,*), Yan Wang (1,*), Jing Feng (1,2), Bao-yuan Chen (1), Jie Cao (1); (1) Respiratory Department of Tianjin Medical University General Hospital, Tianjin, People's Republic of China; (2) Division of Pulmonary and Critical Care Medicine, Duke University Medical Center, Durham, NC, USA; * the first two authors contributed equally to this work. Abstract: Complex sleep apnea syndrome (CompSAS) is a distinct form of sleep-disordered breathing characterized as central sleep apnea (CSA), and presents in obstructive sleep apnea (OSA) patients during initial treatment with a continuous positive airway pressure (CPAP) device. The mechanisms by which CompSAS occurs are not well understood, though a high loop gain theory may help to explain it. The prevalence and clinical significance of CompSAS are still controversial. Patients with CompSAS have clinical features similar to OSA, but they do exhibit breathing patterns like CSA. In most CompSAS cases, CSA events during initial CPAP titration are transient and they may disappear after continued CPAP use for 4–8 weeks or even longer. However, the poor initial experience of CompSAS patients with CPAP may not be avoidable, and nonadherence with continued therapy may often result. Treatment options like adaptive servo-ventilation are available now that may rapidly resolve the disorder and relieve the symptoms of this disease, with the potential of increasing early adherence to therapy. But these approaches are associated with more expensive and complicated devices. In this review, the definition, potential plausible mechanisms, clinical characteristics, and treatment approaches of CompSAS will be summarized. Keywords: complex sleep apnea syndrome, obstructive sleep apnea, central sleep apnea, apnea threshold, continuous positive airway pressure, adaptive servo-ventilation

  4. Complex centers of polynomial differential equations

    Directory of Open Access Journals (Sweden)

    Mohamad Ali M. Alwash

    2007-07-01

    We present some results on the existence and nonexistence of centers for polynomial first order ordinary differential equations with complex coefficients. In particular, we show that binomial differential equations without linear terms do not have complex centers. Classes of polynomial differential equations, with more than two terms, are presented that do not have complex centers. We also study the relation between complex centers and the Pugh problem. An algorithm is described to solve the Pugh problem for equations without complex centers. The method of proof involves phase plane analysis of the polar equations and a local study of periodic solutions.

  5. Complex Systems and Dependability

    CERN Document Server

    Zamojski, Wojciech; Sugier, Jaroslaw

    2012-01-01

    A typical contemporary complex system is a multifaceted amalgamation of technical, informational, organizational, software and human (users, administrators and management) resources. The complexity of such a system comes not only from its involved technical and organizational structure but mainly from the complexity of the information processes that must be implemented in its operational environment (data processing, monitoring, management, etc.). In such cases, traditional methods of reliability analysis, focused mainly on the technical level, are usually insufficient for performance evaluation, and more innovative meth

  6. Lanthanide complexes with pivaloylacetone

    International Nuclear Information System (INIS)

    Eliseeva, S.V.; Chugarov, N.V.; Kuz'mina, N.P.; Martynenko, L.I.; Nichiporuk, R.V.; Ivanov, S.A.

    2003-01-01

    Complexes Ln(pa)3·2H2O (Ln = La, Gd, Lu; Hpa = pivaloylacetone) are synthesized and investigated by elemental, IR spectroscopic and thermal analyses. The behaviour of the complexes on heating in vacuum is compared with that of the acetylacetonates and dipivaloylmethanates. The structure of the complexes in solution is studied by 1H NMR and MALDI-MS [ru]

  7. Brain Tumour Scintigraphy with 99mTc-Pertechnetate, 99mTc-Fe(II) Complex and 131I-Labelled Macroaggregated Albumin - Comparison of Results; La Scintigraphie des Tumeurs Cerebrales a l'Aide de Pertechnetate Marque au 99mTc, de Complexe 99mTc-Fe-II et de Macroagregats d'Albumine Marques au 131I. Comparaison des Resultats

    Energy Technology Data Exchange (ETDEWEB)

    Haas, J. P.; Dietz, H.; Schmidt, K. J.; Doerr, F.; Brod, K. H.; Wolf, R. [Institut de Radiologie Clinique et Clinique de Neurochirurgie, Universite de Mayence, Federal Republic of Germany (Germany)

    1969-05-15

    99mTc, in the form of pertechnetate, has now gained its place in brain scintigraphy. The present authors and many others have found its diagnostic value equal to that of substances labelled with mercury or iodine. Using 99mTc-pertechnetate as the basis of their work, the authors attempted to extend isotopic diagnosis by two methods, namely: intra-arterial introduction of macroaggregated albumin; and intravenous injection of a new substance, the 99mTc-Fe(II) complex. Following their publication in 1966 of the results obtained with the first method, the authors now present an account of their experiments carried out on over 100 patients with brain tumours. Compared with scintigraphy involving intravenous injection of 99mTc-pertechnetate, this new method gives supplementary information which is often of considerable help in clinical neuroradiological diagnosis of these patients. The findings in a majority of cases were confirmed by operation or by autopsy. The method made it possible for the first time to use scintigraphy to visualize the arterial network of the brain. In view of the ease with which brain scintigraphy can be performed after intravenous injection, and also of our experience with the 99mTc-Fe(II) complex in kidney scintigraphy and the evidence of a brief report in the United States literature, the authors decided to try out this substance in the detection of brain tumours. Their first results showed it to be incontestably superior to all others previously applied intravenously. It has the indubitable advantages of 99mTc as regards the irradiation dose to the patient and gives an exceptionally precise delineation of the invasive cerebral processes due to a rapid fall in blood concentration. A combination of these two new scintigraphic methods would thus seem to open up new possibilities and greatly enhance the field of application of radioisotopes in the clinical study of brain diseases. The paper contains material

  8. Phospholyl-uranium complexes

    International Nuclear Information System (INIS)

    Gradoz, Philippe

    1993-01-01

    After a bibliographical study of pentamethylcyclopentadienyl uranium complexes and a description of the synthesis and radioactivity of uranium(III) and (IV) borohydride compounds, this research thesis reports the study of mono- and bis(tetramethylphospholyl) uranium complexes bearing chloride, borohydride, alkyl and alkoxide ligands. The third part compares the structures, stabilities and reactions of homologous complexes in the pentamethylcyclopentadienyl and tetramethylphospholyl series. The last part addresses the synthesis of tris(phospholyl) uranium(III) and (IV) complexes. [fr]

  9. Nuclear weapons complex

    International Nuclear Information System (INIS)

    Rezendes, V.S.

    1991-03-01

    In this book, GAO characterizes DOE's January 1991 Nuclear Weapons Complex Reconfiguration Study as a starting point for reaching agreement on solutions to many of the complex's safety and environmental problems. Key decisions still need to be made about the size of the complex, where to relocate plutonium operations, what technologies to use for new tritium production, and what to do with excess plutonium. The total cost for reconfiguring and modernizing the complex is still uncertain, and some management issues remain unresolved. Congress faces a difficult task in making these decisions given the conflicting demands for scarce resources in a time of growing budget deficits and war in the Persian Gulf

  10. Conducting metal dithiolate complexes

    DEFF Research Database (Denmark)

    Underhill, A. E.; Ahmad, M. M.; Turner, D. J.

    1985-01-01

    Further work on the chemical composition of the one-dimensional metallic metal dithiolene complex Li-Pt(mnt) is reported. The electrical conduction and thermopower properties of the nickel and palladium complexes are reported and compared with those of the platinum compound.

  11. Complexity, Accountability, and School Improvement.

    Science.gov (United States)

    O'Day, Jennifer A.

    2002-01-01

    Using complexity theory, examines standards-based accountability focused on improving school organization. Compares Chicago Public Schools' outcomes-based bureaucratic accountability approach with Baltimore City Schools' combined administrator-professional accountability. Concludes that the combined approach should result in more lasting change.…

  12. On badly approximable complex numbers

    DEFF Research Database (Denmark)

    Esdahl-Schou, Rune; Kristensen, S.

    We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...

  13. FY 1997 report on the results of the industrial technology R and D project. Development of technology to use biological resources such as the complex biological system (Development of biological use petroleum substitution fuel production technology); 1997 nendo fukugo seibutsukei nado seibutsu shigen riyo gijutsu kaihatsu seika hokokusho. Seibutsu riyo sekiyu daitai nenryo seizo gijutsu no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    Experimental research was conducted, and the FY 1997 results are reported, with the aim of establishing analytical technology for complex biological systems by which such systems can be analyzed in their natural state using molecular biological methods. In the study of molecular genetic analytical technology, PCR primers for the amplification of topoisomerase II genes across eukaryotes were designed. As to the histochemical analytical technology, a study was made of a new method for detecting constituent microorganisms by hybridization and antibody-specific staining, and the following were carried out: large-quantity expression in colibacillus (Escherichia coli) and the recovery, purification and construction of a peptide library by the fuzzy display method. Concerning the functional analytical technology, research was conducted on topics such as the environmental adaptation mechanism of hyperthermophiles and the mechanism of information transfer among bacteria through cell membranes, in order to elucidate the mechanisms of detection of and response to special environments and of adaptation and resistance to them. As to the separation/culture technology, various anaerobic microorganisms were isolated from marine sponges for the development of a method of culturing in 3D matrices. (NEDO)

  14. Complexity, Metastability and Nonextensivity

    Science.gov (United States)

    Beck, C.; Benedek, G.; Rapisarda, A.; Tsallis, C.

    the glass transition in thermal systems / A. Coniglio ... [et al.] -- Supersymmetry and metastability in disordered systems / I. Giardinu, A. Cavagna and G. Parisi -- The metastable liquid-liquid phase transition: from water to colloids and liquid metals / G. Franzese and H. E. Stanley -- Optimization by thermal cycling / A. Möbius, K. H. Hoffmann and C. Schön -- Ultra-thin magnetic films and the structural glass transition: a modelling analogy / S. A. Cannas ... [et al.] -- Non-extensivity of inhomogeneous magnetic systems / M. S. Reis ... [et al.] -- Multifractal analysis of turbulence and granular flow / T. Arimitsu and N. Arimitsu -- Application of superstatistics to atmospheric turbulence / S. Rizzo and A. Rapisarda -- Complexity of perceptual processes / F. T. Arecchi -- Energetic model of tumor growth / P. Castorina and D. Zappalá -- Active Brownian motion - stochastic dynamics of swarms / W. Ebeling and U. Erdmann -- Complexity in the collective behaviour of humans / T. Vicsek -- Monte Carlo simulations of opinion dynamics / S. Fortunato -- A Merton-like approach to pricing debt based on a non-Gaussian asset model / L. Borland, J. Evnine and B. Pochart -- The subtle nature of market efficiency / J.-P. Bouchaud -- Correlation based hierarchical clustering in financial time series / S. Miccichè, F. Lillo and R. N. Mantegna -- Path integrals and exotic options: methods and numerical results / G. Bormetti ... [et al.] -- Aging of event-event correlation of earthquake aftershocks / S. Abe and N. Suzuki -- Aging in earthquakes model / U. Tirnakli -- The Olami-Feder-Christensen model on a small-world topology / F. Caruso ... [et al.] -- Networks as Renormalized models for emergent behavior in physical systems / M. Paczuski -- Energy landscapes, scale-free networks and Apollonian packings / J. P. K. Doye and C. P. Massen -- Epidemic modeling and complex realities / M. Barthélemy ... [et al.] -- The importance of being central / P. Crucitti and V. Latora.

  15. Characterization of complex renal cysts

    DEFF Research Database (Denmark)

    Graumann, Ole; Osther, Susanne Sloth; Osther, Palle Jörn Sloth

    2010-01-01

    Objective. Complex renal cysts represent a major clinical problem, since it is often difficult to exclude malignancy. The Bosniak classification system, based on computed tomography (CT), is widely used to categorize cystic renal lesions. The aim of this study was to critically evaluate the available data on the Bosniak classification. Material and methods. All publications from an Entrez PubMed search were reviewed, focusing on clinical applicability and the use of imaging modalities other than CT to categorize complex renal cysts. Results. Fifteen retrospective studies were found. Most

  16. Thermodynamic properties of actinide complexes

    International Nuclear Information System (INIS)

    Bismondo, A.; Cassol, A.; Di Bernardo, P.; Magon, L.; Tomat, G.; Consiglio Nazionale delle Ricerche, Padua

    1981-01-01

    In a previous paper the stability constants and the enthalpies of formation of uranyl(VI)-malonate complexes in 1 M Na(ClO4) and at 25.0 °C have been reported. In order to assess the influence of the number of methylenic groups in the ligand chain on the stability constants of the formed complexes and on their ΔH and ΔS values, a series of potentiometric and calorimetric investigations were carried out on the uranyl(VI)-succinate system. The results are given and discussed. (author)
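
    The potentiometric and calorimetric data are tied together by the standard thermodynamic relations (stated here for orientation, not quoted from the paper): for a complex with overall stability constant β,

        \Delta G^{\circ} = -RT \ln \beta = \Delta H^{\circ} - T\,\Delta S^{\circ}
        \qquad\Longrightarrow\qquad
        \Delta S^{\circ} = \frac{\Delta H^{\circ} + RT \ln \beta}{T},

    so the measured stability constants and enthalpies of formation determine the entropy changes discussed for these systems.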

  17. Border detection in complex networks

    International Nuclear Information System (INIS)

    Travencolo, Bruno A N; Viana, Matheus Palhares; Costa, Luciano da Fontoura

    2009-01-01

    One important issue implied by the finite nature of real-world networks regards the identification of their more external (border) and internal nodes. The present work proposes a formal and objective definition of these properties, founded on the recently introduced concept of node diversity. It is shown that this feature does not exhibit any relevant correlation with several well-established complex networks measurements. A methodology for the identification of the borders of complex networks is described and illustrated with respect to theoretical (geographical and knitted networks) as well as real-world networks (urban and word association networks), yielding interesting results and insights in both cases.

  18. Complexity hints for economic policy

    CERN Document Server

    Salzano, Massimo

    2007-01-01

    This volume extends the complexity approach to economics. The complexity approach is not a completely new way of doing economics, nor is it a replacement for existing economics, but rather the integration of some new analytic and computational techniques into economists' bag of tools. It provides some alternative pattern generators, which can supplement existing approaches by offering a different way of finding patterns from that obtained by the traditional scientific approach. On this basis, new kinds of policy hints can be obtained. The reason the complexity approach is taking hold in economics now is that computing technology has advanced. This advance allows consideration of analytical systems that economists could not previously consider. Consideration of these systems suggested that the results of the "control-based" models might not extend easily to more complicated systems, and that we now have a method, piggybacking computer-assisted analysis onto analytic methods, to start gen...

  19. Complex proofs of real theorems

    CERN Document Server

    Lax, Peter D

    2011-01-01

    Complex Proofs of Real Theorems is an extended meditation on Hadamard's famous dictum, "The shortest and best way between two truths of the real domain often passes through the imaginary one." Directed at an audience acquainted with analysis at the first year graduate level, it aims at illustrating how complex variables can be used to provide quick and efficient proofs of a wide variety of important results in such areas of analysis as approximation theory, operator theory, harmonic analysis, and complex dynamics. Topics discussed include weighted approximation on the line, Müntz's theorem, Toeplitz operators, Beurling's theorem on the invariant spaces of the shift operator, prediction theory, the Riesz convexity theorem, the Paley-Wiener theorem, the Titchmarsh convolution theorem, the Gleason-Kahane-Żelazko theorem, and the Fatou-Julia-Baker theorem. The discussion begins with the world's shortest proof of the fundamental theorem of algebra and concludes with Newman's almost effortless proof of the prime ...

  20. Visual Complexity: A Review

    Science.gov (United States)

    Donderi, Don C.

    2006-01-01

    The idea of visual complexity, the history of its measurement, and its implications for behavior are reviewed, starting with structuralism and Gestalt psychology at the beginning of the 20th century and ending with visual complexity theory, perceptual learning theory, and neural circuit theory at the beginning of the 21st. Evidence is drawn from…