WorldWideScience

Sample records for scale integration vlsi

  1. VLSI design

    CERN Document Server

    Einspruch, Norman G

    1986-01-01

    VLSI Electronics Microstructure Science, Volume 14: VLSI Design presents a comprehensive exposition and assessment of the developments and trends in VLSI (Very Large Scale Integration) electronics. This volume covers topics that range from microscopic aspects of materials behavior and device performance to the comprehension of VLSI in systems applications. Each article is prepared by a recognized authority. The subjects discussed in this book include VLSI processor design methodology; the RISC (Reduced Instruction Set Computer); the VLSI testing program; silicon compilers for VLSI; and special

  2. VLSI design

    CERN Document Server

    Basu, D K

    2014-01-01

Very Large Scale Integrated Circuits (VLSI) design has moved from costly curiosity to an everyday necessity, especially with the proliferating applications of embedded computing devices in communications, entertainment and household gadgets. As a result, more and more knowledge of the various aspects of VLSI design technologies is becoming a necessity for engineering/technology students of various disciplines. With this goal in mind, the course material of this book has been designed to cover the fundamental aspects of VLSI design, such as: categorization of and comparison between the various technologies used for VLSI design; basic fabrication processes involved in VLSI design; design of MOS, CMOS and BiCMOS circuits used in VLSI; structured design of VLSI; introduction to VHDL for VLSI design; automated design for placement and routing of VLSI systems; and VLSI testing and testability. The various topics of the book have been discussed lucidly with analysis, when required, examples, figures and adequate analytical and the...

  3. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph to model the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks" that are presented in great detail, along with the source code of several of the techniques p...

  4. An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits

    Science.gov (United States)

    Corliss, Walter F., II

    1989-03-01

The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, that was previously designed was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented and the chip was fabricated by MOSIS. This fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full-custom VLSI chip designed at NPS to be tested with the NPS digital analysis system, a Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipment, is included as an appendix.

  5. VLSI scaling methods and low power CMOS buffer circuit

    International Nuclear Information System (INIS)

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

Device scaling is an important part of very large scale integration (VLSI) design and has driven the success of the VLSI industry by yielding denser and faster integration of devices. As the technology node moves into the very deep submicron region, leakage current and circuit reliability become the key issues; both increase with each new technology generation and degrade the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance as devices scale. In this paper, different scaling methods are studied first, and their effects on the power dissipation and propagation delay of a CMOS buffer circuit are identified. To mitigate power dissipation in scaled devices, we propose a reliable, leakage-reducing low power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are obtained with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to demonstrate the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which improves circuit reliability. (semiconductor integrated circuits)
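
    As a rough illustration of the trade-offs this abstract discusses, the sketch below applies classical constant-field (Dennard) scaling rules to a notional device generation; the starting values and the scaling factor kappa are hypothetical, and the paper's own scaling methods are not reproduced here.

```python
# First-order constant-field (Dennard) scaling: dimensions and supply
# voltage shrink by kappa, so gate delay scales as 1/kappa and dynamic
# power per gate as 1/kappa^2, keeping power density roughly constant.
# All numbers below are illustrative placeholders.
def scale_device(params, kappa):
    return {
        "L_um":     params["L_um"] / kappa,        # gate length
        "Vdd_V":    params["Vdd_V"] / kappa,       # supply voltage
        "delay_ps": params["delay_ps"] / kappa,    # gate delay ~ 1/kappa
        "power_uW": params["power_uW"] / kappa**2, # dynamic power per gate
    }

gen0 = {"L_um": 0.09, "Vdd_V": 1.2, "delay_ps": 10.0, "power_uW": 5.0}
print(scale_device(gen0, kappa=1.4))  # one notional technology generation
```

    Note that this first-order picture is exactly what breaks down in the deep-submicron regime the paper addresses: leakage current does not follow these rules, which is why techniques such as the proposed LPTG approach are needed.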

  6. VLSI electronics microstructure science

    CERN Document Server

    1982-01-01

    VLSI Electronics: Microstructure Science, Volume 4 reviews trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development.This book discusses the silicon-on-insulator for VLSI and VHSIC, X-ray lithography, and transient response of electron transport in GaAs using the Monte Carlo method. The technology and manufacturing of high-density magnetic-bubble memories, metallic superlattices, challenge of education for VLSI, and impact of VLSI on medical signal processing are also elaborated. This text likewise covers the impact of VLSI t

  7. VLSI electronics microstructure science

    CERN Document Server

    1981-01-01

    VLSI Electronics: Microstructure Science, Volume 3 evaluates trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development.This book discusses the impact of VLSI on computer architectures; VLSI design and design aid requirements; and design, fabrication, and performance of CCD imagers. The approaches, potential, and progress of ultra-high-speed GaAs VLSI; computer modeling of MOSFETs; and numerical physics of micron-length and submicron-length semiconductor devices are also elaborated. This text likewise covers the optical linewi

  8. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

This article describes a method for evaluating the faultless (failure-free) operation of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article presents a comparative analysis of the factors that determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless operation of LSI and VLSI. The main part describes a proposed algorithm and program for analyzing the fault rate in LSI and VLSI circuits.

  9. Plasma processing for VLSI

    CERN Document Server

    Einspruch, Norman G

    1984-01-01

    VLSI Electronics: Microstructure Science, Volume 8: Plasma Processing for VLSI (Very Large Scale Integration) discusses the utilization of plasmas for general semiconductor processing. It also includes expositions on advanced deposition of materials for metallization, lithographic methods that use plasmas as exposure sources and for multiple resist patterning, and device structures made possible by anisotropic etching.This volume is divided into four sections. It begins with the history of plasma processing, a discussion of some of the early developments and trends for VLSI. The second section

  10. Computer-aided design of microfluidic very large scale integration (mVLSI) biochips design automation, testing, and design-for-testability

    CERN Document Server

    Hu, Kai; Ho, Tsung-Yi

    2017-01-01

    This book provides a comprehensive overview of flow-based, microfluidic VLSI. The authors describe and solve in a comprehensive and holistic manner practical challenges such as control synthesis, wash optimization, design for testability, and diagnosis of modern flow-based microfluidic biochips. They introduce practical solutions, based on rigorous optimization and formal models. The technical contributions presented in this book will not only shorten the product development cycle, but also accelerate the adoption and further development of modern flow-based microfluidic biochips, by facilitating the full exploitation of design complexities that are possible with current fabrication techniques. Offers the first practical problem formulation for automated control-layer design in flow-based microfluidic biochips and provides a systematic approach for solving this problem; Introduces a wash-optimization method for cross-contamination removal; Presents a design-for-testability (DfT) technique that can achieve 100...

  11. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.

  12. Parallel VLSI Architecture

    Science.gov (United States)

    Truong, T. K.; Reed, I.; Yeh, C.; Shao, H.

    1985-01-01

The Fermat number transform (FNT) convolves two digital data sequences. In very-large-scale integration (VLSI) applications, such as image and radar signal processing, X-ray reconstruction, and spectrum shaping, linear convolution of two digital data sequences of arbitrary lengths is accomplished using the FNT.
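
    To make the transform concrete, here is a minimal software sketch of linear convolution via a small Fermat number transform. The modulus F_3 = 257, the transform length 16 and the root omega = 2 are illustrative choices, not parameters from this record.

```python
# Linear convolution of two integer sequences via a length-16 Fermat
# number transform over Z_257 (F_3 = 2^8 + 1). The element 2 has
# multiplicative order 16 mod 257, so it serves as the transform root.
F, N, OMEGA = 257, 16, 2

def fnt(x, root):
    """Naive O(N^2) number-theoretic transform over Z_F."""
    return [sum(x[n] * pow(root, k * n, F) for n in range(N)) % F
            for k in range(N)]

def convolve(a, b):
    a = (a + [0] * N)[:N]          # zero-pad so linear == cyclic
    b = (b + [0] * N)[:N]
    A, B = fnt(a, OMEGA), fnt(b, OMEGA)
    C = [(x * y) % F for x, y in zip(A, B)]
    inv_root = pow(OMEGA, F - 2, F)   # 257 is prime, so Fermat inverse
    inv_N = pow(N, F - 2, F)
    return [(inv_N * c) % F for c in fnt(C, inv_root)]

print(convolve([1, 2, 3], [4, 5]))   # -> [4, 13, 22, 15, 0, ...]
```

    The appeal for VLSI, as the abstract notes, is that for Fermat moduli the multiplications by powers of 2 reduce to shifts; the naive modular powers above are used only for readability.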

  13. vPELS: An E-Learning Social Environment for VLSI Design with Content Security Using DRM

    Science.gov (United States)

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2014-01-01

This article provides a proposal for a personal e-learning system (vPELS, where "v" stands for VLSI: very large scale integrated circuit) architecture in the context of a social network environment for VLSI design. The main objective of vPELS is to develop individual skills on a specific subject--say, VLSI--and share resources with peers.…

  14. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  15. Wavelength-encoded OCDMA system using opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

We propose and experimentally demonstrate a 2.5 Gbit/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.

  17. VLSI micro- and nanophotonics science, technology, and applications

    CERN Document Server

    Lee, El-Hang; Razeghi, Manijeh; Jagadish, Chennupati

    2011-01-01

Addressing the growing demand for larger capacity in information technology, VLSI Micro- and Nanophotonics: Science, Technology, and Applications explores issues of science and technology of micro/nano-scale photonics and integration for broad-scale and chip-scale Very Large Scale Integration photonics. This book is a game-changer in the sense that it is quite possibly the first to focus on "VLSI Photonics". Very little effort has been made to develop integration technologies for micro/nanoscale photonic devices and applications, so this reference is an important and necessary early-stage pe

  18. Lithography requirements in complex VLSI device fabrication

    International Nuclear Information System (INIS)

    Wilson, A.D.

    1985-01-01

Fabrication of complex very large scale integration (VLSI) circuits requires continual advances in lithography to satisfy: decreasing minimum linewidths, larger chip sizes, tighter linewidth and overlay control, increasing topography-to-linewidth ratios, higher yield demands, increased throughput, harsher device processing, lower lithography cost, and a larger part-number set with quick turn-around time. The discussion covers where optical, electron-beam, x-ray, and ion-beam lithography can be judiciously applied to satisfy these complex VLSI circuit fabrication requirements, and addresses the areas in need of major further advances. Emphasis is placed on advanced electron-beam and storage-ring x-ray lithography.

  19. VLSI design

    CERN Document Server

    Chandrasetty, Vikram Arkalgud

    2011-01-01

This book provides insight into the practical design of VLSI circuits. It is aimed at novice VLSI designers and other enthusiasts who would like to understand VLSI design flows. Coverage includes key concepts in CMOS digital design, design of DSP and communication blocks on FPGAs, ASIC front end and physical design, and analog and mixed signal design. The approach is designed to focus on practical implementation of key elements of the VLSI design process, in order to make the topic accessible to novices. The design concepts are demonstrated using software from Mathworks, Xilinx, Mentor Graphics

  20. Heavy ion tests on programmable VLSI

    International Nuclear Information System (INIS)

    Provost-Grellier, A.

    1989-11-01

Radiation from the space environment induces operational failures in onboard computer systems. A strategy for the qualification and selection of Very Large Scale Integrated circuitry (VLSI) is therefore needed. The 'upset' phenomenon is known to be the most critical integrated-circuit radiation effect. Strategies for testing integrated circuits are reviewed. A method and a test device were developed and applied to circuits that are candidates for space applications. Cyclotron, synchrotron and californium-source experiments were carried out.

  1. Test methods of total dose effects in very large scale integrated circuits

    International Nuclear Information System (INIS)

    He Chaohui; Geng Bin; He Baoping; Yao Yujuan; Li Yonghong; Peng Honglun; Lin Dongsheng; Zhou Hui; Chen Yusheng

    2004-01-01

A test method for total dose effects (TDE) in very large scale integrated circuits (VLSI) is presented. The consumption current of the devices is measured while the functional parameters of the devices (or circuits) are measured. The relation between data errors and consumption current can then be analyzed, and the mechanism of TDE in VLSI proposed. Experimental results of 60Co γ TDEs are given for SRAMs, EEPROMs, FLASH ROMs and a CPU.

  2. A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory.

    Science.gov (United States)

    Chicca, E; Badoni, D; Dante, V; D'Andreagiovanni, M; Salina, G; Carota, L; Fusi, S; Del Giudice, P

    2003-01-01

    Electronic neuromorphic devices with on-chip, on-line learning should be able to modify quickly the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full custom analog very large-scale integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses which can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During the stimulation the synapse undergoes quick temporary changes through the activities of the pre- and postsynaptic neurons; those changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate a randomness suited to drive the stochastic selection mechanism. We check by a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.
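
    The stochastic-learning idea in this abstract can be captured in a few lines: each synapse holds one of two stable efficacies, and pre/post activity triggers a state transition only with small probability, which is what makes learning slow and memory long-lived. The sketch below is a toy model under assumptions of our own (the transition probabilities and stimulation protocol), not the chip's circuit.

```python
import random

# Toy bistable synapse with stochastic, slow learning: the efficacy has
# only two stable states, and candidate transitions driven by pre/post
# activity are committed only with small probability p_up / p_down.
P_UP, P_DOWN = 0.05, 0.05   # hypothetical transition probabilities

def update(state, pre, post):
    if pre and post and state == 0 and random.random() < P_UP:
        return 1            # stochastic potentiation
    if pre and not post and state == 1 and random.random() < P_DOWN:
        return 0            # stochastic depression
    return state            # otherwise the stable state is preserved

# a small population of synapses driven by repeated pattern presentations
synapses = [0] * 1000
for _ in range(200):
    synapses = [update(s, pre=True, post=True) for s in synapses]
print(sum(synapses) / len(synapses))   # fraction potentiated -> ~1.0
```

    Between stimulations no transition is attempted, so, as in the chip, the stored pattern is preserved indefinitely.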

  3. Electro-optic techniques for VLSI interconnect

    Science.gov (United States)

    Neff, J. A.

    1985-03-01

A major limitation to achieving significant speed increases in very large scale integration (VLSI) lies in the metallic interconnects. They are costly not only from the charge-transport standpoint but also from capacitive loading effects. The Defense Advanced Research Projects Agency, in pursuit of the fifth-generation supercomputer, is investigating alternatives to the VLSI metallic interconnects, especially the use of optical techniques to transport the information either inter- or intrachip. As the on-chip performance of VLSI continues to improve via the scaling down of the logic elements, the problems associated with transferring data off and onto the chip become more severe. The use of optical carriers to transfer information within the computer is very appealing from several viewpoints. Besides the potential for gigabit propagation rates, the conversion from electronics to optics conveniently provides a decoupling of the various circuits from one another. Significant gains will also be realized in reducing cross talk between the metallic routings, and the interconnects need no longer be constrained to the plane of a thin film on the VLSI chip. In addition, optics can offer increased programming flexibility for restructuring the interconnect network.

  4. VLSI Architectures for the Multiplication of Integers Modulo a Fermat Number

    Science.gov (United States)

    Chang, J. J.; Truong, T. K.; Reed, I. S.; Hsu, I. S.

    1984-01-01

Multiplication is central in the implementation of Fermat number transforms and other residue number algorithms. There is a need for a good multiplication algorithm that can be realized easily on a very large scale integration (VLSI) chip. The Leibowitz multiplier is modified to realize multiplication in the ring of integers modulo a Fermat number. This new algorithm requires only a sequence of cyclic shifts and additions. The designs developed for this new multiplier are regular, simple, expandable, and, therefore, suitable for VLSI implementation.
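
    The identity behind such a multiplier is 2^(2^t) ≡ -1 (mod F_t), which reduces every partial product to a shift plus a sign-alternating addition. Below is a minimal software sketch of that arithmetic; it illustrates the shift-and-add principle only, not the Leibowitz hardware design itself.

```python
def fermat_mul(a, b, t):
    """Multiply a*b modulo the Fermat number F_t = 2^(2^t) + 1 using
    only shifts and additions, via the identity 2^(2^t) = -1 (mod F_t)."""
    n = 2 ** t          # word size in bits
    F = (1 << n) + 1    # Fermat number
    acc = 0
    for i in range(n + 1):          # operands mod F fit in n+1 bits
        if (b >> i) & 1:
            acc += a << i           # shift-and-add partial products
    # reduction: split the accumulator into n-bit chunks; chunk j has
    # weight 2^(j*n) = (-1)^j mod F, so the chunks alternate in sign
    res, sign = 0, 1
    while acc:
        res += sign * (acc & ((1 << n) - 1))
        acc >>= n
        sign = -sign
    return res % F

print(fermat_mul(5, 6, 2))   # 30 mod 17 = 13  (F_2 = 17)
```

    In hardware the same sign alternation is what lets the reduction be carried out with cyclic shifts rather than general division, which is why the resulting layouts are regular and expandable.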

  5. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  6. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  7. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    Science.gov (United States)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

This report presents results of a DARPA/MTO Composite CAD Project aimed at developing a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio- and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for the design of biomicrofluidic devices and integrated systems. The developed tool would provide high-fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system-level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  8. VLSI in medicine

    CERN Document Server

    Einspruch, Norman G

    1989-01-01

    VLSI Electronics Microstructure Science, Volume 17: VLSI in Medicine deals with the more important applications of VLSI in medical devices and instruments.This volume is comprised of 11 chapters. It begins with an article about medical electronics. The following three chapters cover diagnostic imaging, focusing on such medical devices as magnetic resonance imaging, neurometric analyzer, and ultrasound. Chapters 5, 6, and 7 present the impact of VLSI in cardiology. The electrocardiograph, implantable cardiac pacemaker, and the use of VLSI in Holter monitoring are detailed in these chapters. The

  9. A second generation 50 Mbps VLSI level zero processing system prototype

    Science.gov (United States)

    Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby

    1994-01-01

Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 Megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-Specific Integrated Circuits (ASICs) and a Mass Storage System (MSS) based on the High-Performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second generation LZPS prototype based upon VLSI technologies.

  10. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
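
    As a software analogue of the block-based L1-norm stage described above, the sketch below L1-normalizes HOG-style cell histograms over blocks of cells, which is what makes the features robust to illumination and contrast changes. The cell grid, bin count and random data are placeholders; the actual circuit operates on configurable fixed-point hardware.

```python
import numpy as np

# L1-norm block normalization of HOG-style cell histograms: each block
# of 2x2 cells is divided by the sum of absolute values (plus eps).
def l1_normalize_blocks(cell_hists, eps=1e-6):
    # cell_hists: (n_cells_y, n_cells_x, n_bins) array of histograms
    h, w, b = cell_hists.shape
    out = []
    for y in range(h - 1):               # 2x2-cell blocks, stride 1
        for x in range(w - 1):
            block = cell_hists[y:y+2, x:x+2, :].ravel()
            out.append(block / (np.abs(block).sum() + eps))  # L1 norm
    return np.concatenate(out)

feats = l1_normalize_blocks(np.random.rand(8, 8, 9))
print(feats.shape)   # (7 * 7 * 36,) descriptor vector
```

    An L1 norm needs only additions and one division per block, which is one plausible reason the paper builds the normalization stage around it rather than the L2 norm.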

  11. Emerging Applications for High K Materials in VLSI Technology

    Science.gov (United States)

    Clark, Robert D.

    2014-01-01

    The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing. PMID:28788599

  12. Emerging Applications for High K Materials in VLSI Technology

    Directory of Open Access Journals (Sweden)

    Robert D. Clark

    2014-04-01

The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing.

  13. A VLSI image processor via pseudo-mersenne transforms

    International Nuclear Information System (INIS)

    Sei, W.J.; Jagadeesh, J.M.

    1986-01-01

The computational burden of image processing in medical fields, where a large amount of information must be processed quickly and accurately, has led to consideration of special-purpose image processor chip design for some time. The very large scale integration (VLSI) revolution has made it cost-effective and feasible to consider the design of special-purpose chips for medical imaging fields. This paper describes a VLSI CMOS chip suitable for parallel implementation of image processing algorithms and cyclic convolutions by using the Pseudo-Mersenne Number Transform (PMNT). The main advantages of the PMNT over the Fast Fourier Transform (FFT) are: (1) no multiplications are required; (2) integer arithmetic is used. The design and development of this processor, which operates on 32-point convolutions or 5 x 5 window images, are described

  14. Multi-net optimization of VLSI interconnect

    CERN Document Server

    Moiseev, Konstantin; Wimer, Shmuel

    2015-01-01

    This book covers layout design and layout migration methodologies for optimizing multi-net wire structures in advanced VLSI interconnects. Scaling-dependent models for interconnect power, interconnect delay and crosstalk noise are covered in depth, and several design optimization problems are addressed, such as minimization of interconnect power under delay constraints, or design for minimal delay in wire bundles within a given routing area. A handy reference or a guide for design methodologies and layout automation techniques, this book provides a foundation for physical design challenges of interconnect in advanced integrated circuits.  • Describes the evolution of interconnect scaling and provides new techniques for layout migration and optimization, focusing on multi-net optimization; • Presents research results that provide a level of design optimization which does not exist in commercially-available design automation software tools; • Includes mathematical properties and conditions for optimal...

Development of an integrated circuit VLSI providing time measurement and selective readout in the front-end electronics of the DIRC counter for the BaBar experiment at SLAC

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1999-07-01

This thesis deals with the design, development and testing of an integrated circuit VLSI providing selective readout and time measurement for 16 channels. This circuit has been developed for a particle physics experiment, BaBar, that will take place at SLAC (Stanford Linear Accelerator Center). The first part describes the physics goals of the experiment, the electronics architecture and the place of the developed circuit in the research program. The second part presents the technical design of the circuit, the prototypes leading to the final design and the validation tests. (A.L.B.)

  16. VLSI Design of Trusted Virtual Sensors

    Directory of Open Access Journals (Sweden)

    Macarena C. Martínez-Rodríguez

    2018-01-01

This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).

  17. VLSI Design of Trusted Virtual Sensors.

    Science.gov (United States)

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).
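
    A PieceWise-Affine hyper-Rectangular (PWAR) model of the kind described partitions the input domain into hyper-rectangles and applies one affine map per cell. Below is a minimal 2-D sketch under assumed shapes; the grid, coefficients and test input are placeholders, not the paper's trained parameters.

```python
import numpy as np

# PWAR virtual sensor, 2-D toy example: a uniform grid of hyper-rectangles
# over the input domain, with one affine function y = w.x + b per cell.
LO, HI, CELLS = np.array([0.0, 0.0]), np.array([1.0, 1.0]), 4

rng = np.random.default_rng(0)
W = rng.normal(size=(CELLS, CELLS, 2))   # per-cell affine weights
B = rng.normal(size=(CELLS, CELLS))      # per-cell offsets

def pwar_estimate(x):
    # locate the hyper-rectangle containing x, then apply its affine map
    idx = np.minimum(((x - LO) / (HI - LO) * CELLS).astype(int), CELLS - 1)
    i, j = idx
    return W[i, j] @ x + B[i, j]

print(pwar_estimate(np.array([0.3, 0.7])))
```

    The hyper-rectangular partition is what makes the model hardware-friendly: locating the active cell reduces to independent comparisons (or bit slicing) per input dimension, with no general point-in-polytope test.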

  18. Microfluidic very large-scale integration for biochips: Technology, testing and fault-tolerant design

    DEFF Research Database (Denmark)

    Araci, Ismail Emre; Pop, Paul; Chakrabarty, Krishnendu

    2015-01-01

Microfluidic biochips are replacing conventional biochemical analyzers by integrating all the necessary functions for biochemical analysis using microfluidics. Biochips are used in many application areas, such as in vitro diagnostics, drug discovery, biotech and ecology. The focus of this paper is on continuous-flow biochips, where the basic building block is a microvalve. By combining these microvalves, more complex units such as mixers, switches and multiplexers can be built, hence the name of the technology, "microfluidic Very Large-Scale Integration" (mVLSI). The paper presents the state-of-the-art in mVLSI platforms and emerging research challenges in the area of continuous-flow microfluidics, focusing on testing techniques and fault-tolerant design.

  19. Lithography for VLSI

    CERN Document Server

    Einspruch, Norman G

    1987-01-01

    VLSI Electronics Microstructure Science, Volume 16: Lithography for VLSI treats special topics from each branch of lithography, and also contains general discussion of some lithographic methods.This volume contains 8 chapters that discuss the various aspects of lithography. Chapters 1 and 2 are devoted to optical lithography. Chapter 3 covers electron lithography in general, and Chapter 4 discusses electron resist exposure modeling. Chapter 5 presents the fundamentals of ion-beam lithography. Mask/wafer alignment for x-ray proximity printing and for optical lithography is tackled in Chapter 6.

  20. VLSI Design of SVM-Based Seizure Detection System With On-Chip Learning Capability.

    Science.gov (United States)

    Feng, Lichen; Li, Zunchao; Wang, Yuanfa

    2018-02-01

A portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time-frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with a table-driven Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.
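
    The processing chain described here (three-level Daubechies DWT features followed by a Gaussian-kernel SVM) can be prototyped in software in a few lines. The sketch below uses PyWavelets and scikit-learn as stand-ins for the on-chip FE and SVM modules; the wavelet choice ('db4'), the per-band energy features and the synthetic data are our assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt                              # PyWavelets
from sklearn.svm import SVC

# Feature extraction: 3-level Daubechies DWT, then sub-band energies.
def extract_features(eeg_epoch):
    coeffs = pywt.wavedec(eeg_epoch, 'db4', level=3)   # [A3, D3, D2, D1]
    return np.array([np.sum(c ** 2) for c in coeffs])  # energy per band

# Synthetic placeholder data: seizure epochs have higher amplitude.
rng = np.random.default_rng(1)
normal  = rng.normal(0, 1.0, (50, 256))
seizure = rng.normal(0, 3.0, (50, 256))
X = np.array([extract_features(e) for e in np.vstack([normal, seizure])])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel='rbf', gamma='scale').fit(X, y)   # Gaussian-kernel SVM
print(clf.score(X, y))                             # training accuracy
```

    On the chip, the Gaussian kernel is table-driven and training uses a modified sequential minimal optimization; the library calls above merely play the same functional roles.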

  1. Extraction of MOS VLSI (Very-Large-Scale-Integrated) Circuit Models Including Critical Interconnect Parasitics.

    Science.gov (United States)

    1987-09-01

level description without human intervention. Although design rules and the layout function may not be checked, performance verification is still a... digital systems," Proc. IEEE, vol. 69, no. 10, pp. 1200-1211, October 1981. [2] A. Gupta, "ACE: A circuit extractor," Proc. 20th Design Automation

  2. Assimilation of Biophysical Neuronal Dynamics in Neuromorphic VLSI.

    Science.gov (United States)

    Wang, Jun; Breen, Daniel; Akinin, Abraham; Broccard, Frederic; Abarbanel, Henry D I; Cauwenberghs, Gert

    2017-12-01

Representing the biophysics of neuronal dynamics and behavior offers a principled analysis-by-synthesis approach toward understanding mechanisms of nervous system functions. We report on a set of procedures assimilating and emulating neurobiological data on a neuromorphic very large scale integrated (VLSI) circuit. The analog VLSI chip, NeuroDyn, features 384 digitally programmable parameters specifying 4 generalized Hodgkin-Huxley neurons coupled through 12 conductance-based chemical synapses. The parameters describe reversal potentials, maximal conductances, and spline-regressed kinetic functions for ion channel gating variables. In one set of experiments, we assimilated membrane potential recorded from one of the neurons on the chip to the model structure upon which NeuroDyn was designed, using the known current input sequence. We arrived at the programmed parameters except for model errors due to analog imperfections in the chip fabrication. In a related set of experiments, we replicated songbird individual neuron dynamics on NeuroDyn by estimating and configuring parameters extracted using data assimilation from intracellular neural recordings. Faithful emulation of detailed biophysical neural dynamics will enable the use of NeuroDyn as a tool to probe electrical and molecular properties of functional neural circuits. Neuroscience applications include studying the relationship between molecular properties of neurons and the emergence of different spike patterns or different brain behaviors. Clinical applications include studying and predicting the effects of neuromodulators or neurodegenerative diseases on ion channel kinetics.
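
    For reference, the generalized Hodgkin-Huxley form that NeuroDyn's parameters describe (reversal potentials E_i, maximal conductances, and voltage-dependent gating kinetics) is, in standard notation:

```latex
C_m \frac{dV}{dt} = -\sum_i \bar{g}_i \, m_i^{p_i} h_i^{q_i} \,(V - E_i) + I_{\mathrm{ext}},
\qquad
\frac{dx}{dt} = \frac{x_\infty(V) - x}{\tau_x(V)}, \quad x \in \{m_i, h_i\}.
```

    The spline-regressed kinetic functions mentioned in the abstract play the role of the steady-state activation $x_\infty(V)$ and the time constant $\tau_x(V)$ for each gating variable.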

  3. The VLSI handbook

    CERN Document Server

    Chen, Wai-Kai

    2007-01-01

    Written by a stellar international panel of expert contributors, this handbook remains the most up-to-date, reliable, and comprehensive source for real answers to practical problems. In addition to updated information in most chapters, this edition features several heavily revised and completely rewritten chapters, new chapters on such topics as CMOS fabrication and high-speed circuit design, heavily revised sections on testing of digital systems and design languages, and two entirely new sections on low-power electronics and VLSI signal processing. An updated compendium of references and othe

  4. UW VLSI chip tester

    Science.gov (United States)

    McKenzie, Neil

    1989-12-01

We present a design for a low-cost, functional VLSI chip tester. It is based on the Apple Macintosh II personal computer. It tests chips that have up to 128 pins. All pin drivers of the tester are bidirectional; each pin is programmed independently as an input or an output. The tester can test both static and dynamic chips. Rudimentary speed testing is provided. Chips are tested by executing C programs written by the user. A software library is provided for program development. Tests run under both the Mac Operating System and A/UX. The design is implemented using Xilinx Logic Cell Arrays. Price/performance tradeoffs are discussed.

  5. VLSI signal processing technology

    CERN Document Server

    Swartzlander, Earl

    1994-01-01

This book is the first in a set of forthcoming books focussed on state-of-the-art development in the VLSI Signal Processing area. It is a response to the tremendous research activities taking place in that field. These activities have been driven by two factors: the dramatic increase in demand for high speed signal processing, especially in consumer electronics, and the evolving microelectronic technologies. The available technology has always been one of the main factors in determining algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on: • Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in...

  6. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  7. Nano lasers in photonic VLSI

    NARCIS (Netherlands)

    Hill, M.T.; Oei, Y.S.; Smit, M.K.

    2007-01-01

    We examine the use of micro and nano lasers to form digital photonic VLSI building blocks. Problems such as isolation and cascading of building blocks are addressed, and the potential of future nano lasers explored.

  8. Multi-valued LSI/VLSI logic design

    Science.gov (United States)

    Santrakul, K.

A procedure for synthesizing any large complex logic system, such as LSI and VLSI integrated circuits, is described. This scheme uses multi-valued multiplexers (MVMUX) as the basic building blocks and the tree as the structure of the circuit realization. Simple built-in test circuits included in the network provide a thorough functional checking of the network at any time. In brief, four major contributions are made: (1) a multi-valued Algorithmic State Machine (ASM) chart for describing LSI/VLSI behavior; (2) a tree-structured multi-valued multiplexer network which can be obtained directly from an ASM chart; (3) a heuristic tree-structured synthesis method for realizing any combinational logic with minimal or nearly-minimal MVMUX; and (4) a hierarchical design of LSI/VLSI with built-in parallel testing capability.
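
    To illustrate the tree-structured MVMUX realization in software: radix-3 Shannon decomposition turns any two-input ternary function into a two-level tree of 3-to-1 multiplexers. The example function below is a placeholder chosen for the sketch.

```python
# Hypothetical sketch of a tree-structured multi-valued multiplexer
# (MVMUX) network: leaf muxes select on y, the root mux selects on x.
def mux3(sel, d0, d1, d2):
    return (d0, d1, d2)[sel]

# truth table of an example ternary function f(x, y) = (x + y) mod 3
f = {(x, y): (x + y) % 3 for x in range(3) for y in range(3)}

def f_tree(x, y):
    branches = [mux3(y, f[(x0, 0)], f[(x0, 1)], f[(x0, 2)])
                for x0 in range(3)]
    return mux3(x, *branches)

assert all(f_tree(x, y) == f[(x, y)] for x in range(3) for y in range(3))
```

    Each additional input variable adds one level to the tree, which is the regular structure the synthesis method exploits when minimizing the multiplexer count.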

  9. Positron emission tomographic images and expectation maximization: A VLSI architecture for multiple iterations per second

    International Nuclear Information System (INIS)

    Jones, W.F.; Byars, L.G.; Casey, M.E.

    1988-01-01

A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for positron emission tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high resolution (256 x 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform the forward- and back-projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. The EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. The projected cost of the system is estimated to be small in comparison to the cost of current PET scanners.
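
    For reference, the EM iteration that such projection arrays implement is the standard MLEM update for emission tomography, where $a_{ij}$ is the probability that an emission in voxel $j$ is detected in projection bin $i$ and $y_i$ are the measured counts:

```latex
\lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i a_{ij}}
\sum_i a_{ij}\,\frac{y_i}{\sum_{l} a_{il}\,\lambda_l^{(k)}}
```

    The inner sum $\sum_l a_{il}\lambda_l^{(k)}$ is the forward projection and the outer sum over $i$ is the back projection, which matches the split between the two VLSI chip types described in the abstract.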

  10. 1 million-Q optomechanical microdisk resonators for sensing with very large scale integration

    Science.gov (United States)

    Hermouet, M.; Sansa, M.; Banniard, L.; Fafin, A.; Gely, M.; Allain, P. E.; Santos, E. Gil; Favero, I.; Alava, T.; Jourdan, G.; Hentz, S.

    2018-02-01

Cavity optomechanics has become a promising route towards the development of ultrasensitive sensors for a wide range of applications including mass, chemical and biological sensing. In this study, we demonstrate the potential of Very Large Scale Integration (VLSI) with state-of-the-art low-loss silicon optomechanical microdisks for sensing applications. We report microdisks exhibiting optical Whispering Gallery Modes (WGM) with quality factors of 1 million, yielding high displacement sensitivity and strong coupling between optical WGMs and in-plane mechanical Radial Breathing Modes (RBM). Such high-Q microdisks with mechanical resonance frequencies in the 10² MHz range were fabricated on 200 mm wafers with Variable Shape Electron Beam lithography. Benefiting from ultrasensitive readout, their Brownian motion could be resolved with good signal-to-noise ratio at ambient pressure, as well as in liquid, despite high frequency operation and large fluidic damping: the mechanical quality factor dropped from a few 10³ in air to a few tens in liquid, and the mechanical resonance frequency shifted down by a few percent. Proceeding one step further, we performed all-optical operation of the resonators in air using a pump-probe scheme. Our results show our VLSI process is a viable approach for the next generation of sensors operating in vacuum, gas or liquid phase.

  11. Scaling of graphene integrated circuits.

    Science.gov (United States)

    Bianchi, Massimiliano; Guerriero, Erica; Fiocco, Marco; Alberti, Ruggero; Polloni, Laura; Behnam, Ashkan; Carrion, Enrique A; Pop, Eric; Sordan, Roman

    2015-05-07

The influence of transistor size reduction (scaling) on the speed of realistic multi-stage integrated circuits (ICs) represents the main performance metric of a given transistor technology. Despite extensive interest in graphene electronics, scaling efforts have so far focused on individual transistors rather than multi-stage ICs. Here we study the scaling of graphene ICs based on transistors from 3.3 to 0.5 μm gate lengths and with different channel widths, access lengths, and lead thicknesses. The shortest gate delay of 31 ps per stage was obtained in sub-micron graphene ring oscillators (ROs) oscillating at 4.3 GHz, which is the highest oscillation frequency obtained in any strictly low-dimensional material to date. We also derived the fundamental Johnson limit, showing that scaled graphene ICs could be used at high frequencies in applications with small voltage swing.
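
    For context, the Johnson limit referred to here is the classical bound on a technology's power-frequency trade-off, set by the breakdown field $E_{br}$ and the saturated carrier velocity $v_s$ of the channel material (standard textbook form shown below; the paper's graphene-specific derivation is not reproduced):

```latex
f_T \, V_{br} \;\le\; \frac{E_{br}\, v_s}{2\pi}
```

    A material can thus trade cutoff frequency $f_T$ against usable voltage swing $V_{br}$, which is why the abstract concludes that scaled graphene ICs suit high-frequency, small-swing applications.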

  12. How complex can integrated optical circuits become?

    NARCIS (Netherlands)

    Smit, M.K.; Hill, M.T.; Baets, R.G.F.; Bente, E.A.J.M.; Dorren, H.J.S.; Karouta, F.; Koenraad, P.M.; Koonen, A.M.J.; Leijtens, X.J.M.; Nötzel, R.; Oei, Y.S.; Waardt, de H.; Tol, van der J.J.G.M.; Khoe, G.D.

    2007-01-01

The integration scale in Photonic Integrated Circuits will be pushed to VLSI level in the coming decade. This will bring major changes in both application and manufacturing. In this paper, developments in Photonic Integration are reviewed and the limits for the reduction of device dimensions are discussed.

  13. VLSI implementations for image communications

    CERN Document Server

    Pirsch, P

    1993-01-01

    The past few years have seen a rapid growth in image processing and image communication technologies. New video services and multimedia applications are continuously being designed. Essential for all these applications are image and video compression techniques. The purpose of this book is to report on recent advances in VLSI architectures and their implementation for video signal processing applications with emphasis on video coding for bit rate reduction. Efficient VLSI implementation for video signal processing spans a broad range of disciplines involving algorithms, architectures, circuits

  14. VLSI System Implementation of 200 MHz, 8-bit, 90nm CMOS Arithmetic and Logic Unit (ALU Processor Controller

    Directory of Open Access Journals (Sweden)

    Fazal NOORBASHA

    2012-08-01

This study presents the Very Large Scale Integration (VLSI) system implementation of a 200 MHz, 8-bit, 90 nm Complementary Metal Oxide Semiconductor (CMOS) Arithmetic and Logic Unit (ALU) processor controller with a logic-gate design style and 0.12 µm six-metal 90 nm CMOS fabrication technology. The system blocks and the behaviour are defined and the logical design is implemented at gate level in the design phase. Then, the logic circuits are simulated and the subunits are converted into 90 nm CMOS layout. Finally, in order to construct the VLSI system, these units are placed in the floor plan and simulated with analog and digital, logic- and switch-level simulators. The results of the simulations indicate that the VLSI system can execute different instructions, which can be divided into subgroups: transfer instructions, arithmetic and logic instructions, rotate and shift instructions, branch instructions, input/output instructions, and control instructions. The data bus of the system is 16 bits wide. It runs at 200 MHz, and the operating voltage is 1.2 V. In this paper, the parametric analysis of the system, the design steps and the obtained results are explained.

  15. Artificial immune system algorithm in VLSI circuit configuration

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving constraint optimization problems, anomaly detection, and pattern recognition. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compared the artificial immune system (AIS) algorithm (HNN-3SATAIS) with a brute-force algorithm incorporated with a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as the platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.
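
    A minimal clonal-selection loop for 3-SAT looks as follows, sketched under our own assumptions about population size, cloning and mutation rates; the paper's HNN-3SATAIS hybrid additionally embeds the clauses in a Hopfield network, which is not reproduced here.

```python
import random

# Toy clonal selection for 3-SAT: affinity (fitness) = number of
# satisfied clauses; the best antibodies are cloned and hypermutated.
CLAUSES = [(1, -2, 3), (-1, 2, -4), (2, 3, 4), (-3, -4, 1)]  # example CNF
N_VARS, POP, GENS = 4, 20, 100

def fitness(assign):   # assign: dict var -> bool
    return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in CLAUSES)

def mutate(assign, rate=0.2):
    return {v: (not b) if random.random() < rate else b
            for v, b in assign.items()}

pop = [{v: random.random() < 0.5 for v in range(1, N_VARS + 1)}
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    best = pop[:POP // 4]                                # select
    clones = [mutate(b) for b in best for _ in range(3)] # clone + mutate
    pop = best + clones + pop[:POP - len(best) - len(clones)]
pop.sort(key=fitness, reverse=True)
print(fitness(pop[0]), "of", len(CLAUSES), "clauses satisfied")
```

    The clause structure maps naturally onto transistor configurations because each 3-SAT clause constrains only three variables, which keeps the corresponding circuit blocks local.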

  16. Steffensen's Integral Inequality on Time Scales

    Directory of Open Access Journals (Sweden)

    Ozkan Umut Mutlu

    2007-01-01

We establish generalizations of Steffensen's integral inequality on time scales via the diamond-α dynamic integral, which is defined as a linear combination of the delta and nabla integrals.
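
    For reference, the classical inequality being generalized: if $f$ is nonincreasing on $[a,b]$, $0 \le g \le 1$, and $\lambda = \int_a^b g(t)\,dt$, then

```latex
\int_{b-\lambda}^{b} f(t)\,dt \;\le\; \int_a^b f(t)\,g(t)\,dt \;\le\; \int_a^{a+\lambda} f(t)\,dt .
```

    On a time scale the integrals are taken in the diamond-α sense, i.e. as the convex combination $\alpha \int \cdot\, \Delta t + (1-\alpha) \int \cdot\, \nabla t$ of the delta and nabla integrals mentioned in the abstract.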

  17. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    Science.gov (United States)

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture for low-cost embedded systems. The algorithm mimics the motion perception functions of retina, V1, and MT neurons in the primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of a confidence map to indicate the reliability of the estimation result at each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipelining and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at a 53 MHz clock frequency. It achieved 30 frames/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the velocity estimated by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
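
    The motion-energy stage described above can be illustrated in a few lines of NumPy: quadrature pairs of direction-tuned filters are applied and their squared responses summed, so that only multiply-accumulate operations remain at runtime (the cosine/sine coefficients can be precomputed). This is a generic Adelson-Bergen-style sketch under assumed filter frequencies and window size, not the paper's circuit.

        import numpy as np

        def opponent_motion_energy(frames, fs=0.1, ft=0.25):
            # frames: T x X array of luminance samples at one probing region.
            # fs, ft: spatial/temporal filter frequencies (illustrative values).
            T, X = frames.shape
            t = np.arange(T)[:, None]
            x = np.arange(X)[None, :]
            energy = {}
            for name, sign in (("rightward", -1), ("leftward", +1)):
                phase = 2 * np.pi * (fs * x + sign * ft * t)
                even = np.sum(frames * np.cos(phase))  # quadrature pair tuned
                odd = np.sum(frames * np.sin(phase))   # to one drift direction
                energy[name] = even ** 2 + odd ** 2    # phase-invariant energy
            return energy["rightward"] - energy["leftward"]

        # A rightward-drifting grating yields positive opponent energy:
        t = np.arange(32)[:, None]
        x = np.arange(32)[None, :]
        grating = np.cos(2 * np.pi * (0.1 * x - 0.25 * t))
        print(opponent_motion_energy(grating) > 0)     # expect True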

  18. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  19. A multichip aVLSI system emulating orientation selectivity of primary visual cortical cells.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2005-07-01

    In this paper, we designed and fabricated a multichip neuromorphic analog very large scale integrated (aVLSI) system, which emulates the orientation selective response of the simple cell in the primary visual cortex. The system consists of a silicon retina and an orientation chip. An image, which is filtered by a concentric center-surround (CS) antagonistic receptive field of the silicon retina, is transferred to the orientation chip. The image transfer from the silicon retina to the orientation chip is carried out with analog signals. The orientation chip selectively aggregates multiple pixels of the silicon retina, mimicking the feedforward model proposed by Hubel and Wiesel. The chip provides the orientation-selective (OS) outputs which are tuned to 0 degrees, 60 degrees, and 120 degrees. The feed-forward aggregation reduces the fixed pattern noise that is due to the mismatch of the transistors in the orientation chip. The spatial properties of the orientation selective response were examined in terms of the adjustable parameters of the chip, i.e., the number of aggregated pixels and size of the receptive field of the silicon retina. The multichip aVLSI architecture used in the present study can be applied to implement higher order cells such as the complex cell of the primary visual cortex.

  20. VLSI Architecture and Design

    OpenAIRE

    Johnsson, Lennart

    1980-01-01

    Integrated circuit technology is rapidly approaching a state where feature sizes of one micron or less are tractable. Chip sizes are increasing slowly. These two developments result in considerably increased complexity in chip design. The physical characteristics of integrated circuit technology are also changing: the cost of communication will become dominant, making new architectures and algorithms both feasible and desirable. A large number of processors on a single chip will be possible…

  1. Advanced symbolic analysis for VLSI systems methods and applications

    CERN Document Server

    Shi, Guoyong; Tlelo Cuautle, Esteban

    2014-01-01

    This book provides comprehensive coverage of recent advances in symbolic analysis techniques for design automation of nanometer VLSI systems. The presentation is organized in parts on fundamentals, basic implementation methods, and applications for VLSI design. Topics emphasized include statistical timing and crosstalk analysis, statistical and parallel analysis, performance bound analysis, and behavioral modeling for analog integrated circuits. Among the recent advances, the Binary Decision Diagram (BDD) based approaches are studied in depth; the BDD-based hierarchical symbolic analysis approaches have essentially broken the analog circuit size barrier. In particular, this book: • Provides an overview of classical symbolic analysis methods and a comprehensive presentation of modern BDD-based symbolic analysis techniques; • Describes detailed implementation strategies for BDD-based algorithms, including the principles of zero-suppression, variable ordering and canonical reduction; • Int…

  2. VLSI Design with Alliance Free CAD Tools: an Implementation Example

    Directory of Open Access Journals (Sweden)

    Chávez-Bracamontes Ramón

    2015-07-01

    Full Text Available This paper presents the methodology used for a digital integrated circuit design that implements the communication protocol known as the Serial Peripheral Interface (SPI), using the Alliance CAD System. The aim of this paper is to show how VLSI design work can be done by graduate and undergraduate students with minimal resources and experience. The physical design was sent for fabrication using the CMOS AMI C5 process, which features a 0.5 micrometer transistor size, sponsored by the MOSIS Educational Program. Tests were made on a platform that transfers data from inertial sensor measurements to the designed SPI chip, which in turn sends the data back on a parallel bus to a common microcontroller. The results show the efficiency of the employed methodology in VLSI design, as well as the feasibility of IC manufacturing for school projects that have insufficient or no source of funding.

  3. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    Science.gov (United States)

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.

  4. Synthesis of on-chip control circuits for mVLSI biochips

    DEFF Research Database (Denmark)

    Potluri, Seetal; Schneider, Alexander Rüdiger; Hørslev-Petersen, Martin

    2017-01-01

    …them to laboratory environments. To address this issue, researchers have proposed methods to reduce the number of off-chip pressure sources through the integration of on-chip pneumatic control logic circuits fabricated using three-layer monolithic membrane valve technology. Traditionally, mVLSI biochip… chip control circuit design and (iii) the integration of on-chip control in the placement and routing design tasks. In this paper we present a design methodology for logic synthesis and physical synthesis of mVLSI biochips that use on-chip control. We show how the proposed methodology can be successfully applied to generate biochip layouts with integrated on-chip pneumatic control.

  5. VLSI structures for track finding

    International Nuclear Information System (INIS)

    Dell'Orso, M.

    1989-01-01

    We discuss the architecture of a device, based on the concept of associative memory, designed to solve the track finding problem typical of high-energy physics experiments in a time span of a few microseconds, even for very high multiplicity events. This "machine" is implemented as a large array of custom VLSI chips. All the chips are identical, and each of them stores a number of "patterns". All the patterns in all the chips are compared in parallel to the data coming from the detector while the detector is being read out. (orig.)

  6. The test of VLSI circuits

    Science.gov (United States)

    Baviere, Ph.

    Tests which have proven effective for evaluating VLSI circuits for space applications are described. It is recommended that circuits be examined after each manufacturing step to gain fast feedback on inadequacies in the production system. Data from failure modes which occur during the operational lifetimes of circuits also permit redefinition of the manufacturing and quality control process to eliminate the identified defects. Other tests include determination of the operational envelope of the circuits, examination of the circuit response to controlled inputs, and the performance and functional speeds of ROM and RAM memories. Finally, it is desirable that all new circuits be designed with testing in mind.

  7. Surface and interface effects in VLSI

    CERN Document Server

    Einspruch, Norman G

    1985-01-01

    VLSI Electronics Microstructure Science, Volume 10: Surface and Interface Effects in VLSI presents the advances made in the science of semiconductor surfaces and interfaces as they relate to electronics. This volume aims to provide a better understanding and control of surface- and interface-related properties. The book begins with an introductory chapter on the intimate link between interfaces and devices. The book is then divided into two parts. The first part covers the chemical and geometric structures of prototypical VLSI interfaces. Subjects detailed include the technologically most import…

  8. Modularly Integrated MEMS Technology

    National Research Council Canada - National Science Library

    Eyoum, Marie-Angie N

    2006-01-01

    Process design, development and integration to fabricate reliable MEMS devices on top of VLSI-CMOS electronics without damaging the underlying circuitry have been investigated throughout this dissertation...

  9. VLSI Technology for Cognitive Radio

    Science.gov (United States)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks in cognitive radio is making the spectrum sensing scheme efficient enough to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensing block. The VLSI structure of the optimised flexible spectrum sensor is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All results are tabulated and comparisons are made. Our model opens up a new scheme for optimised and effective spectrum sensing.
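
    A minimal sketch of the underlying energy-detection test (not the paper's optimised filter structure): average the squared samples of the sensed band and compare against a noise-calibrated threshold. The 5% false-alarm target, the Gaussian threshold approximation, and the signal parameters below are assumptions for illustration.

        import numpy as np

        def energy_detector(samples, noise_var):
            # Threshold targets a 5% false-alarm rate via the Gaussian
            # approximation of the test statistic (z_0.95 = 1.645), valid for large N.
            n = len(samples)
            stat = np.mean(np.abs(samples) ** 2)
            thresh = noise_var * (1 + 1.645 * np.sqrt(2.0 / n))
            return stat > thresh

        rng = np.random.default_rng(1)
        idle = rng.normal(0.0, 1.0, 4096)                    # noise-only band
        tone = 0.5 * np.sin(2 * np.pi * 0.1 * np.arange(4096))
        busy = idle + tone                                   # band with a primary user
        print(energy_detector(idle, 1.0), energy_detector(busy, 1.0))  # expect False, True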

  10. Formal verification an essential toolkit for modern VLSI design

    CERN Document Server

    Seligman, Erik; Kumar, M V Achutha Kiran

    2015-01-01

    Formal Verification: An Essential Toolkit for Modern VLSI Design presents practical approaches for design and validation, with hands-on advice for working engineers integrating these techniques into their work. Building on a basic knowledge of SystemVerilog, this book demystifies FV and presents the practical applications that are bringing it into mainstream design and validation processes at Intel and other companies. The text prepares readers to effectively introduce FV in their organization and deploy FV techniques to increase design and validation productivity. Presents formal verific…

  11. An Asynchronous Low Power and High Performance VLSI Architecture for Viterbi Decoder Implemented with Quasi Delay Insensitive Templates

    Directory of Open Access Journals (Sweden)

    T. Kalavathi Devi

    2015-01-01

    Full Text Available Convolutional codes are widely used as Forward Error Correction (FEC) codes in digital communication systems. For decoding convolutional codes at the receiver end, the Viterbi decoder is often preferred, as it meets the demands of high speed and low power. At present, the design of a competent system in Very Large Scale Integration (VLSI) technology requires these VLSI parameters to be finely defined. The proposed asynchronous method focuses on reducing the power consumption of the Viterbi decoder for various constraint lengths using asynchronous modules. The asynchronous designs are based on commonly used Quasi Delay Insensitive (QDI) templates, namely Precharge Half Buffer (PCHB) and Weak Conditioned Half Buffer (WCHB). The functionality of the proposed asynchronous design is simulated and verified using Tanner SPICE (TSPICE) in the 0.25 µm, 65 nm, and 180 nm technologies of the Taiwan Semiconductor Manufacturing Company (TSMC). The simulation results show that the asynchronous design techniques yield a 25.21% power reduction compared to the synchronous design and operate at a speed of 475 MHz.
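
    For reference, hard-decision Viterbi decoding of a small rate-1/2, constraint-length-3 convolutional code (generators 7,5 octal) can be sketched as follows. This shows the add-compare-select recursion that such architectures implement in hardware, not the paper's QDI design; the code parameters and test message are assumptions.

        def conv_encode(bits, g=(0b111, 0b101)):
            # Rate-1/2, K=3 convolutional encoder; state = two previous inputs.
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state
                out += [bin(reg & gi).count("1") % 2 for gi in g]
                state = reg >> 1
            return out

        def viterbi_decode(rx, g=(0b111, 0b101), n_states=4):
            INF = float("inf")
            metric = [0] + [INF] * (n_states - 1)      # encoder starts in state 0
            paths = [[] for _ in range(n_states)]
            for i in range(0, len(rx), 2):
                new_metric = [INF] * n_states
                new_paths = [None] * n_states
                for s in range(n_states):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):                   # hypothesize the input bit
                        reg = (b << 2) | s
                        expect = [bin(reg & gi).count("1") % 2 for gi in g]
                        ns = reg >> 1
                        m = metric[s] + sum(e != r for e, r in zip(expect, rx[i:i + 2]))
                        if m < new_metric[ns]:         # add-compare-select: keep survivor
                            new_metric[ns], new_paths[ns] = m, paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[min(range(n_states), key=lambda s: metric[s])]

        msg = [1, 0, 1, 1, 0, 0, 1]
        rx = conv_encode(msg + [0, 0])                 # two tail bits flush the encoder
        rx[3] ^= 1                                     # inject one channel error
        print(viterbi_decode(rx)[: len(msg)] == msg)   # True: error corrected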

  12. Development of Radhard VLSI electronics for SSC calorimeters

    International Nuclear Information System (INIS)

    Dawson, J.W.; Nodulman, L.J.

    1989-01-01

    A new program for the development of integrated electronics for liquid argon calorimeters in the SSC detector environment is being started at Argonne National Laboratory. Scientists from Brookhaven National Laboratory and Vanderbilt University, together with industrial participants, are expected to collaborate in this work. Interaction rates, segmentation, and the radiation environment dictate that the front-end electronics of SSC calorimeters must be implemented in the form of highly integrated, radhard, analog, low-noise, VLSI custom monolithic devices. Important considerations are power dissipation, the choice of functions integrated on the front-end chips, and cabling requirements. An extensive level of expertise in radhard electronics exists within the industrial community, and a primary objective of this work is to bring that expertise to bear on the problems of SSC detector design. Radiation hardness measurements and requirements as well as calorimeter design will be primarily the responsibility of Argonne scientists and our Brookhaven and Vanderbilt colleagues. Radhard VLSI design and fabrication will be primarily the industrial participants' responsibility. The rapid-cycling synchrotron at Argonne will be used for radiation damage studies involving response to neutrons and charged particles, while damage from gammas will be investigated at Brookhaven. 10 refs., 6 figs., 2 tabs

  13. Fast-prototyping of VLSI

    International Nuclear Information System (INIS)

    Saucier, G.; Read, E.

    1987-01-01

    Fast-prototyping will be a reality in the very near future if both straightforward design methods and fast manufacturing facilities are available. This book focuses, first, on the motivation for fast-prototyping. Economic aspects and market considerations are analysed by European and Japanese companies. In the second chapter, new design methods are identified, mainly for full-custom circuits. Of course, silicon compilers play a key role, and the introduction of artificial intelligence techniques sheds new light on the subject. At present, fast-prototyping on gate arrays or on standard cells is the most conventional technique, and the third chapter updates the state of the art in this area. The fourth chapter concentrates specifically on e-beam direct-writing for submicron IC technologies. In the fifth chapter, a strategic point in fast-prototyping, namely the test problem, is addressed. Design for testability and the interface to the test equipment are mandatory to fulfil the test requirements of fast-prototyping. Finally, the last chapter deals with education, where many people complain about the lack of use of fast-prototyping in higher education for VLSI.

  14. High performance VLSI telemetry data systems

    Science.gov (United States)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground-based telemetry acquisition systems, well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end-user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data system needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project-specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board-level functional component, to integrated telemetry data system.

  15. Compact MOSFET models for VLSI design

    CERN Document Server

    Bhattacharyya, A B

    2009-01-01

    Practicing designers, students, and educators in the semiconductor field face an ever-expanding portfolio of MOSFET models. In Compact MOSFET Models for VLSI Design, A.B. Bhattacharyya presents a unified perspective on the topic, allowing the practitioner to view and interpret device phenomena concurrently using different modeling strategies. Readers will learn to link device physics with model parameters, helping to close the gap between device understanding and its use for optimal circuit performance. Bhattacharyya also lays bare the core physical concepts that will drive the future of VLSI.

  16. Vision for single flux quantum very large scale integrated technology

    International Nuclear Information System (INIS)

    Silver, Arnold; Bunyk, Paul; Kleinsasser, Alan; Spargo, John

    2006-01-01

    Single flux quantum (SFQ) electronics is extremely fast and has very low on-chip power dissipation. SFQ VLSI is an excellent candidate for high-performance computing and other applications requiring extremely high-speed signal processing. Despite this, SFQ technology has generally not been accepted for system implementation. We argue that this is due, at least in part, to the use of outdated tools to produce SFQ circuits and chips. Assuming the use of tools equivalent to those employed in the semiconductor industry, we estimate the density of Josephson junctions, circuit speed, and power dissipation that could be achieved with SFQ technology. Today, CMOS lithography is at 90-65 nm with about 20 layers. Assuming equivalent technology, aggressively increasing the current density above 100 kA cm⁻² to achieve junction speeds approximately 1000 GHz, and reducing device footprints by converting device profiles from planar to vertical, one could expect to integrate about 250 M Josephson junctions cm⁻² into SFQ digital circuits. This should enable circuit operation with clock frequencies above 200 GHz and place approximately 20 K gates within a radius of one clock period. As a result, complete microprocessors, including integrated memory registers, could be fabricated on a single chip

  17. Vision for single flux quantum very large scale integrated technology

    Energy Technology Data Exchange (ETDEWEB)

    Silver, Arnold [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Bunyk, Paul [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Kleinsasser, Alan [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Spargo, John [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States)

    2006-05-15

    Single flux quantum (SFQ) electronics is extremely fast and has very low on-chip power dissipation. SFQ VLSI is an excellent candidate for high-performance computing and other applications requiring extremely high-speed signal processing. Despite this, SFQ technology has generally not been accepted for system implementation. We argue that this is due, at least in part, to the use of outdated tools to produce SFQ circuits and chips. Assuming the use of tools equivalent to those employed in the semiconductor industry, we estimate the density of Josephson junctions, circuit speed, and power dissipation that could be achieved with SFQ technology. Today, CMOS lithography is at 90-65 nm with about 20 layers. Assuming equivalent technology, aggressively increasing the current density above 100 kA cm⁻² to achieve junction speeds approximately 1000 GHz, and reducing device footprints by converting device profiles from planar to vertical, one could expect to integrate about 250 M Josephson junctions cm⁻² into SFQ digital circuits. This should enable circuit operation with clock frequencies above 200 GHz and place approximately 20 K gates within a radius of one clock period. As a result, complete microprocessors, including integrated memory registers, could be fabricated on a single chip.

  18. Avionic Data Bus Integration Technology

    Science.gov (United States)

    1991-12-01

    …address the hardware-software interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion … the SCP. In 1984, the Sperry Corporation developed a fault-tolerant system which employed multiversion programming, voting, and monitoring for error… MULTIVERSION PROGRAMMING: N-version programming. N-VERSION PROGRAMMING: The independent coding of a number, N, of redundant computer programs that…

  19. Wafer integrated micro-scale concentrating photovoltaics

    Science.gov (United States)

    Gu, Tian; Li, Duanhui; Li, Lan; Jared, Bradley; Keeler, Gordon; Miller, Bill; Sweatt, William; Paap, Scott; Saavedra, Michael; Das, Ujjwal; Hegedus, Steve; Tauke-Pedretti, Anna; Hu, Juejun

    2017-09-01

    Recent development of a novel micro-scale PV/CPV technology is presented. The Wafer Integrated Micro-scale PV approach (WPV) seamlessly integrates multijunction micro-cells with a multi-functional silicon platform that provides optical micro-concentration, hybrid photovoltaics, and mechanical micro-assembly. The wafer-embedded micro-concentrating elements are shown to considerably improve the concentration-acceptance-angle product, potentially leading to dramatically reduced module materials and fabrication costs, sufficient angular tolerance for low-cost trackers, and an ultra-compact optical architecture, which makes the WPV module compatible with commercial flat panel infrastructures. The PV/CPV hybrid architecture further allows the collection of both direct and diffuse sunlight, thus extending the geographic and market domains for cost-effective PV system deployment. The WPV approach can potentially benefit from both the high performance of multijunction cells and the low cost of flat-plate Si PV systems.

  20. A Knowledge Based Approach to VLSI CAD

    Science.gov (United States)

    1983-09-01

    A Knowledge Based Approach to VLSI CAD, Louis Steinberg and … major issues lies in building up and managing the knowledge base of design expertise. We expect that, as with many recent expert systems, in order to…

  1. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In the optimization of VLSI physical design, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find optimal solutions for physical design components such as partitioning, floorplanning, placement, and routing. This work optimizes benchmark circuits across these components of physical design using a hierarchical approach of evolutionary algorithms. The goals of minimizing delay in partitioning, silicon area in floorplanning, layout area in placement, and wirelength in routing also influence other criteria such as power, clock, speed, cost, and so forth. A hybrid evolutionary algorithm is applied in each of these phases: an evolutionary algorithm that includes one or more local search steps within its evolutionary cycles to minimize area and interconnect length. This approach combines a hierarchical design, genetic algorithms, and simulated annealing to attain the objective, and can quickly produce optimal solutions for the popular benchmarks.
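
    A toy memetic (hybrid) loop in the spirit described, genetic recombination with a local-improvement step inside each generation, minimizing a 1-D wirelength proxy. The representation, operators, net list, and the use of greedy swaps in place of simulated annealing are illustrative assumptions, not the paper's tuned algorithm.

        import random

        def wirelength(order, nets):
            # Total 1-D span of each net: a proxy for interconnect length.
            pos = {cell: slot for slot, cell in enumerate(order)}
            return sum(max(pos[c] for c in net) - min(pos[c] for c in net) for net in nets)

        def local_search(order, nets):
            # Greedy adjacent-swap improvement: the "local step" of the hybrid.
            best = wirelength(order, nets)
            improved = True
            while improved:
                improved = False
                for i in range(len(order) - 1):
                    order[i], order[i + 1] = order[i + 1], order[i]
                    w = wirelength(order, nets)
                    if w < best:
                        best, improved = w, True
                    else:
                        order[i], order[i + 1] = order[i + 1], order[i]  # undo swap
            return order

        def hybrid_ga(n_cells, nets, pop_size=20, generations=50):
            pop = [random.sample(range(n_cells), n_cells) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda o: wirelength(o, nets))
                parents = pop[: pop_size // 2]
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n_cells)            # order crossover
                    child = a[:cut] + [c for c in b if c not in a[:cut]]
                    children.append(local_search(child, nets))    # refine each child
                pop = parents + children
            return min(pop, key=lambda o: wirelength(o, nets))

        nets = [(0, 3), (1, 2), (2, 4), (0, 4), (1, 3)]
        best = hybrid_ga(5, nets)
        print(best, wirelength(best, nets))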

  2. CMOS VLSI Active-Pixel Sensor for Tracking

    Science.gov (United States)

    Pain, Bedabrata; Sun, Chao; Yang, Guang; Heynssens, Julie

    2004-01-01

    An architecture for a proposed active-pixel sensor (APS) and a design to implement the architecture in a complementary metal oxide semiconductor (CMOS) very-large-scale integrated (VLSI) circuit provide for some advanced features that are expected to be especially desirable for tracking pointlike features of stars. The architecture would also make this APS suitable for robotic-vision and general pointing and tracking applications. CMOS imagers in general are well suited for pointing and tracking because they can be configured for random access to selected pixels and to provide readout from windows of interest within their fields of view. However, until now, the architectures of CMOS imagers have not supported multiwindow operation or low-noise data collection. Moreover, smearing and motion artifacts in collected images have made prior CMOS imagers unsuitable for tracking applications. The proposed CMOS imager (see figure) would include an array of 1,024 by 1,024 pixels containing high-performance photodiode-based APS circuitry. The pixel pitch would be 9 µm. The operations of the pixel circuits would be sequenced and otherwise controlled by an on-chip timing and control block, which would enable the collection of image data, during a single frame period, from either the full frame (that is, all 1,024 by 1,024 pixels) or from within as many as 8 different arbitrarily placed windows as large as 8 by 8 pixels each. A typical prior CMOS APS operates in a row-at-a-time ("rolling-shutter") readout mode, which gives rise to exposure skew. In contrast, the proposed APS would operate in a sample-first/read-later mode, suppressing rolling-shutter effects. In this mode, the analog readout signals from the pixels corresponding to the windows of interest (which windows, in the star-tracking application, would presumably contain guide stars) would be sampled rapidly by routing them through a programmable diagonal switch array to an on-chip parallel analog memory array. The…

  3. Harnessing VLSI System Design with EDA Tools

    CERN Document Server

    Kamat, Rajanish K; Gaikwad, Pawan K; Guhilot, Hansraj

    2012-01-01

    This book explores various dimensions of EDA technologies for achieving different goals in VLSI system design. Although the scope of EDA is very broad and comprises diversified hardware and software tools to accomplish different phases of VLSI system design, such as design, layout, simulation, testability, prototyping and implementation, this book focuses only on demystifying the code, a.k.a. firmware development and its implementation with FPGAs. Since there are a variety of languages for system design, this book covers various issues related to VHDL, Verilog and System C synergized with EDA tools, using a variety of case studies such as testability, verification and power consumption. * Covers aspects of VHDL, Verilog and Handel C in one text; * Enables designers to judge the appropriateness of each EDA tool for relevant applications; * Omits discussion of design platforms and focuses on design case studies; * Uses design case studies from diversified application domains such as network on chip, hospital on...

  4. VLSI 'smart' I/O module development

    Science.gov (United States)

    Kirk, Dan

    The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.

  5. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. Extensive study of the properties of these new pseudorandom number generators is made using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown that these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design for testability circuitry.
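
    The pseudorandom-sequence idea is easy to sketch in software: every cell of an elementary cellular automaton updates in parallel on each clock, and one cell is tapped for the output bit stream; the fully parallel update is what makes the scheme attractive in VLSI. Rule 30 is used below purely for illustration; the specific automata classes, widths, and tap points studied in the thesis may differ.

        def ca_prng_bits(width=64, seed=1 << 32, rule=30, n_bits=16):
            # Elementary 1-D cellular automaton with periodic boundary conditions.
            state = [(seed >> i) & 1 for i in range(width)]
            out = []
            for _ in range(n_bits):
                out.append(state[width // 2])          # tap the centre cell
                state = [
                    # neighbourhood (left, centre, right) indexes a bit of `rule`
                    (rule >> ((state[(i - 1) % width] << 2)
                              | (state[i] << 1)
                              | state[(i + 1) % width])) & 1
                    for i in range(width)
                ]
            return out

        print(ca_prng_bits())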

  6. Applications of VLSI circuits to medical imaging

    International Nuclear Information System (INIS)

    O'Donnell, M.

    1988-01-01

    In this paper the application of advanced VLSI circuits to medical imaging is explored. The relationship of both general purpose signal processing chips and custom devices to medical imaging is discussed using examples of fabricated chips. In addition, advanced CAD tools for silicon compilation are presented. Devices built with these tools represent a possible alternative to custom devices and general purpose signal processors for the next generation of medical imaging systems

  7. First results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Anzivino, G.; Horisberger, R.; Hubbeling, L.; Hyams, B.; Parker, S.; Breakstone, A.; Litke, A.M.; Walker, J.T.; Bingefors, N.

    1986-01-01

    A 256-strip silicon detector with 25 μm strip pitch, connected to two 128-channel NMOS VLSI chips (Microplex), has been tested using straight-through tracks from a ruthenium beta source. The readout channels have a pitch of 47.5 μm. A single multiplexed output provides voltages proportional to the integrated charge from each strip. The most probable signal height from the beta traversals is approximately 14 times the rms noise in any single channel. (orig.)

  8. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: ► We implement photovoltaics on a large scale. ► We use three-dimensional modelling for accurate photovoltaic simulations. ► We consider the shadowing effect in the photovoltaic simulation. ► We validate the simulated results using detailed hourly measured data. - Abstract: For a large-scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building users' electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart, Germany was used to calculate the photovoltaic energy potential and to evaluate the local self-consumption of the energy produced. For most buildings of the district only annual electrical consumption data was available, and only selected buildings have electronic metering equipment. The available roof area of one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the district's total electricity consumption, and half of this generated electricity is directly used within the buildings.

  9. Towards an Analogue Neuromorphic VLSI Instrument for the Sensing of Complex Odours

    Science.gov (United States)

    Ab Aziz, Muhammad Fazli; Harun, Fauzan Khairi Che; Covington, James A.; Gardner, Julian W.

    2011-09-01

    Almost all electronic nose instruments reported to date employ pattern recognition algorithms written in software and run on digital processors, e.g. microprocessors, microcontrollers or FPGAs. Conversely, in this paper we describe the analogue VLSI implementation of an electronic nose through the design of a neuromorphic olfactory chip. The modelling, design and fabrication of the chip have already been reported. Here a smart interface has been designed and characterised for this neuromorphic chip. Thus we can demonstrate the functionality of the aVLSI neuromorphic chip, producing differing principal-neuron firing patterns in response to real sensor data. Further work is directed towards integrating 9 separate neuromorphic chips to create a large neuronal network to solve more complex olfactory problems.

  10. Robust working memory in an asynchronously spiking neural network realized in neuromorphic VLSI

    Directory of Open Access Journals (Sweden)

    Massimiliano eGiulioni

    2012-02-01

    Full Text Available We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of ‘high’ and ‘low’ firing activity. Depending on the overall excitability, transitions to the ‘high’ state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the ‘high’ state retains a working memory of a stimulus until well after its release. In the latter case, ‘high’ states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated corrupted ‘high’ states comprising neurons of both excitatory populations. Within a basin of attraction, the network dynamics corrects such states and re-establishes the prototypical ‘high’ state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  11. Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI.

    Science.gov (United States)

    Giulioni, Massimiliano; Camilleri, Patrick; Mattia, Maurizio; Dante, Vittorio; Braun, Jochen; Del Giudice, Paolo

    2011-01-01

    We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of "high" and "low"-firing activity. Depending on the overall excitability, transitions to the "high" state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the "high" state retains a "working memory" of a stimulus until well after its release. In the latter case, "high" states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated "corrupted" "high" states comprising neurons of both excitatory populations. Within a "basin of attraction," the network dynamics "corrects" such states and re-establishes the prototypical "high" state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  12. A neuromorphic VLSI device for implementing 2-D selective attention systems.

    Science.gov (United States)

    Indiveri, G

    2001-01-01

    Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding limited-processing-capacity systems with sensory inputs. It is found in many biological systems and can be a useful engineering tool for developing artificial systems that need to process sensory data in real time. In this paper we present a neuromorphic hardware model of a selective attention mechanism implemented on a very large scale integration (VLSI) chip, using analog circuits. The chip makes use of a spike-based representation for receiving input signals, transmitting output signals and for shifting the selection of the attended input stimulus over time. It can be interfaced to neuromorphic sensors and actuators, for implementing multichip selective attention systems. We describe the characteristics of the circuits used in the architecture and present experimental data measured from the system.

  13. Implementation of neuromorphic systems: from discrete components to analog VLSI chips (testing and communication issues).

    Science.gov (United States)

    Dante, V; Del Giudice, P; Mattia, M

    2001-01-01

    We review a series of implementations of electronic devices aiming at imitating to some extent structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a special purpose, asynchronous bus for communication among neuromorphic chips; a short and preliminary account of an application-oriented device, taking advantage of the above communication infrastructure.

  14. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam

    2011-12-01

    Power dissipation poses a great challenge for VLSI designers. With the intense down-scaling of technology, the total power consumption of the chip is made up primarily of leakage power dissipation. This paper proposes using a custom-designed MEMS switch to power-gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result of implementing this novel power gating technique, a standby leakage power reduction of 99% and energy savings of 33.3% are achieved. Finally, the possible effects of surge currents and ground bounce noise are studied. These findings allow longer operation times for battery-operated systems characterized by long standby periods. © 2011 IEEE.
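
    The trade-off being balanced here can be captured in two lines of arithmetic: power gating pays off only when the leakage energy saved during standby exceeds the energy spent actuating the switch. All numbers below are illustrative assumptions, not the paper's measurements (the paper reports a 99% standby leakage reduction and 33.3% energy savings).

        # Hypothetical parameters, for illustration only.
        P_leak = 1e-3        # leakage power while idle, W (assumed)
        reduction = 0.99     # fraction of leakage removed by the MEMS switch
        E_switch = 2e-6      # energy to actuate the switch once, J (assumed)

        P_gated = P_leak * (1 - reduction)
        # Gating pays off once saved leakage energy exceeds the actuation cost:
        t_breakeven = E_switch / (P_leak - P_gated)
        print(f"break-even standby time: {t_breakeven * 1e3:.2f} ms")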

  15. System-Level Modeling and Synthesis Techniques for Flow-Based Microfluidic Very Large Scale Integration Biochips

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan

    Microfluidic biochips integrate different biochemical analysis functionalities on-chip and offer several advantages over the conventional biochemical laboratories. In this thesis, we focus on the flow-based biochips. The basic building block of such a chip is a valve, which can be fabricated at very… propose a framework for mapping the biochemical applications onto the mVLSI biochips, binding and scheduling the operations and performing fluid routing. A control synthesis framework for determining the exact valve activation sequence required to execute the application is also proposed. In order to reduce the macro-assembly around the chip and enhance chip scalability, we propose an approach for biochip pin count minimization. We also propose a throughput maximization scheme for the cell culture mVLSI biochips, saving time and reducing costs. We have extensively evaluated the proposed…

  16. Technology computer aided design simulation for VLSI MOSFET

    CERN Document Server

    Sarkar, Chandan Kumar

    2013-01-01

    Responding to recent developments and a growing VLSI circuit manufacturing market, Technology Computer Aided Design: Simulation for VLSI MOSFET examines advanced MOSFET processes and devices through TCAD numerical simulations. The book provides a balanced summary of TCAD and MOSFET basic concepts, equations, physics, and new technologies related to TCAD and MOSFET. A firm grasp of these concepts allows for the design of better models, thus streamlining the design process, saving time and money. This book places emphasis on the importance of modeling and simulations of VLSI MOS transistors and

  17. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce memory-based approaches to emulating machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low-level intelligence tasks and aim at providing scalable solutions to high…

  18. ORGANIZATION OF GRAPHIC INFORMATION FOR VIEWING THE MULTILAYER VLSI TOPOLOGY

    Directory of Open Access Journals (Sweden)

    V. I. Romanov

    2016-01-01

    Full Text Available One of the possible ways to reorganize the graphical information describing the set of topology layers of a modern VLSI is considered. The method is aimed at use under the constraint of a bounded video card memory. An additional effect, providing high performance in forming the multi-image layout of the multi-layer topology of a modern VLSI, is achieved by preloading the required textures in an auxiliary background process.

  19. Synthesis algorithm of VLSI multipliers for ASIC

    Science.gov (United States)

    Chua, O. H.; Eldin, A. G.

    1993-01-01

    Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.

  20. DPL/Daedalus design environment (for VLSI)

    Energy Technology Data Exchange (ETDEWEB)

    Batali, J; Mayle, N; Shrobe, H; Sussman, G; Weise, D

    1981-01-01

    The DPL/Daedalus design environment is an interactive VLSI design system implemented at the MIT Artificial Intelligence Laboratory. The system consists of several components: a layout language called DPL (for design procedure language); an interactive graphics facility (Daedalus); and several special purpose design procedures for constructing complex artifacts such as PLAs and microprocessor data paths. Coordinating all of these is a generalized property list data base which contains both the data representing circuits and the procedures for constructing them. The authors first review the nature of the data base and then turn to DPL and Daedalus, the two most common ways of entering information into the data base. The next two sections review the specialized procedures for constructing PLAs and data paths; the final section describes a tool for hierarchical node extraction. 5 references.

  1. PLA realizations for VLSI state machines

    Science.gov (United States)

    Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.

    1990-01-01

    A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.

  2. Development methods for VLSI-processors

    International Nuclear Information System (INIS)

    Horninger, K.; Sandweg, G.

    1982-01-01

    The aim of this project, which was originally planned for 3 years, was the development of modern system and circuit concepts for VLSI processors having a 32-bit wide data path. The result of this first year's work is the concept of a general-purpose processor. This processor is not only logically but also physically (on the chip) divided into four functional units: a microprogrammable instruction unit, an execution unit in slice technique, a fully associative cache memory and an I/O unit. For the ALU of the execution unit, circuits in PLA and slice techniques have been realized. On the basis of regularity, area consumption and achievable performance, the slice technique has been preferred. The designs utilize self-testing circuitry. (orig.) [de

  3. VLSI PARTITIONING ALGORITHM WITH ADAPTIVE CONTROL PARAMETER

    Directory of Open Access Journals (Sweden)

    P. N. Filippenko

    2013-03-01

    Full Text Available The article deals with the problem of very large-scale integration circuit partitioning. A graph is selected as the mathematical model describing the integrated circuit. A modification of the ant colony optimization algorithm is presented, which is used to solve the graph partitioning problem. Ant colony optimization is an optimization method based on the principles of self-organization and other useful features of the behaviour of ants. The proposed search system is based on the ant colony optimization algorithm with an improved method of initial distribution and dynamic adjustment of the control search parameters. The experimental results and performance comparison show that the proposed method of very large-scale integration circuit partitioning provides better search performance than other well-known algorithms.
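
    A minimal ant-colony sketch for balanced two-way partitioning on the graph model: pheromone values bias each vertex toward a partition, the best cut found so far reinforces the trail, and evaporation keeps the search adaptive. The parameter values and update rule are generic ACO assumptions, not the article's tuned algorithm (which adds an improved initial distribution and dynamic control-parameter adjustment).

        import random

        def cut_size(edges, side):
            return sum(1 for u, v in edges if side[u] != side[v])

        def aco_bipartition(n, edges, ants=10, iters=50, rho=0.1):
            # tau[v][p]: learned desirability of placing vertex v in part p.
            tau = [[1.0, 1.0] for _ in range(n)]
            best, best_cut = None, float("inf")
            cap = n // 2 + n % 2                       # enforce balanced parts
            for _ in range(iters):
                for _ in range(ants):
                    side, count = [0] * n, [0, 0]
                    for v in random.sample(range(n), n):
                        w = [tau[v][p] if count[p] < cap else 0.0 for p in (0, 1)]
                        p = random.choices((0, 1), weights=w)[0]
                        side[v], count[p] = p, count[p] + 1
                    c = cut_size(edges, side)
                    if c < best_cut:
                        best, best_cut = side[:], c
                for v in range(n):                     # evaporate, then reinforce best
                    for p in (0, 1):
                        tau[v][p] *= 1 - rho
                    tau[v][best[v]] += 1.0 / (1 + best_cut)
            return best, best_cut

        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4), (0, 4)]
        print(aco_bipartition(8, edges))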

  4. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

    This chapter aims to discuss the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We first show why the multi-scale modelling approach is required, given the nature of the materials and the phenomena involved under irradiation. We then present the multiple facets of the multi-scale modelling approach, while giving some recommendations with regard to its application. We also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  5. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    National Research Council Canada - National Science Library

    Horiuchi, Timothy K; Krishnaprasad, P. S

    2007-01-01

    … This includes multiple efforts related to a VLSI-based echolocation system being developed in one of our laboratories, ranging from algorithm development and bat flight data analysis to VLSI circuit design…

  6. An extended Halanay inequality of integral type on time scales

    Directory of Open Access Journals (Sweden)

    Boqun Ou

    2015-07-01

    Full Text Available In this paper, we obtain a Halanay-type inequality of integral type on time scales which improves and extends some earlier results for both the continuous and discrete cases. Several illustrative examples are also given.
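
    For reference, the classical continuous-time Halanay inequality that such results extend is a standard statement (given here for context, not quoted from the record):

        \textbf{Halanay's inequality (classical form).} Suppose $x(t) \ge 0$ satisfies
        \[
          x'(t) \le -a\,x(t) + b \sup_{s \in [t-\tau,\,t]} x(s), \qquad t \ge t_0,
        \]
        with constants $a > b > 0$. Then there exist $K > 0$ and $\gamma > 0$ such that
        \[
          x(t) \le K e^{-\gamma (t - t_0)}, \qquad t \ge t_0 .
        \]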

  7. Wafer-Scale Integration of Systolic Arrays,

    Science.gov (United States)

    1985-10-01

    … problems considered heretofore in this paper also have an interpretation in a purely graph-theoretic model. Suppose we are given a two-dimensional… References include: "… graphs," Magyar Tud. Akad. Mat. Kut. Int. Közl., Vol. 5, 1960, pp. 17-61; [6] D. Fussell and P. Varman, "Fault-tolerant wafer-scale architectures for…"

  8. Monolithic active pixel sensors (MAPS) in a VLSI CMOS technology

    CERN Document Server

    Turchetta, R; Manolopoulos, S; Tyndel, M; Allport, P P; Bates, R; O'Shea, V; Hall, G; Raymond, M

    2003-01-01

    Monolithic Active Pixel Sensors (MAPS) designed in a standard VLSI CMOS technology have recently been proposed as a compact pixel detector for the detection of high-energy charged particles in vertex/tracking applications. MAPS, also named CMOS sensors, are already extensively used in visible light applications. With respect to other competing imaging technologies, CMOS sensors have several potential advantages in terms of low cost, low power, lower noise at higher speed, random access to pixels, which allows windowing of regions of interest, and the ability to integrate several functions on the same chip. Altogether this leads to the concept of a 'camera-on-a-chip'. In this paper, we review the use of CMOS sensors for particle physics and analyse their performance in terms of efficiency (fill factor), signal generation, noise, readout speed and sensor area. In most high-energy physics applications, data reduction is needed in the sensor at an early stage of the data processing, before transfer of the data to ta…

  9. Initial beam test results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Adolphsen, C.; Litke, A.; Schwarz, A.

    1986-01-01

    Silicon detectors with 256 strips, having a pitch of 25 μm, and connected to two 128 channel NMOS VLSI chips each (Microplex), have been tested in relativistic charged particle beams at CERN and at the Stanford Linear Accelerator Center. The readout chips have an input channel pitch of 47.5 μm and a single multiplexed output which provides voltages proportional to the integrated charge from each strip. The most probable signal height from minimum ionizing tracks was 15 times the rms noise in any single channel. Two-track traversals with a separation of 100 μm were cleanly resolved

  10. Digital Systems Validation Handbook. Volume 2. Chapter 18. Avionic Data Bus Integration Technology

    Science.gov (United States)

    1993-11-01

    interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion software, which make up digital...1984, the Sperry Corporation developed a fault tolerant system which employed multiversion programming, voting, and monitoring for error detection and...formulate all the significant behavior of a system. MULTIVERSION PROGRAMMING: see N-version programming. N-VERSION PROGRAMMING: the independent coding of a

  11. Wafer scale integration of catalyst dots into nonplanar microsystems

    DEFF Research Database (Denmark)

    Gjerde, Kjetil; Kjelstrup-Hansen, Jakob; Gammelgaard, Lauge

    2007-01-01

    In order to successfully integrate bottom-up fabricated nanostructures such as carbon nanotubes or silicon, germanium, or III-V nanowires into microelectromechanical systems on a wafer scale, reliable ways of integrating catalyst dots are needed. Here, four methods for integrating sub-100-nm...... diameter nickel catalyst dots on a wafer scale are presented and compared. Three of the methods are based on a p-Si layer utilized as an in situ mask, an encapsulating layer, and a sacrificial window mask, respectively. All methods enable precise positioning of nickel catalyst dots at the end...

  12. An electron undulating ring for VLSI lithography

    International Nuclear Information System (INIS)

    Tomimasu, T.; Mikado, T.; Noguchi, T.; Sugiyama, S.; Yamazaki, T.

    1985-01-01

    The development of the ETL storage ring ''TERAS'' as an undulating ring has been continued to achieve a wide-area exposure of synchrotron radiation (SR) in VLSI lithography. Stable vertical and horizontal undulating motions of stored beams are demonstrated around a horizontal design orbit of TERAS, using two small steering magnets, of which one is used for vertical undulation and the other for horizontal undulation. Each steering magnet is inserted into one of the periodic configurations of guide field elements. As one useful application of undulating electron beams, a vertically wide exposure of SR has been demonstrated in SR lithography. The maximum vertical deviation from the design orbit occurs near the steering magnet. The maximum vertical tilt angle of the undulating beam near the nodes is about ±2 mrad for a steering magnetic field of 50 gauss. Another proposal is for high-intensity, uniform and wide exposure of SR from a wiggler installed in TERAS, using vertical and horizontal undulating motions of stored beams. A 1.4 m long permanent magnet wiggler has been installed for this purpose this April

  13. Convolving optically addressed VLSI liquid crystal SLM

    Science.gov (United States)

    Jared, David A.; Stirk, Charles W.

    1994-03-01

    We designed, fabricated, and tested an optically addressed spatial light modulator (SLM) that performs a 3 × 3 kernel image convolution using ferroelectric liquid crystal on VLSI technology. The chip contains a 16 × 16 array of current-mirror-based convolvers with a fixed kernel for finding edges. The pixels are located on 75 micron centers, and the modulators are 20 microns on a side. The array successfully enhanced edges in illumination patterns. We developed a high-level simulation tool (CON) for analyzing the performance of convolving SLM designs. CON has a graphical interface and simulates SLM functions using SPICE-like device models. The user specifies the pixel function along with the device parameters and nonuniformities. We discovered through analysis, simulation and experiment that the operation of current-mirror-based convolver pixels is degraded at low light levels by the variation of transistor threshold voltages inherent to CMOS chips. To function acceptably, the test SLM required the input image to have a minimum irradiance of 10 μW/cm². The minimum required irradiance can be further reduced by adding a photodarlington near the photodetector or by increasing the size of the transistors used to calculate the convolution.
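
    The chip's fixed edge-finding kernel is not given in the record; as a software analogue of the pixel-level operation, a 3 × 3 convolution with a standard Laplacian edge kernel is sketched below.

        # Software analogue of the pixel-level operation: a 3 x 3 kernel
        # convolution over an image. The chip's actual fixed edge kernel is
        # not given in the record; a standard Laplacian is assumed here.
        def convolve3x3(img, kernel):
            h, w = len(img), len(img[0])
            out = [[0.0] * w for _ in range(h)]
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                                    for j in range(3) for i in range(3))
            return out

        laplacian = [[0, 1, 0],
                     [1, -4, 1],
                     [0, 1, 0]]

        img = [[0, 0, 1, 1, 0, 0] for _ in range(6)]   # a bright vertical bar
        print(convolve3x3(img, laplacian)[3])   # responses only at the bar edges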

  14. Handbook of VLSI chip design and expert systems

    CERN Document Server

    Schwarz, A F

    1993-01-01

    Handbook of VLSI Chip Design and Expert Systems provides information pertinent to the fundamental aspects of expert systems, which provide a knowledge-based approach to problem solving. This book discusses the use of expert systems in every possible subtask of VLSI chip design as well as in the interrelations between the subtasks. Organized into nine chapters, this book begins with an overview of design automation, which can be identified as Computer-Aided Design of Circuits and Systems (CADCAS). This text then presents the progress in artificial intelligence, with emphasis on expert systems.

  15. Evaluation of scaling concepts for integral system test facilities

    International Nuclear Information System (INIS)

    Condie, K.G.; Larson, T.K.; Davis, C.B.

    1987-01-01

    A study was conducted by EG and G Idaho, Inc., to identify and technically evaluate potential concepts which will allow the U.S. Nuclear Regulatory Commission to maintain the capability to conduct future integral, thermal-hydraulic facility experiments of interest to light water reactor safety. This paper summarizes the methodology used in the study and presents a ranking of each facility concept relative to its ability to simulate phenomena identified as important in selected reactor transients in Babcock and Wilcox and Westinghouse large pressurized water reactors. Established scaling methodologies are used to develop potential concepts for scaled integral thermal-hydraulic experiment facilities. Concepts selected included: full height, full pressure water; reduced height, reduced pressure water; reduced height, full pressure water; one-tenth linear, full pressure water; and reduced height, full scaled pressure Freon. Results from this study suggest that a facility capable of operating at typical reactor operating conditions will scale most phenomena reasonably well. Local heat transfer phenomena are best scaled by the full height facility, while the reduced height facilities provide better scaling where multi-dimensional phenomena are considered important. Although many phenomena in facilities using Freon or water at nontypical pressure will scale reasonably well, those phenomena which are heavily dependent on quality can be distorted. Furthermore, relating data produced in facilities operating with nontypical fluids or at nontypical pressures to large plants will be a difficult and time-consuming process

  16. The Integrative Psychotherapy Alliance: Family, Couple and Individual Therapy Scales.

    Science.gov (United States)

    Pinsof, William M.; Catherall, Donald R.

    1986-01-01

    Presents an integrative definition of the therapeutic alliance that conceptualizes individual, couple and family therapy as occurring within the same systemic framework. The implications of this concept for therapy research are examined. Three new systemically oriented scales to measure the alliance are presented along with some preliminary data…

  17. The Genome-Scale Integrated Networks in Microorganisms

    Directory of Open Access Journals (Sweden)

    Tong Hao

    2018-02-01

    Full Text Available The genome-scale cellular network has become a necessary tool in the systematic analysis of microbes. In a cell there are several layers (i.e., types) of molecular networks, for example, the genome-scale metabolic network (GMN), transcriptional regulatory network (TRN), and signal transduction network (STN). It has been recognized that predictions based on only a single-layer network are limited and inaccurate; therefore, integrated networks constructed from the three types of networks have attracted increasing interest. The function of a biological process in living cells is usually performed by the interaction of biological components, so it is necessary to integrate and analyze all the related components at the systems level to comprehensively and correctly describe physiological function in living organisms. In this review, we discuss three representative genome-scale cellular networks: GMN, TRN, and STN, representing different levels (i.e., metabolism, gene regulation, and cellular signaling) of a cell's activities. Furthermore, we discuss the integration of the three types of networks. With more understanding of the complexity of microbial cells, the development of integrated networks has become an inevitable trend in analyzing genome-scale cellular networks of microorganisms.

  18. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    Science.gov (United States)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach, which uses volumetric radiative properties in the equivalent participating medium, and to the direct discrete-scale approach, which employs the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used to compute the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in reasonable agreement. The scale-coupled approach is fully validated for calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and of very rough to very smooth struts. This new approach reduces computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent features and at the same time the equivalent features in a single integrated simulation. This new approach promises to combine the advantages of the continuous-scale approach (rapid calculations) and the direct discrete-scale approach (accurate prediction of local radiative quantities).

  19. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7 % of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel interpolation or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of our proposed VLSI architecture can support the real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
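
    For reference, HEVC's luma interpolation uses fixed 8-tap filters defined by the standard; a plain software rendering of the half-sample filter is sketched below. The paper's 8-pixel interpolation unit and its pipelined hardware reorganization are not reproduced here.

        # Software rendering of HEVC's 8-tap half-sample luma filter
        # (coefficients fixed by the standard); the paper's 8-pixel
        # interpolation unit and hardware pipeline are not modeled here.
        HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]

        def interp_half_pel(samples, i):
            """Half-sample value between samples[i] and samples[i + 1]."""
            acc = sum(c * samples[i + k - 3] for k, c in enumerate(HALF_PEL_TAPS))
            return (acc + 32) >> 6   # round and normalize by 64

        row = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]   # a linear ramp
        print(interp_half_pel(row, 4))   # 45: midpoint of 40 and 50 (rounded down)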

  20. Numerical analysis of electromigration in thin film VLSI interconnections

    NARCIS (Netherlands)

    Petrescu, V.; Mouthaan, A.J.; Schoenmaker, W.; Angelescu, S.; Vissarion, R.; Dima, G.; Wallinga, Hans; Profirescu, M.D.

    1995-01-01

    Due to the continuing downscaling of the dimensions in VLSI circuits, electromigration is becoming a serious reliability hazard. A software tool based on finite element analysis has been developed to solve the two partial differential equations of the two particle vacancy/imperfection model.

  1. Built-in self-repair of VLSI memories employing neural nets

    Science.gov (United States)

    Mazumder, Pinaki

    1998-10-01

    The decades of the Eighties and the Nineties have witnessed the spectacular growth of VLSI technology, as the chip size has increased from a few hundred devices to a staggering multi-million transistors. This trend is expected to continue as the CMOS feature size progresses towards the nanometric dimension of 100 nm and less. The SIA roadmap projects that, whereas DRAM chips will integrate over 20 billion devices in the next millennium, future microprocessors may incorporate over 100 million transistors on a single chip. As VLSI chip size increases, the limited accessibility of circuit components poses great difficulty for external diagnosis and replacement in the presence of faulty components. For this reason, extensive work has been done on built-in self-test techniques, but little research is known concerning built-in self-repair. Moreover, the extra hardware introduced by conventional fault-tolerance techniques is also likely to become faulty, therefore causing the circuit to be useless. This research demonstrates the feasibility of implementing electronic neural networks as intelligent hardware for memory array repair. Most importantly, we show that the neural network control possesses a robust and degradable computing capability under various fault conditions. Overall, a yield analysis performed on 64K DRAMs shows that the yield can be improved from as low as 20 percent to near 99 percent due to the self-repair design, with an overhead of no more than 7 percent.

  2. Multidimensional quantum entanglement with large-scale integrated optics

    DEFF Research Database (Denmark)

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong

    2018-01-01

    The ability to control multidimensional quantum systems is key for the investigation of fundamental science and for the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimension up to 15 × 15 on a large-scale silicon-photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality...

  3. Supply chain integration scales validation and benchmark values

    Directory of Open Access Journals (Sweden)

    Juan A. Marin-Garcia

    2013-06-01

    Full Text Available Purpose: The clarification of the constructs of supply chain integration (clients, suppliers, external and internal), the creation of a measurement instrument based on a list of items taken from earlier papers, the validation of these scales, and a preliminary benchmark to interpret the scales by percentiles based on a set of control variables (size of the plant, country, sector and degree of vertical integration). Design/methodology/approach: Our empirical analysis is based on the HPM project database (2005-2007 timeframe). The international sample is made up of 266 plants across ten countries: Austria, Canada, Finland, Germany, Italy, Japan, Korea, Spain, Sweden and the USA. In each country we analyzed the descriptive statistics, internal consistency testing to purify the items (inter-item correlations, Cronbach's alpha, squared multiple correlation, corrected item-total correlation), exploratory factor analysis and, finally, a confirmatory factor analysis to check the convergent and discriminant validity of the scales. The analyses were done with the SPSS and EQS programs using the maximum likelihood parameter estimation method. Findings: The four proposed scales show excellent psychometric properties. Research limitations/implications: with a clearer and more concise designation of the supply chain integration measurement scales, more reliable and accurate data could be taken to analyse the relations between these constructs and other variables of interest to the academic fields. Practical implications: providing scales that are valid as a diagnostic tool for best practices, as well as providing a benchmark with which to compare the score for each individual plant against a collection of industrial companies from the machinery, electronics and transportation sectors. Originality/value: supply chain integration may be a major factor in explaining the performance of companies. The results are nevertheless inconclusive, the vast range
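
    One step of the purification procedure listed above, Cronbach's alpha, has a standard closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal computation is sketched below; the Likert data are made up for illustration.

        # Cronbach's alpha for a k-item scale:
        #   alpha = k / (k - 1) * (1 - sum(item variances) / var(total score))
        # Item data below are made up for illustration.
        def cronbach_alpha(items):
            k, n = len(items), len(items[0])
            def var(xs):
                m = sum(xs) / len(xs)
                return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
            totals = [sum(item[r] for item in items) for r in range(n)]
            return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

        # Four items (rows) answered by six respondents (columns), 1-5 Likert.
        items = [[4, 5, 3, 4, 2, 5],
                 [4, 4, 3, 5, 2, 4],
                 [5, 5, 2, 4, 1, 5],
                 [3, 4, 3, 4, 2, 4]]
        print(round(cronbach_alpha(items), 3))   # about 0.93: items move together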

  4. HMC algorithm with multiple time scale integration and mass preconditioning

    Science.gov (United States)

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
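
    A minimal sketch of the two-level (Sexton-Weingarten) leapfrog that underlies multiple time scale integration: the expensive force (standing in for the mass-preconditioned fermion force) is applied on the coarse step, the cheap force (standing in for the gauge force) on a finer inner step. The toy forces, step counts and step size below are illustrative assumptions, not the paper's lattice setup.

        # Two-level leapfrog with multiple time scales (Sexton-Weingarten
        # style): the expensive "slow" force is applied on the coarse step,
        # the cheap "fast" force on a finer inner step. Toy forces only.
        def leapfrog_two_scale(q, p, f_slow, f_fast, tau, n_outer, n_inner):
            dt = tau / n_outer
            for _ in range(n_outer):
                p += 0.5 * dt * f_slow(q)        # half kick: slow force
                ddt = dt / n_inner
                for _ in range(n_inner):         # inner leapfrog: fast force
                    p += 0.5 * ddt * f_fast(q)
                    q += ddt * p
                    p += 0.5 * ddt * f_fast(q)
                p += 0.5 * dt * f_slow(q)        # half kick: slow force
            return q, p

        # Split harmonic oscillator: the stiff part lives on the fine scale.
        f_fast = lambda q: -100.0 * q            # stiff, cheap to evaluate
        f_slow = lambda q: -1.0 * q              # soft, "expensive" stand-in
        q, p = leapfrog_two_scale(1.0, 0.0, f_slow, f_fast,
                                  tau=1.0, n_outer=10, n_inner=20)
        print(q, p)   # H = p**2/2 + 101*q**2/2 stays near its initial 50.5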

  5. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik

    2013-01-01

    with this imbalance and to reduce its high dependence on oil production. For this reason, it is interesting to analyse the extent to which transport electrification can further the renewable energy integration. This paper quantifies this issue in Inner Mongolia, where the share of wind power in the electricity supply...... was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power...... integration by 8%. The application of EVs benefits from saving both energy system cost and fuel cost. However, the negative consequences of decreasing energy system efficiency and increasing the CO2 emission should be noted when applying the hydrogen fuel cell vehicle (HFCV). The results also indicate...

  6. Pursuit, Avoidance, and Cohesion in Flight: Multi-Purpose Control Laws and Neuromorphic VLSI

    Science.gov (United States)

    2010-10-01

    spatial navigation in mammals. We have designed, fabricated, and are now testing a neuromorphic VLSI chip that implements a spike-based, attractor...implementations (custom neuromorphic VLSI and robotics) we will apply important practical constraints that can lead to deeper insight into how and why efficient

  7. Integrative taxonomy for continental-scale terrestrial insect observations.

    Directory of Open Access Journals (Sweden)

    Cara M Gibson

    Full Text Available Although 21st century ecology uses unprecedented technology at the largest spatio-temporal scales in history, the data remain reliant on sound taxonomic practices that derive from 18th century science. The importance of accurate species identifications has been assessed repeatedly and in instances where inappropriate assignments have been made there have been costly consequences. The National Ecological Observatory Network (NEON) will use a standardized system based upon an integrative taxonomic foundation to conduct observations of the focal terrestrial insect taxa, ground beetles and mosquitoes, at the continental scale for a 30 year monitoring program. The use of molecular data for continental-scale, multi-decadal research conducted by a geographically widely distributed set of researchers has not been evaluated until this point. The current paper addresses the development of a reference library for verifying species identifications at NEON and the key ways in which this resource will enhance a variety of user communities.

  8. Integrative Taxonomy for Continental-Scale Terrestrial Insect Observations

    Science.gov (United States)

    Gibson, Cara M.; Kao, Rebecca H.; Blevins, Kali K.; Travers, Patrick D.

    2012-01-01

    Although 21st century ecology uses unprecedented technology at the largest spatio-temporal scales in history, the data remain reliant on sound taxonomic practices that derive from 18th century science. The importance of accurate species identifications has been assessed repeatedly and in instances where inappropriate assignments have been made there have been costly consequences. The National Ecological Observatory Network (NEON) will use a standardized system based upon an integrative taxonomic foundation to conduct observations of the focal terrestrial insect taxa, ground beetles and mosquitoes, at the continental scale for a 30 year monitoring program. The use of molecular data for continental-scale, multi-decadal research conducted by a geographically widely distributed set of researchers has not been evaluated until this point. The current paper addresses the development of a reference library for verifying species identifications at NEON and the key ways in which this resource will enhance a variety of user communities. PMID:22666362

  9. Scaling analysis for a Savannah River reactor scaled model integral system

    International Nuclear Information System (INIS)

    Boucher, T.J.; Larson, T.K.; McCreery, G.E.; Anderson, J.L.

    1990-11-01

    The Savannah River Laboratory has requested that the Idaho National Engineering Laboratory perform an analysis to help define, examine, and assess potential concepts for the design of a scaled integral hydraulics test facility representative of the current Savannah River Plant reactor design. In this report the thermal-hydraulic phenomena of importance to reactor safety during the design basis loss-of-coolant accident (based on the knowledge and experience of the authors and the results of the joint INEL/TPG/SRL phenomena identification and ranking effort) were examined and identified. Established scaling methodologies were used to develop potential concepts for integral hydraulic testing facilities. Analysis was conducted to examine the scaling of various phenomena in each of the selected concepts. Results generally support that a one-fourth (1/4) linear scale visual facility capable of operating at pressures up to 350 kPa (51 psia) and temperatures up to 330 K (134 °F) will scale most hydraulic phenomena reasonably well. However, additional research will be necessary to determine the most appropriate method of simulating several of the reactor components, since the scaling methodology allows for several approaches which may only be assessed via appropriate research. 34 refs., 20 figs., 14 tabs

  10. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  11. Embedded Processor Based Automatic Temperature Control of VLSI Chips

    Directory of Open Access Journals (Sweden)

    Narasimha Murthy Yayavaram

    2009-01-01

    Full Text Available This paper presents embedded processor based automatic temperature control of VLSI chips, using the temperature sensor LM35 and the ARM processor LPC2378. Due to their very high packing density, VLSI chips heat up quickly and, if not cooled properly, their performance degrades considerably. In the present work, the sensor, which is kept in close proximity to the IC, senses the temperature, and the speed of a fan mounted near the IC is controlled by a PWM signal generated by the ARM processor. A buzzer is also provided in the hardware to indicate either failure of the fan or overheating of the IC. The entire process is achieved by a suitable embedded C program.
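
    The record's implementation is an embedded C program on the LPC2378; the sketch below mimics only the described control logic, with assumed temperature thresholds and an assumed linear duty-cycle mapping.

        # Logic-only sketch of the described control loop: map the sensed
        # temperature to a fan PWM duty cycle and sound the buzzer on
        # overheating. Thresholds and the linear mapping are assumptions;
        # the record's implementation is embedded C on the LPC2378.
        T_MIN, T_MAX, T_ALARM = 30.0, 70.0, 85.0   # degrees C (illustrative)

        def duty_cycle(temp_c):
            """Fan PWM duty in [0, 1]: off below T_MIN, full speed at T_MAX."""
            if temp_c <= T_MIN:
                return 0.0
            return min((temp_c - T_MIN) / (T_MAX - T_MIN), 1.0)

        def control_step(temp_c):
            buzzer = temp_c >= T_ALARM   # overheating despite the fan
            return duty_cycle(temp_c), buzzer

        for t in (25, 45, 70, 90):
            print(t, control_step(t))   # duty rises with temperature; buzzer at 90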

  12. Integration test of ITER full-scale vacuum vessel sector

    International Nuclear Information System (INIS)

    Nakahira, M.; Koizumi, K.; Oka, K.

    2001-01-01

    The full-scale Sector Model Project, which was initiated in 1995 as one of the Large Seven R and D Projects, completed all R and D activities planned in the ITER-EDA period with the joint effort of the ITER Joint Central Team (JCT), the Japanese, the Russian Federation (RF) and the United States (US) Home Teams. The fabrication of a full-scale 18° toroidal sector, which is composed of two 9° sectors spliced at the port center, was successfully completed in September 1997 with a dimensional accuracy of ±3 mm for the total height and total width. Both sectors were shipped to the test site in JAERI and the integration test began in October 1997. The integration test involves the adjustment of field joints, automatic Narrow Gap Tungsten Inert Gas (NG-TIG) welding of field joints with splice plates, and inspection of the joints by ultrasonic testing (UT), which are required for the initial assembly of the ITER vacuum vessel. This first demonstration of field joint welding and the performance test on the mechanical characteristics were completed in May 1998, and all the results obtained satisfied the ITER design. In addition to these tests, the integration with the mid-plane port extension fabricated by the Russian Home Team, and the cutting and re-welding test of field joints using the fully remotized welding and cutting system developed by the US Home Team, are planned as post-EDA activities. (author)

  13. Integration test of ITER full-scale vacuum vessel sector

    International Nuclear Information System (INIS)

    Nakahira, M.; Koizumi, K.; Oka, K.

    1999-01-01

    The full-scale Sector Model Project, which was initiated in 1995 as one of the Large Seven ITER R and D Projects, completed all R and D activities planned in the ITER-EDA period with the joint effort of the ITER Joint Central Team (JCT), the Japanese, the Russian Federation (RF) and the United States (US) Home Teams. The fabrication of a full-scale 18° toroidal sector, which is composed of two 9° sectors spliced at the port center, was successfully completed in September 1997 with a dimensional accuracy of ±3 mm for the total height and total width. Both sectors were shipped to the test site in JAERI and the integration test began in October 1997. The integration test involves the adjustment of field joints, automatic Narrow Gap Tungsten Inert Gas (NG-TIG) welding of field joints with splice plates, and inspection of the joints by ultrasonic testing (UT), which are required for the initial assembly of the ITER vacuum vessel. This first demonstration of field joint welding and the performance test on the mechanical characteristics were completed in May 1998, and all the results obtained satisfied the ITER design. In addition to these tests, the integration with the mid-plane port extension fabricated by the Russian Home Team, and the cutting and re-welding test of field joints using the fully remotized welding and cutting system developed by the US Home Team, are planned as post-EDA activities. (author)

  14. Problems of large-scale vertically-integrated aquaculture

    Energy Technology Data Exchange (ETDEWEB)

    Webber, H H; Riordan, P F

    1976-01-01

    The problems of vertically-integrated aquaculture are outlined; they are concerned with: species limitations (in the market, biological and technological); site selection, feed, manpower needs, and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high-risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sectors.

  15. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  16. The AMchip: A VLSI associative memory for track finding

    International Nuclear Information System (INIS)

    Morsani, F.; Galeotti, S.; Passuello, D.; Amendolia, S.R.; Ristori, L.; Turini, N.

    1992-01-01

    An associative memory to be used for super-fast track finding in future high energy physics experiments has been implemented on silicon as a full-custom CMOS VLSI chip (the AMchip). The first prototype has been designed and successfully tested at INFN in Pisa. It is implemented in 1.6 μm, double metal, silicon gate CMOS technology and contains about 140 000 MOS transistors on a 1 × 1 cm² silicon chip. (orig.)

  17. Drift chamber tracking with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-10-01

    We have tested a commercial analog VLSI neural network chip for finding, in real time, the intercept and slope of charged particles traversing a drift chamber. Voltages proportional to the drift times were input to the Intel ETANN chip and the outputs were recorded and later compared offline to conventional track fits. We discuss the chamber and test setup, the chip specifications, and results of recent tests. We also briefly discuss possible applications in high energy physics detector triggers

  18. Using Software Technology to Specify Abstract Interfaces in VLSI Design.

    Science.gov (United States)

    1985-01-01

    with the complexity levels inherent in VLSI design, in that they can capitalize on their foundations in discrete mathematics and the theory of...basis, rather than globally. Such a partitioning of module semantics makes the specification easier to construct and verify intellectually; it also...access function definitions. A standard language improves executability characteristics by capitalizing on portable, optimized system software developed

  19. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    Science.gov (United States)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  20. Multidimensional quantum entanglement with large-scale integrated optics.

    Science.gov (United States)

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G

    2018-04-20

    The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  1. Challenges and options for large scale integration of wind power

    International Nuclear Information System (INIS)

    Tande, John Olav Giaever

    2006-01-01

    Challenges and options for large scale integration of wind power are examined. Immediate challenges are related to weak grids. Assessment of system stability requires numerical simulation. Models are being developed - validation is essential. Coordination of wind and hydro generation is a key to allowing more wind power capacity in areas with limited transmission corridors. For the case study grid, depending on technology and control, the allowed wind farm size is increased from 50 to 200 MW. The real life example from 8 January 2005 demonstrates that existing market-based mechanisms can handle large amounts of wind power. In wind integration studies it is essential to take account of the controllability of modern wind farms, the power system flexibility and the smoothing effect of geographically dispersed wind farms. Modern wind farms contribute to system adequacy - combining wind and hydro constitutes a win-win system (ml)

  2. Integral criteria for large-scale multiple fingerprint solutions

    Science.gov (United States)

    Ushmaev, Oleg S.; Novikov, Sergey O.

    2004-08-01

    We propose the definition and analysis of the optimal integral similarity score criterion for large scale multimodal civil ID systems. Firstly, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. Then we carry out the analysis of simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. The explicit and approximate relations for the optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of a real multiple fingerprint test show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
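
    A toy rendering of the criterion's operational content: for a fused (integral) similarity score, choose the decision threshold that minimizes the FRR subject to a predefined FAR. The fusion rule (a plain sum of per-finger scores) and the sample scores below are illustrative assumptions, not the paper's explicit relations.

        # Toy version of the optimization behind an integral similarity
        # score: choose the threshold on a fused score that minimizes FRR
        # subject to a predefined FAR. Fusion rule and data are illustrative.
        def far_frr(thr, genuine, impostor):
            far = sum(s >= thr for s in impostor) / len(impostor)
            frr = sum(s < thr for s in genuine) / len(genuine)
            return far, frr

        def best_threshold(genuine, impostor, far_budget):
            # scan candidate thresholds; keep the lowest FRR within budget
            feasible = [(far_frr(t, genuine, impostor)[1], t)
                        for t in sorted(set(genuine) | set(impostor))
                        if far_frr(t, genuine, impostor)[0] <= far_budget]
            return min(feasible)   # (FRR, threshold)

        # Integral scores: sum of two per-finger match scores per attempt.
        genuine = [1.7, 1.5, 1.85, 1.3]
        impostor = [0.5, 0.7, 1.1, 0.5]
        print(best_threshold(genuine, impostor, far_budget=0.0))   # (0.0, 1.3)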

  3. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-01-01

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193

  4. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm.

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-08-13

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.
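
    The detection stage in both records rests on the discrete nonlinear energy operator, psi(n) = x(n)² - x(n-1)·x(n+1). A minimal software rendering of NEO-based spike detection follows; the scaled-mean threshold rule and the constant C = 8 are assumptions of this sketch, and the GHA feature-extraction stage is omitted.

        # Minimal software sketch of NEO-based spike detection:
        #   psi(n) = x(n)**2 - x(n-1) * x(n+1), thresholded.
        # The scaled-mean threshold (C = 8) is an assumption of this sketch;
        # the GHA feature-extraction stage is omitted.
        def neo(x):
            return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

        def detect_spikes(x, c=8.0):
            psi = neo(x)
            thr = c * sum(psi) / len(psi)   # threshold on mean NEO energy
            return [n + 1 for n, e in enumerate(psi) if e > thr]

        # Flat noise with one sharp biphasic "spike" at samples 9-10.
        x = [0.1, -0.1, 0.05, -0.05, 0.1, 0.0, -0.1, 0.05, 0.0, 2.0, -1.5,
             0.1, -0.05, 0.0, 0.1, -0.1]
        print(detect_spikes(x))   # [9]: the spike peak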

  5. Radiation hardness tests with a demonstrator preamplifier circuit manufactured in silicon on sapphire (SOS) VLSI technology

    International Nuclear Information System (INIS)

    Bingefors, N.; Ekeloef, T.; Eriksson, C.; Paulsson, M.; Moerk, G.; Sjoelund, A.

    1992-01-01

    Samples of the preamplifier circuit, as well as of separate n and p channel transistors of the type contained in the circuit, were irradiated with gammas from a 60Co source up to an integrated dose of 3 Mrad (30 kGy). The VLSI manufacturing technology used is the SOS4 process of ABB Hafo. A first analysis of the tests shows that the performance of the amplifier remains practically unaffected by the radiation for total doses up to 1 Mrad. At higher doses, up to 3 Mrad, the circuit amplification factor decreases by a factor between 4 and 5, whereas the output noise level remains unchanged. It is argued that it may be possible to reduce this decrease in amplification factor in the future by further optimizing the amplifier circuit design. (orig.)

  6. Operation of a Fast-RICH Prototype with VLSI readout electronics

    Energy Technology Data Exchange (ETDEWEB)

    Guyonnet, J.L. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Arnold, R. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Jobez, J.P. (Coll. de France, 75 - Paris (France)); Seguinot, J. (Coll. de France, 75 - Paris (France)); Ypsilantis, T. (Coll. de France, 75 - Paris (France)); Chesi, E. (CERN / ECP Div., Geneve (Switzerland)); Racz, A. (CERN / ECP Div., Geneve (Switzerland)); Egger, J. (Paul Scherrer Inst., Villigen (Switzerland)); Gabathuler, K. (Paul Scherrer Inst., Villigen (Switzerland)); Joram, C. (Karlsruhe Univ. (Germany)); Adachi, I. (KEK, Tsukuba (Japan)); Enomoto, R. (KEK, Tsukuba (Japan)); Sumiyoshi, T. (KEK, Tsukuba (Japan))

    1994-04-01

    We discuss the first test results, obtained with cosmic rays, of a full-scale Fast-RICH prototype with proximity-focused 10 mm thick LiF (CaF₂) solid radiators, TEA as photosensor in CH₄, and readout of 12 × 10³ cathode pads (5.334 × 6.604 mm²) using dedicated VLSI electronics we have developed. The number of detected photoelectrons is 7.7 (6.9) for the CaF₂ (LiF) radiator, very near the expected values of 6.4 (7.5) from Monte Carlo simulations. The single-photon Cherenkov angle resolution σ_θ

  7. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    Science.gov (United States)

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction, to reduce the dimensionality of the input space, and classification, based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) algorithm and the Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights
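
    For reference, the LMS rule the paper implements on-chip has the familiar form w ← w + μ·e·x. A toy software version of an adaptive linear combiner trained with LMS follows; the learning rate and data are illustrative, and the paper's analog circuits and device-mismatch effects are not modeled.

        # Toy LMS adaptive linear combiner: w <- w + mu * e * x, e = d - w.x.
        # Learning rate and data are illustrative; the paper's analog
        # on-chip learning and device-mismatch effects are not modeled.
        def lms_train(samples, mu=0.05, epochs=50):
            w = [0.0] * len(samples[0][0])
            for _ in range(epochs):
                for x, d in samples:
                    e = d - sum(wi * xi for wi, xi in zip(w, x))   # error
                    w = [wi + mu * e * xi for wi, xi in zip(w, x)]
            return w

        # Target combiner d = 2*x0 - x1, to be recovered from the data.
        samples = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0),
                   ((1.0, 1.0), 1.0), ((2.0, 1.0), 3.0)]
        print([round(wi, 3) for wi in lms_train(samples)])   # near [2.0, -1.0]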

  8. UK Environmental Prediction - integration and evaluation at the convective scale

    Science.gov (United States)

    Fallmann, Joachim; Lewis, Huw; Castillo, Juan Manuel; Pearson, David; Harris, Chris; Saulter, Andy; Bricheno, Lucy; Blyth, Eleanor

    2016-04-01

    Traditionally, the regional ocean, wave and atmosphere components of the Earth System have been simulated separately, with some information on other components provided by means of boundary or forcing conditions. More recently, the potential value of a more integrated approach, as required for global climate and Earth System prediction, has begun to attract increasing research effort for regional short-term applications. In the UK, this activity is motivated by an understanding that accurate prediction and warning of the impacts of severe weather requires an integrated approach to forecasting. The substantial impacts of such events on individuals, businesses and infrastructure indicate a pressing need to better understand the value that might be delivered through more integrated environmental prediction. To address this need, the Met Office, NERC Centre for Ecology & Hydrology and NERC National Oceanography Centre have begun to develop the foundations of a coupled high resolution probabilistic forecast system for the UK at km-scale. This links together existing model components of the atmosphere, coastal ocean, land surface and hydrology. Our initial focus has been on a 2-year Prototype project to demonstrate the UK coupled prediction concept in research mode. This presentation will provide an update on UK environmental prediction activities. We will present results from the initial implementation of an atmosphere-land-ocean coupled system, including a new eddy-permitting resolution ocean component, and discuss progress and initial results from further development to integrate wave interactions in this relatively high resolution system. We will discuss future directions and opportunities for collaboration in environmental prediction, and the challenges to realising the potential of integrated regional coupled forecasting for improving predictions and applications.

  9. Las Vegas is better than determinism in VLSI and distributed computing

    DEFF Research Database (Denmark)

    Mehlhorn, Kurt; Schmidt, Erik Meineche

    1982-01-01

    In this paper we describe a new method for proving lower bounds on the complexity of VLSI - computations and more generally distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (ac...

  10. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    Science.gov (United States)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution

  11. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    International Nuclear Information System (INIS)

    Dednam, W; Botha, A E

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution
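
    The two computational routes compared in these two records correspond to two equivalent expressions for the Kirkwood-Buff integral G_ij. For orientation, in standard open-system form they read (textbook definitions, not quoted from the records):

        % Route 1: running integral over the radial distribution function
        \[
          G_{ij} \;=\; 4\pi \int_0^{\infty} \bigl[\, g_{ij}(r) - 1 \,\bigr]\, r^2 \, dr
        \]
        % Route 2: particle-number fluctuations in an open sub-volume V,
        % which finite size scaling extrapolates from small embedded cells
        \[
          G_{ij} \;=\; V\,\frac{\langle N_i N_j\rangle - \langle N_i\rangle\langle N_j\rangle}
                             {\langle N_i\rangle\,\langle N_j\rangle}
                 \;-\; \frac{\delta_{ij}\, V}{\langle N_i\rangle}
        \]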

  12. Argentinean integrated small reactor design and scale economy analysis of integrated reactor

    International Nuclear Information System (INIS)

    Florido, P. C.; Bergallo, J. E.; Ishida, M. V.

    2000-01-01

    This paper describes the design of CAREM, the Argentinean integrated small reactor project, and the results of a scale economy analysis of integrated reactors. The CAREM project consists of the development, design and construction of a small nuclear power plant. CAREM is an advanced reactor conceived with new-generation design solutions and building on the extensive experience accumulated in the safe operation of Light Water Reactors. CAREM is an indirect-cycle reactor with some distinctive characteristic features that greatly simplify the reactor and also contribute to a high level of safety: an integrated primary cooling system, self-pressurization, primary cooling by natural circulation, and safety systems relying on passive features. For a fully coupled economic evaluation of integrated reactors done by the IREP (Integrated Reactor Evaluation Program) code transferred to the IAEA, CAREM has been used as a reference point. The results show that integrated reactors become competitive at power outputs larger than 200 MWe with the cheapest Argentinean electricity option. Due to reactor pressure vessel construction limits, low-pressure-drop steam generators are used to reach a power output of 200 MWe with natural circulation. With forced circulation, 300 MWe can be achieved. (author)

  13. Fuel pin integrity assessment under large scale transients

    International Nuclear Information System (INIS)

    Dutta, B.K.

    2006-01-01

    The integrity of fuel rods under normal, abnormal and accident conditions is an important consideration during fuel design of advanced nuclear reactors. The fuel matrix and the sheath form the first barrier to prevent the release of radioactive materials into the primary coolant. An understanding of fuel and clad behaviour under different reactor conditions, particularly under the beyond-design-basis accident scenario leading to large scale transients, is always desirable to assess the inherent safety margins in fuel pin design and to plan for the mitigation of the consequences of accidents, if any. Severe accident conditions are typically characterized by energy deposition rates far exceeding the heat removal capability of the reactor coolant system. This may lead to clad failure due to fission gas pressure at high temperature, large-scale pellet-clad interaction and clad melting. The fuel rod performance is affected by many interdependent complex phenomena involving extremely complex material behaviour. The versatile experimental database available in this area has led to the development of powerful analytical tools to characterize fuel under extreme scenarios

  14. Properties Important To Mixing For WTP Large Scale Integrated Testing

    International Nuclear Information System (INIS)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-01-01

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  15. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    Energy Technology Data Exchange (ETDEWEB)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  16. N-Point DCT VLSI Architecture for Emerging HEVC Standard

    OpenAIRE

    Ahmed, Ashfaq; Shahid, Muhammad Usman; Rehman, Ata ur

    2012-01-01

    This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4 × 4 up to 32 × 32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to gain speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into ...

  17. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  18. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available Advanced encryption standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of the AES algorithm using the key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architecture offers superior performance compared to the existing VLSI architectures in terms of power, throughput and critical path delay.
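
    For context, the datapath being optimized here is the standard FIPS-197 key schedule. Below is a plain software reference of the AES-256 key expansion (a sketch of the standard algorithm, not the paper's VLSI architecture), with the S-box generated from the GF(2^8) inverse and affine map rather than stored as a table.

```python
def _build_sbox():
    # exp/log tables over GF(2^8) with generator 0x03, AES polynomial 0x11B
    exp, log = [0] * 256, [0] * 256
    x = 1
    for i in range(255):
        exp[i], log[x] = x, i
        x ^= ((x << 1) ^ 0x11B) if x & 0x80 else (x << 1)  # x *= 3 in GF(2^8)
    def rotl8(b, n):
        return ((b << n) | (b >> (8 - n))) & 0xFF
    sbox = []
    for b in range(256):
        inv = 0 if b == 0 else exp[(255 - log[b]) % 255]   # multiplicative inverse
        sbox.append(inv ^ rotl8(inv, 1) ^ rotl8(inv, 2)
                    ^ rotl8(inv, 3) ^ rotl8(inv, 4) ^ 0x63)  # affine map
    return sbox

SBOX = _build_sbox()

def expand_key_256(key):
    """FIPS-197 AES-256 key expansion: 32-byte key -> 60 four-byte words."""
    assert len(key) == 32
    w = [list(key[4 * i:4 * i + 4]) for i in range(8)]
    rcon = 1
    for i in range(8, 60):
        t = list(w[i - 1])
        if i % 8 == 0:
            t = t[1:] + t[:1]                 # RotWord
            t = [SBOX[b] for b in t]          # SubWord
            t[0] ^= rcon                      # round constant
            rcon = ((rcon << 1) ^ 0x11B) if rcon & 0x80 else (rcon << 1)
        elif i % 8 == 4:
            t = [SBOX[b] for b in t]          # extra SubWord for 256-bit keys
        w.append([a ^ b for a, b in zip(w[i - 8], t)])
    return w

print(len(expand_key_256(bytes(range(32)))))  # 60 words = 15 round keys
```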

  19. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion, to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of several challenges that affect electric grid reliability and economic operation. These challenges are: 1. Congestion of transmission lines, 2. Transmission lines expansion, 3. Large-scale wind energy integration, and 4. Phasor Measurement Units (PMUs) optimal placement for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate expansion of transmission line capacity against methods to ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, the congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "Where" to add, "How much transmission line capacity" to add, and "At which voltage level". Because of electric grid deregulation, transmission lines expansion is more complicated as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve the transmission system congestion, create

  20. Opto-VLSI-based reconfigurable free-space optical interconnects architecture

    DEFF Research Database (Denmark)

    Aljada, Muhsen; Alameh, Kamal; Chung, Il-Sug

    2007-01-01

    is the Opto-VLSI processor, which can be driven by digital phase steering and multicasting holograms that reconfigure the optical interconnects between the input and output ports. The optical interconnects architecture is experimentally demonstrated at 2.5 Gbps using a high-speed 1×3 VCSEL array and 1×3 photoreceiver array in conjunction with two 1×4096 pixel Opto-VLSI processors. The minimisation of the crosstalk between the output ports is achieved by appropriately aligning the VCSEL and PD elements with respect to the Opto-VLSI processors and driving the latter with optimal steering phase holograms.

  1. VLSI architecture and design for the Fermat Number Transform implementation

    Energy Technology Data Exchange (ETDEWEB)

    Pajayakrit, A.

    1987-01-01

    A new technique of sectioning a pipelined transformer, using the Fermat Number Transform (FNT), is introduced. Also, a novel VLSI design which overcomes the problems of implementing FNTs, for use in fast convolution/correlation, is described. The design comprises one complete section of a pipelined transformer and may be programmed to function at any point in a forward or inverse pipeline, allowing the construction of a pipelined convolver or correlator using identical chips; thus the favorable properties of the transform can be exploited. This overcomes the difficulty of fitting a complete pipeline onto one chip without resorting to the use of several different designs. The implementation of a high-speed convolver/correlator using the VLSI chips has been successfully developed and tested. For impulse response lengths of up to 16 points, sampling rates of 0.5 MHz can be achieved. Finally, the filter speed performance using the FNT chips is compared to other designs and conclusions are drawn on the merits of the FNT for this application. Also, the advantages and limitations of the FNT are analyzed, with respect to the more conventional FFT, and the results are provided.
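
    The transform itself is easy to model in software. The sketch below (ours, not the chip's pipeline) performs cyclic convolution via a length-N transform modulo the Fermat prime F4 = 2^16 + 1; because the root of unity is a power of two, the twiddle multiplications reduce to bit shifts in hardware, which is the property the FNT exploits.

```python
P = 2**16 + 1          # Fermat prime F4; 2 has multiplicative order 32 mod P

def fnt(x, root):
    """Direct O(n^2) number-theoretic transform mod P (a software model)."""
    n = len(x)
    return [sum(x[i] * pow(root, i * k, P) for i in range(n)) % P
            for k in range(n)]

def fnt_cyclic_convolution(a, b):
    n = len(a)
    assert len(b) == n and 32 % n == 0, "need n dividing 32 for root = power of 2"
    root = pow(2, 32 // n, P)                  # n-th root of unity, a power of 2
    A, B = fnt(a, root), fnt(b, root)
    C = [(x * y) % P for x, y in zip(A, B)]
    inv_root, inv_n = pow(root, P - 2, P), pow(n, P - 2, P)
    return [(v * inv_n) % P for v in fnt(C, inv_root)]

# e.g. a 16-point cyclic convolution; the result is exact (no round-off)
# as long as the true convolution values stay below P:
a = [1, 2, 3, 4] + [0] * 12
b = [5, 6, 7, 8] + [0] * 12
print(fnt_cyclic_convolution(a, b))  # [5, 16, 34, 60, 61, 52, 32, 0, ...]
```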

  2. Design of two easily-testable VLSI array multipliers

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, J.; Shen, J.P.

    1983-01-01

    Array multipliers are well-suited to VLSI implementation because of the regularity in their iterative structure. However, most VLSI circuits are very difficult to test. This paper shows that, with appropriate cell design, array multipliers can be designed to be very easily testable. An array multiplier is called c-testable if all its adder cells can be exhaustively tested while requiring only a constant number of test patterns. The testability of two well-known array multiplier structures is studied. The conventional design of the carry-save array multiplier is shown to be not c-testable. However, a modified design, using a modified adder cell, is generated and shown to be c-testable, requiring only 16 test patterns. Similar results are obtained for the Baugh-Wooley two's complement array multiplier. A modified design of the Baugh-Wooley array multiplier is shown to be c-testable, requiring 55 test patterns. The implementation of a practical c-testable 16×16 array multiplier is also presented. 10 references.
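
    The idea behind c-testability is easy to see at the cell level: a full adder has three inputs, so eight vectors exercise it exhaustively, and a c-testable array arranges its inputs so that all cells receive such vectors simultaneously. A toy illustration (ours, not the paper's construction):

```python
from itertools import product

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def test_cell_exhaustively(cell):
    # the 8 patterns that test a 3-input adder cell exhaustively; in a
    # c-testable array all cells see these patterns at once, so the total
    # pattern count stays constant regardless of the multiplier size
    for a, b, cin in product((0, 1), repeat=3):
        s, cout = cell(a, b, cin)
        assert s == (a + b + cin) % 2 and cout == (a + b + cin) // 2
    return True

print(test_cell_exhaustively(full_adder))  # True: all 8 patterns pass
```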

  3. Electricity prices, large-scale renewable integration, and policy implications

    International Nuclear Information System (INIS)

    Kyritsis, Evangelos; Andersson, Jonas; Serletis, Apostolos

    2017-01-01

    This paper investigates the effects of intermittent solar and wind power generation on electricity price formation in Germany. We use daily data from 2010 to 2015, a period with profound modifications in the German electricity market, the most notable being the rapid integration of photovoltaic and wind power sources, as well as the phasing out of nuclear energy. In the context of a GARCH-in-Mean model, we show that both solar and wind power Granger cause electricity prices, that solar power generation reduces the volatility of electricity prices by scaling down the use of peak-load power plants, and that wind power generation increases the volatility of electricity prices by challenging electricity market flexibility. - Highlights: • We model the impact of solar and wind power generation on day-ahead electricity prices. • We discuss the different nature of renewables in relation to market design. • We explore the impact of renewables on the distributional properties of electricity prices. • Solar and wind reduce electricity prices but affect price volatility in the opposite way. • Solar decreases the probability of electricity price spikes, while wind increases it.
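
    To make the model class concrete, the following simulates a generic GARCH(1,1)-in-mean process, in which the conditional variance h_t feeds back into the conditional mean; this feedback is the channel linking the level and the volatility of prices. A sketch of the model class only; the parameter values are arbitrary and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
mu, lam = 40.0, 0.5                   # base price level, in-mean loading
omega, alpha, beta = 0.2, 0.1, 0.85   # GARCH(1,1) parameters (alpha+beta < 1)

h, p = np.empty(T), np.empty(T)
h[0] = omega / (1 - alpha - beta)     # unconditional variance as start value
eps_prev = 0.0
for t in range(T):
    if t > 0:
        h[t] = omega + alpha * eps_prev**2 + beta * h[t - 1]
    eps_prev = np.sqrt(h[t]) * rng.standard_normal()
    p[t] = mu + lam * h[t] + eps_prev  # "in-mean" term lam*h_t shifts the level

print(p[:5].round(2), h.mean().round(3))
```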

  4. Custom VLSI circuits for high energy physics

    International Nuclear Information System (INIS)

    Parker, S.

    1998-06-01

    This article provides a brief guide to integrated circuits, including their design, fabrication, testing, radiation hardness, and packaging. It was requested by the Panel on Instrumentation, Innovation, and Development of the International Committee for Future Accelerators, as one of a series of articles on instrumentation for future experiments. Their original request emphasized a description of available custom circuits and a set of recommendations for future developments. That has been done, but while traps that stop charge in solid-state devices are well known, those that stop physicists trying to develop the devices are not. Several years spent dodging the former and developing the latter made clear the need for a beginner's guide through the maze, and that is the main purpose of this text

  5. Custom VLSI circuits for high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Parker, S. [Univ. of Hawaii, Honolulu, HI (United States)

    1998-06-01

    This article provides a brief guide to integrated circuits, including their design, fabrication, testing, radiation hardness, and packaging. It was requested by the Panel on Instrumentation, Innovation, and Development of the International Committee for Future Accelerators, as one of a series of articles on instrumentation for future experiments. Their original request emphasized a description of available custom circuits and a set of recommendations for future developments. That has been done, but while traps that stop charge in solid-state devices are well known, those that stop physicists trying to develop the devices are not. Several years spent dodging the former and developing the latter made clear the need for a beginner's guide through the maze, and that is the main purpose of this text.

  6. Large Scale System Safety Integration for Human Rated Space Vehicles

    Science.gov (United States)

    Massie, Michael J.

    2005-12-01

    Since the 1960s man has searched for ways to establish a human presence in space. Unfortunately, the development and operation of human spaceflight vehicles carry significant safety risks that are not always well understood. As a result, the countries with human space programs have felt the pain of loss of lives in the attempt to develop human space travel systems. Integrated System Safety is a process developed through years of experience (since before Apollo and Soyuz) as a way to assess risks involved in space travel and prevent such losses. The intent of Integrated System Safety is to take a look at an entire program and put together all the pieces in such a way that the risks can be identified, understood and dispositioned by program management. This process has many inherent challenges, and they need to be explored, understood and addressed. In order to prepare a truly integrated analysis, safety professionals must gain a level of technical understanding of all of the project's pieces and how they interact. Next, they must find a way to present the analysis so the customer can understand the risks and make decisions about managing them. However, every organization in a large-scale project can have different ideas about what is or is not a hazard, what is or is not an appropriate hazard control, and what is or is not adequate hazard control verification. NASA provides some direction on these topics, but interpretations of those instructions can vary widely. Even more challenging is the fact that every individual/organization involved in a project has different levels of risk tolerance. When the discrete hazard controls of the contracts and agreements cannot be met, additional risk must be accepted. However, when one has left the arena of compliance with the known rules, there can no longer be specific ground rules on which to base a decision as to what is acceptable and what is not. The integrator must find common ground among all parties to achieve

  7. Integrated bioenergy conversion concepts for small scale gasification power systems

    Science.gov (United States)

    Aldas, Rizaldo Elauria

    Thermal and biological gasification are promising technologies for addressing the emerging concerns in biomass-based renewable energy, environmental protection and waste management. However, technical barriers such as feedstock quality limitations, tars, and high NOx emissions from biogas fueled engines impact their full utilization and make them suffer at the small scale from the need to purify the raw gas for most downstream processes, including power generation other than direct boiler use. The two separate gasification technologies may be integrated to better address the issues of power generation and waste management and to complement some of each technology's limitations. This research project investigated the technical feasibility of an integrated thermal and biological gasification concept for parameters critical to appropriately matching an anaerobic digester with a biomass gasifier. Specific studies investigated the thermal gasification characteristics of selected feedstocks in four fixed-bed gasification experiments: (1) updraft gasification of rice hull, (2) indirect-heated gasification of rice hull, (3) updraft gasification of Athel wood, and (4) downdraft gasification of Athel and Eucalyptus woods. The effects of tars and other components of producer gas on anaerobic digestion at a mesophilic temperature of 36°C and the biodegradation potentials and soil carbon mineralization of gasification tars during short-term aerobic incubation at 27.5°C were also examined. Experiments brought out the ranges in performance and quality and quantity of gasification products under different operating conditions and showed that within the conditions considered in the study, these gasification products did not adversely impact the overall digester performance. Short-term aerobic incubation demonstrated variable impacts on carbon mineralization depending on tar and soil conditions. Although tars exhibited low biodegradation indices, degradation may be improved if the

  8. Scaling for integral simulation of thermal-hydraulic phenomena in SBWR during LOCA

    Energy Technology Data Exchange (ETDEWEB)

    Ishii, M.; Revankar, S.T.; Dowlati, R. [Purdue Univ., West Lafayette, IN (United States)] [and others]

    1995-09-01

    A scaling study has been conducted for simulation of thermal-hydraulic phenomena in the Simplified Boiling Water Reactor (SBWR) during a loss of coolant accident. The scaling method consists of a three-level scaling approach. The integral system scaling (global scaling or top down approach) consists of two levels, the integral response function scaling which forms the first level, and the control volume and boundary flow scaling which forms the second level. The bottom up approach is carried out by local phenomena scaling which forms the third level scaling. Based on this scaling study the design of the model facility called Purdue University Multi-Dimensional Integral Test Assembly (PUMA) has been carried out. The PUMA facility has 1/4 height and 1/100 area ratio scaling, corresponding to the volume scaling of 1/400. The PUMA power scaling based on the integral scaling is 1/200. The present scaling method predicts that PUMA time scale will be one-half that of the SBWR. The system pressure for PUMA is full scale, therefore, a prototypic pressure is maintained. PUMA is designed to operate at and below 1.03 MPa (150 psi), which allows it to simulate the prototypic SBWR accident conditions below 1.03 MPa (150 psi). The facility includes models for all components of importance.
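
    The quoted ratios are internally consistent, as a quick check shows (our arithmetic, assuming the time ratio follows the square root of the height ratio, as in natural-circulation scaling, and that power scales as volume divided by time):

```python
# Consistency check of the stated PUMA scaling ratios (our arithmetic,
# not the report's derivation).
height, area = 1 / 4, 1 / 100
volume = height * area        # 1/400, matching the stated volume scaling
time = height ** 0.5          # sqrt(1/4) = 1/2: "PUMA time scale one-half SBWR"
power = volume / time         # stored energy ~ volume, so power ~ volume/time
print(volume, time, power)    # 0.0025 (1/400), 0.5, 0.005 (1/200)
```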

  9. Application of evolutionary algorithms for multi-objective optimization in VLSI and embedded systems

    CERN Document Server

    2015-01-01

    This book describes how evolutionary algorithms (EA), including genetic algorithms (GA) and particle swarm optimization (PSO) can be utilized for solving multi-objective optimization problems in the area of embedded and VLSI system design. Many complex engineering optimization problems can be modelled as multi-objective formulations. This book provides an introduction to multi-objective optimization using meta-heuristic algorithms, GA and PSO, and how they can be applied to problems like hardware/software partitioning in embedded systems, circuit partitioning in VLSI, design of operational amplifiers in analog VLSI, design space exploration in high-level synthesis, delay fault testing in VLSI testing, and scheduling in heterogeneous distributed systems. It is shown how, in each case, the various aspects of the EA, namely its representation, and operators like crossover, mutation, etc. can be separately formulated to solve these problems. This book is intended for design engineers and researchers in the field ...
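
    As a concrete flavor of the meta-heuristics discussed, here is a minimal particle swarm optimizer (a generic sketch, not one of the book's formulations) showing the ingredients that get mapped onto design problems: a position encoding, a fitness function, and the velocity/position update rule.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(1)
    x = rng.uniform(-5, 5, (n_particles, dim))   # candidate designs
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(fitness, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # update rule
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# a stand-in cost: weighted sum of two competing objectives (scalarization)
print(pso(lambda z: (z**2).sum() + abs(z - 1).sum(), dim=4))
```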

  10. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    Science.gov (United States)

    2007-03-31

    Final report, 03/01/04 - 02/28/07: Neuromorphic VLSI-based Bat Echolocation for Micro-aerial Vehicle Guidance. [Only fragments of the abstract survive the report documentation page:] ... uncovered interesting new issues in our choice for representing the intensity of signals. We have just finished testing the first chip version of an echo-timing-based algorithm ('openspace') for sonar-guided navigation amidst multiple obstacles. Subject terms: neuromorphic VLSI, bat echolocation

  11. VLSI and system architecture-the new development of system 5G

    Energy Technology Data Exchange (ETDEWEB)

    Sakamura, K.; Sekino, A.; Kodaka, T.; Uehara, T.; Aiso, H.

    1982-01-01

    A research and development proposal is presented for VLSI CAD systems and for a hardware environment, called system 5G, on which the VLSI CAD systems run. The proposed CAD systems use a hierarchically organized design language to enable design of anything from basic architectures of VLSI to VLSI mask patterns in a uniform manner. The CAD systems will eventually become intelligent CAD systems that acquire design knowledge and perform automatic design of VLSI chips when the characteristic requirements of a VLSI chip are given. System 5G will consist of superinference machines and the 5G communication network. The superinference machine will be built on a functionally distributed architecture connecting inference machines and relational database machines via a high-speed local network. The transfer rate of the local network will be 100 Mbps at the first stage of the project and will be improved to 1 Gbps. Remote access to the superinference machine will be possible through the 5G communication network. Access to system 5G will use the 5G network architecture protocol. Users will access system 5G using standardized 5G personal computers and 5G personal logic programming stations, highly intelligent terminals providing an instruction set that supports predicate logic and input/output facilities for audio and graphical information.

  12. N-Point DCT VLSI Architecture for Emerging HEVC Standard

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmed

    2012-01-01

    Full Text Available This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4×4 up to 32×32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to gain speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into sparse submatrices in order to reduce the multiplications. Finally, multiplications are completely eliminated using the lifting scheme. The proposed architecture sustains real-time processing of a 1080p HD video codec running at 150 MHz.
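
    The even/odd decomposition at the heart of such architectures can be checked numerically. The sketch below (ours, not the paper's folded hardware) splits an N-point DCT-II into an N/2-point transform of input sums plus a smaller dense product on input differences, which is what roughly halves the multiplier count at each level before the lifting scheme removes the rest.

```python
import numpy as np

def dct_matrix(n):
    """Unnormalized DCT-II basis: C[k, i] = cos(pi*(2i+1)*k / (2n))."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.cos(np.pi * (2 * i + 1) * k / (2 * n))

def dct_butterfly(x):
    n = len(x)
    if n == 1:
        return x.copy()
    s = x[: n // 2] + x[::-1][: n // 2]      # input sums  -> even coefficients
    d = x[: n // 2] - x[::-1][: n // 2]      # differences -> odd coefficients
    even = dct_butterfly(s)                  # recurse: N/2-point DCT of sums
    odd = dct_matrix(n)[1::2, : n // 2] @ d  # odd rows act only on differences
    out = np.empty(n)
    out[0::2], out[1::2] = even, odd
    return out

x = np.arange(8.0)
print(np.allclose(dct_butterfly(x), dct_matrix(8) @ x))  # True
```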

  13. PERFORMANCE OF LEAKAGE POWER MINIMIZATION TECHNIQUE FOR CMOS VLSI TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    T. Tharaneeswaran

    2012-06-01

    Full Text Available Leakage power of CMOS VLSI technology is a great concern. To reduce leakage power in CMOS circuits, a Leakage Power Minimization Technique (LPMT) is implemented in this paper. Leakage currents are monitored and compared. The comparator kicks the charge pump to give the body voltage (Vbody). Simulations of these circuits are done using TSMC 0.35 µm technology at various operating temperatures. A current steering Digital-to-Analog Converter (CSDAC) is used as the test core to validate the idea. The test core (e.g., the 8-bit CSDAC) had a power consumption of 347.63 mW. The LPMT circuit alone consumes 6.3405 mW. This technique reduces the leakage power of the 8-bit CSDAC by 5.51 mW and increases the reliability of the test core. Mentor Graphics ELDO and EZ-wave are used for simulations.

  14. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
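
    A toy software analogue of the trigger rule (ours, not NASA's VLSI state machine): watch the stream for slow "DC-like" drift against a running baseline and fast "AC-like" changes via frame differencing, and report the frame at which either crosses a threshold. Thresholds and the baseline decay factor are illustrative assumptions.

```python
import numpy as np

def event_trigger(frames, ac_thresh=20.0, dc_thresh=10.0, alpha=0.99):
    baseline = frames[0].astype(float)        # slowly updated reference image
    prev = frames[0].astype(float)
    for n, f in enumerate(frames[1:], start=1):
        f = f.astype(float)
        ac = np.abs(f - prev).mean()          # short-term (AC-like) change
        dc = np.abs(f - baseline).mean()      # long-term (DC-like) drift
        if ac > ac_thresh or dc > dc_thresh:
            return n                          # trigger: archive frames near n
        baseline = alpha * baseline + (1 - alpha) * f
        prev = f
    return None

rng = np.random.default_rng(3)
frames = rng.integers(0, 5, size=(100, 64, 64))
frames[60:] += 40                             # an object appears at frame 60
print(event_trigger(frames))                  # -> 60
```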

  15. Carbon nanotube based VLSI interconnects analysis and design

    CERN Document Server

    Kaushik, Brajesh Kumar

    2015-01-01

    The brief primarily focuses on the performance analysis of CNT-based interconnects in the current research scenario. Different CNT structures are modeled on the basis of transmission line theory. Performance comparison for different CNT structures illustrates that CNTs are more promising than Cu or other materials used in global VLSI interconnects. The brief is organized into five chapters which mainly discuss: (1) an overview of the current research scenario and basics of interconnects; (2) unique crystal structures and the basics of physical properties of CNTs, and the production, purification and applications of CNTs; (3) a brief technical review, the geometry and equivalent RLC parameters for different single and bundled CNT structures; (4) a comparative analysis of crosstalk and delay for different single and bundled CNT structures; and (5) various unique mixed CNT bundle structures and their equivalent electrical models.

  16. Integrating climate change into governance at the municipal scale

    DEFF Research Database (Denmark)

    Wejs, Anja

    2014-01-01

    traditions and perceptions. This article examines different approaches to CC governance and the institutional dynamics that occur in the integration process within eight Danish municipalities in the initial phase of integrating CC. The results show three different governance approaches related to climate...

  17. The challenge of integrating large scale wind power

    Energy Technology Data Exchange (ETDEWEB)

    Kryszak, B.

    2007-07-01

    The support of renewable energy sources is one of the key issues in current energy policies. The paper presents aspects of the integration of wind power in the electric power system from the perspective of a Transmission System Operator (TSO). Technical, operational and market aspects related to the integration of more than 8000 MW of installed wind power into the Transmission Network of Vattenfall Europe Transmission are discussed, and experiences with the transmission of wind power, wind power prediction, balancing of wind power, power production behaviour and fluctuations are reported. Moreover, issues for wind power integration on a European level will be discussed with the background of a wind power study. (auth)

  18. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Directory of Open Access Journals (Sweden)

    Ying-Lun Chen

    2015-08-01

    Full Text Available A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO, and the feature extraction is carried out by the generalized Hebbian algorithm (GHA. To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.
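
    Both stages are compact enough to model in a few lines. The sketch below (ours, not the ASIC) shows the NEO detector with a common mean-based threshold rule (the paper's exact threshold is not specified here) and Sanger's generalized Hebbian update for the principal-component features.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, win=32, k=8.0):
    """Indices where NEO exceeds k times its mean, one detection per window."""
    psi = neo(x)
    thr = k * psi.mean()
    keep, last = [], -win
    for i in np.flatnonzero(psi > thr):
        if i - last > win:
            keep.append(i)
            last = i
    return keep

def gha_train(spikes, n_components=3, eta=1e-4, epochs=20):
    """Sanger's rule: dW = eta*(y x^T - lower_tri(y y^T) W), with y = W x."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(n_components, spikes.shape[1]))
    for _ in range(epochs):
        for x in spikes:
            y = W @ x
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W   # rows approximate the leading principal components

# per-channel features for sorting are then y = W @ spike_waveform
```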

  19. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO. The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  20. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
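
    The feature rule itself is only a few operations per spike, which is what keeps the hardware cheap. A sketch (ours; whether the areas are absolute or signed sums is our assumption):

```python
import numpy as np

def peak_area_features(spike):
    """Split the waveform at its peak; use the area of each portion."""
    p = int(np.argmax(np.abs(spike)))        # peak-search step
    left = np.abs(spike[: p + 1]).sum()      # area before/at the peak
    right = np.abs(spike[p + 1:]).sum()      # area after the peak
    return np.array([left, right])

spikes = np.random.default_rng(0).normal(size=(5, 64))
features = np.array([peak_area_features(s) for s in spikes])
print(features.shape)   # (5, 2): one low-cost feature pair per spike
```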

  1. A novel VLSI processor for high-rate, high resolution spectroscopy

    CERN Document Server

    Pullia, Antonio; Gatti, E; Longoni, A; Buttler, W

    2000-01-01

    A novel time-variant VLSI shaper amplifier, suitable for multi-anode Silicon Drift Detectors or other multi-element solid-state X-ray detection systems, is proposed. The new read-out scheme has been conceived for demanding applications with synchrotron light sources, such as X-ray holography or EXAFS, where both high count-rates and high-energy resolutions are required. The circuit is of the linear time-variant class, accepts randomly distributed events and features: a finite-width (1-10 μs) quasi-optimal weight function, an ultra-low-level energy discrimination (≈150 eV), and full compatibility for monolithic integration in CMOS technology. Its impulse response has a staircase-like shape, but the weight function (which is in general different from the impulse response in time-variant systems) is quasi-trapezoidal. The operation principles of the new scheme as well as the first experimental results obtained with a prototype of the circuit are presented and discussed in this work.

  2. A Single Chip VLSI Implementation of a QPSK/SQPSK Demodulator for a VSAT Receiver Station

    Science.gov (United States)

    Kwatra, S. C.; King, Brent

    1995-01-01

    This thesis presents a VLSI implementation of a QPSK/SQPSK demodulator. It is designed to be employed in a VSAT earth station that utilizes the FDMA/TDM link. A single chip architecture is used to enable this chip to be easily employed in the VSAT system. This demodulator contains lowpass filters, integrate and dump units, unique word detectors, a timing recovery unit, a phase recovery unit and a down conversion unit. The design stages start with a functional representation of the system by using the C programming language. Then it progresses into a register based representation using the VHDL language. The layout components are designed based on these VHDL models and simulated. Component generators are developed for the adder, multiplier, read-only memory and serial access memory in order to shorten the design time. These sub-components are then block routed to form the main components of the system. The main components are block routed to form the final demodulator.
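
    The integrate-and-dump stage reduces to a correlate-and-slice per symbol. A baseband sketch (ours, not the thesis's fixed-point design), assuming timing and carrier recovery have already aligned the samples:

```python
import numpy as np

def qpsk_integrate_and_dump(i_samples, q_samples, sps):
    """sps: samples per symbol. Returns recovered dibits (I-bit, Q-bit)."""
    n_sym = len(i_samples) // sps
    bits = []
    for k in range(n_sym):
        seg = slice(k * sps, (k + 1) * sps)
        i_sum = i_samples[seg].sum()          # integrate over the symbol ...
        q_sum = q_samples[seg].sum()
        bits.append((int(i_sum > 0), int(q_sum > 0)))  # ... then dump/decide
    return bits

# loopback test: two symbols at 8 samples/symbol with additive noise
rng = np.random.default_rng(2)
tx = [(1, 0), (0, 1)]
i = np.repeat([1 if b else -1 for b, _ in tx], 8) + 0.3 * rng.normal(size=16)
q = np.repeat([1 if b else -1 for _, b in tx], 8) + 0.3 * rng.normal(size=16)
print(qpsk_integrate_and_dump(i, q, sps=8) == tx)  # True with high probability
```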

  3. Digital VLSI design with Verilog a textbook from Silicon Valley Polytechnic Institute

    CERN Document Server

    Williams, John Michael

    2014-01-01

    This book is structured as a step-by-step course of study along the lines of a VLSI integrated circuit design project.  The entire Verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer-deserializer, including synthesizable PLLs.  The author includes everything an engineer needs for in-depth understanding of the Verilog language:  Syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided in the downloadable files that accompany the book.  For readers with access to appropriate electronic design tools, all solutions can be developed, simulated, and synthesized as described in the book.   A partial list of design topics includes design partitioning, hierarchy decomposition, safe coding styles, back annotation, wrapper modules, concurrency, race conditions, assertion-based verification, clock synchronization, and design for test.   A concluding presentation of special topics inclu...

  4. 10 K gate I(2)L and 1 K component analog compatible bipolar VLSI technology - HIT-2

    Science.gov (United States)

    Washio, K.; Watanabe, T.; Okabe, T.; Horie, N.

    1985-02-01

    An advanced analog/digital bipolar VLSI technology that combines on the same chip 2-ns 10 K I(2)L gates with 1 K analog devices is proposed. The new technology, called high-density integration technology-2, is based on a new structure concept that consists of three major techniques: shallow grooved-isolation, I(2)L active layer etching, and I(2)L current gain increase. I(2)L circuits with an 80-MHz maximum toggle frequency have been developed, compatible with n-p-n transistors having a BV(CE0) of more than 10 V and an f(T) of 5 GHz, and lateral p-n-p transistors having an f(T) of 150 MHz.

  5. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI.

    Science.gov (United States)

    Yu, T; Sejnowski, T J; Cauwenberghs, G

    2011-10-01

    We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.
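
    The underlying two-variable Morris-Lecar model is easy to reproduce numerically. A quick forward-Euler integration (ours, with a classic textbook parameter set, not the chip's fitted values) shows tonic spiking at the chosen bias current; sweeping I or the kinetic parameters moves the model between the regimes described above.

```python
import numpy as np

# a commonly used Morris-Lecar parameter set (assumed, not from the paper)
C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0           # uF/cm^2, mS/cm^2
VL, VCa, VK = -60.0, 120.0, -84.0              # mV
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def step(V, w, I, dt=0.05):
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))   # fast calcium activation
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))   # potassium activation
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    dV = (I - gL*(V - VL) - gCa*m_inf*(V - VCa) - gK*w*(V - VK)) / C
    dw = phi * (w_inf - w) / tau_w
    return V + dt * dV, w + dt * dw

V, w, trace = -60.0, 0.0, []
for _ in range(40000):                           # 2 s of model time
    V, w = step(V, w, I=90.0)
    trace.append(V)
print(max(trace), min(trace))                    # spiking: V swings tens of mV
```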

  6. A chip-scale integrated cavity-electro-optomechanics platform

    DEFF Research Database (Denmark)

    Winger, M.; Blasius, T. D.; Mayer Alegre, T. P.

    2011-01-01

    We present an integrated optomechanical and electromechanical nanocavity, in which a common mechanical degree of freedom is coupled to an ultrahigh-Q photonic crystal defect cavity and an electrical circuit. The system allows for wide-range, fast electrical tuning of the optical nanocavity resonances, and for electrical control of optical radiation pressure back-action effects such as mechanical amplification (phonon lasing), cooling, and stiffening. These sorts of integrated devices offer a new means to efficiently interconvert weak microwave and optical signals, and are expected to pave the way for a new class of micro-sensors utilizing optomechanical back-action for thermal noise reduction and low-noise optical read-out.

  7. Sulfur-Iodine Integrated Lab Scale Experiment Development

    Energy Technology Data Exchange (ETDEWEB)

    Russ, Ben

    2011-05-27

    The sulfur-iodine (SI) cycle was determined to be the best cycle for coupling to a high temperature reactor (HTR) because of its high efficiency and potential for further improvement. The Japanese Atomic Energy Agency (JAEA) has also selected the SI process for further development and has successfully completed bench-scale demonstrations of the SI process at atmospheric pressure. JAEA also plans to proceed with pilot-scale demonstrations of the SI process and eventually plans to couple an SI demonstration plant to its High Temperature Test Reactor (HTTR). As part of an international NERI project, GA, SNL, and the French Commissariat à l'Energie Atomique performed laboratory-scale demonstrations of the SI process at prototypical temperatures and pressures. This demonstration was performed at GA in San Diego, CA and concluded in April 2009.

  8. Sulfur-Iodine Integrated Lab Scale Experiment Development

    International Nuclear Information System (INIS)

    Russ, Ben

    2011-01-01

    The sulfur-iodine (SI) cycle was determined to be the best cycle for coupling to a high temperature reactor (HTR) because of its high efficiency and potential for further improvement. The Japanese Atomic Energy Agency (JAEA) has also selected the SI process for further development and has successfully completed bench-scale demonstrations of the SI process at atmospheric pressure. JAEA also plans to proceed with pilot-scale demonstrations of the SI process and eventually plans to couple an SI demonstration plant to its High Temperature Test Reactor (HTTR). As part of an international NERI project, GA, SNL, and the French Commissariat à l'Energie Atomique performed laboratory-scale demonstrations of the SI process at prototypical temperatures and pressures. This demonstration was performed at GA in San Diego, CA and concluded in April 2009.

  9. Large scale grid integration of renewable energy sources

    CERN Document Server

    Moreno-Munoz, Antonio

    2017-01-01

    This book presents comprehensive coverage of the means to integrate renewable power, namely wind and solar power. It looks at new approaches to meet the challenges, such as increasing interconnection capacity among geographical areas, hybridisation of different distributed energy resources and building up demand response capabilities.

  10. Large Scale Integration of Carbon Nanotubes in Microsystems

    DEFF Research Database (Denmark)

    Gjerde, Kjetil

    2007-01-01

    Carbon nanotubes have many properties that could be exploited in combination with traditional microsystems, in particular their superior mechanical and electrical properties. In this work, methods for large-scale integration of carbon nanotubes into microsystems are investigated, with a view to their application as mechan...

  11. Analysis for Large Scale Integration of Electric Vehicles into Power Grids

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Wang, Xiaoru

    2011-01-01

    Electric Vehicles (EVs) provide a significant opportunity for reducing the consumption of fossil energies and the emission of carbon dioxide. With more and more electric vehicles integrated in the power systems, it becomes important to study the effects of EV integration on the power systems, especially the low and middle voltage level networks. In the paper, the basic structure and characteristics of the electric vehicles are introduced. The possible impacts of large scale integration of electric vehicles on the power systems, especially the advantages for the integration of renewable energies, are discussed. Finally, the research projects related to the large scale integration of electric vehicles into the power systems are introduced, providing a reference for the large scale integration of electric vehicles into power grids.

  12. Symplectic integrators for large scale molecular dynamics simulations: A comparison of several explicit methods

    International Nuclear Information System (INIS)

    Gray, S.K.; Noid, D.W.; Sumpter, B.G.

    1994-01-01

    We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long time trajectories of a 1000 unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are deduced. We also discuss certain variations on the basic symplectic integration technique
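
    The flavor of integrator under comparison is exemplified by velocity Verlet, the simplest explicit symplectic method. A generic sketch (ours, on a 1-D harmonic oscillator rather than the polyethylene force field) showing the bounded-energy behavior that motivates symplectic methods for long trajectories:

```python
def velocity_verlet(force, x, v, dt, n_steps, m=1.0):
    a = force(x) / m
    for _ in range(n_steps):
        x = x + dt * v + 0.5 * dt**2 * a     # drift with current acceleration
        a_new = force(x) / m
        v = v + 0.5 * dt * (a + a_new)       # kick with averaged acceleration
        a = a_new
    return x, v

k = 1.0
x, v = velocity_verlet(lambda x: -k * x, x=1.0, v=0.0, dt=0.01, n_steps=100000)
print(0.5 * v**2 + 0.5 * k * x**2)  # energy stays ~0.5: no secular drift
```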

  13. A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Kamleh, Waseem

    2011-01-01

    Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
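
    For reference, the standard nested (Sexton-Weingarten) leapfrog being generalized looks as follows in miniature (our sketch, with unit mass and toy forces): the slow, expensive force is applied with the outer step dt, the fast force is sub-cycled with dt/n_sub, and the restriction the paper relaxes is visible in the requirement that dt be an exact multiple of the inner step.

```python
def nested_leapfrog(x, p, f_slow, f_fast, dt, n_outer, n_sub):
    for _ in range(n_outer):
        p += 0.5 * dt * f_slow(x)        # half kick from the slow force
        h = dt / n_sub
        for _ in range(n_sub):           # inner leapfrog on the fast force
            p += 0.5 * h * f_fast(x)
            x += h * p                   # drift (unit mass for brevity)
            p += 0.5 * h * f_fast(x)
        p += 0.5 * dt * f_slow(x)        # closing half kick
    return x, p

# e.g. stiff + soft springs: the fast force is sub-cycled 10x per outer step
x, p = nested_leapfrog(1.0, 0.0, lambda x: -0.1 * x, lambda x: -100.0 * x,
                       dt=0.05, n_outer=1000, n_sub=10)
print(x, p)
```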

  14. Systematic approximation of multi-scale Feynman integrals arXiv

    CERN Document Server

    Borowka, Sophia; Hulme, Daniel

    An algorithm for the systematic analytical approximation of multi-scale Feynman integrals is presented. The algorithm produces algebraic expressions as functions of the kinematical parameters and mass scales appearing in the Feynman integrals, allowing for fast numerical evaluation. The results are valid in all kinematical regions, both above and below thresholds, up to, in principle, arbitrary orders in the dimensional regulator. The scope of the algorithm is demonstrated by presenting results for selected two-loop three-point and four-point integrals with an internal mass scale that appear in the two-loop amplitudes for Higgs+jet production.

  15. A chip-scale integrated cavity-electro-optomechanics platform.

    Science.gov (United States)

    Winger, M; Blasius, T D; Mayer Alegre, T P; Safavi-Naeini, A H; Meenehan, S; Cohen, J; Stobbe, S; Painter, O

    2011-12-05

    We present an integrated optomechanical and electromechanical nanocavity, in which a common mechanical degree of freedom is coupled to an ultrahigh-Q photonic crystal defect cavity and an electrical circuit. The system allows for wide-range, fast electrical tuning of the optical nanocavity resonances, and for electrical control of optical radiation pressure back-action effects such as mechanical amplification (phonon lasing), cooling, and stiffening. These sorts of integrated devices offer a new means to efficiently interconvert weak microwave and optical signals, and are expected to pave the way for a new class of micro-sensors utilizing optomechanical back-action for thermal noise reduction and low-noise optical read-out.

  16. VCSEL Scaling, Laser Integration on Silicon, and Bit Energy

    Science.gov (United States)

    2017-03-01

    [Only fragments of this report abstract survive extraction:] ... especially the laser. Highly compact directly modulated lasers (DMLs) have been researched to meet this goal. The most favored technology will likely be decided by the question of which achieves lower bit energy: a DML, or a continuous-wave (CW) laser coupled to an integrated modulator. Transceiver suppliers are also pursuing development that can utilize high-efficiency DMLs that reach very high modulation speed. Oxide VCSELs [1] do not yet take full advantage of the ...

  17. Electricity Prices, Large-Scale Renewable Integration, and Policy Implications

    OpenAIRE

    Kyritsis, Evangelos; Andersson, Jonas; Serletis, Apostolos

    2016-01-01

    This paper investigates the effects of intermittent solar and wind power generation on electricity price formation in Germany. We use daily data from 2010 to 2015, a period with profound modifications in the German electricity market, the most notable being the rapid integration of photovoltaic and wind power sources, as well as the phasing out of nuclear energy. In the context of a GARCH-in-Mean model, we show that both solar and wind power Granger cause electricity prices, that solar power ...

  18. Expected Future Conditions for Secure Power Operation with Large Scale of RES Integration

    International Nuclear Information System (INIS)

    Majstrovic, G.; Majstrovic, M.; Sutlovic, E.

    2015-01-01

    EU energy strategy is strongly focused on the large scale integration of renewable energy sources. The most dominant part here is taken by variable sources - wind power plants. Grid integration of intermittent sources along with keeping the system stable and secure is one of the biggest challenges for the TSOs. This part is often neglected by the energy policy makers, so this paper deals with expected future conditions for secure power system operation with large scale wind integration. It gives an overview of expected wind integration development in EU, as well as expected P/f regulation and control needs. The paper is concluded with several recommendations. (author).

  19. Kwong-Wong-type integral equation on time scales

    Directory of Open Access Journals (Sweden)

    Baoguo Jia

    2011-09-01

    Full Text Available Consider the second-order nonlinear dynamic equation $$[r(t)x^\Delta(\rho(t))]^\Delta + p(t)f(x(t)) = 0,$$ where $\rho(t)$ is the backward jump operator. We obtain a Kwong-Wong-type integral equation, that is: if $x(t)$ is a nonoscillatory solution of the above equation on $[T_0,\infty)$, then the integral equation $$\frac{r^\sigma(t)x^\Delta(t)}{f(x^\sigma(t))} = P^\sigma(t) + \int^\infty_{\sigma(t)} \frac{r^\sigma(s)\left[\int^1_0 f'(x_h(s))\,dh\right][x^\Delta(s)]^2}{f(x(s))\,f(x^\sigma(s))}\,\Delta s$$ is satisfied for $t \geq T_0$, where $P^\sigma(t) = \int^\infty_{\sigma(t)} p(s)\,\Delta s$ and $x_h(s) = x(s) + h\mu(s)x^\Delta(s)$. As an application, we show that the superlinear dynamic equation $$[r(t)x^\Delta(\rho(t))]^\Delta + p(t)f(x(t)) = 0$$ is oscillatory, under certain conditions.

  20. Cost optimization of biofuel production – The impact of scale, integration, transport and supply chain configurations

    NARCIS (Netherlands)

    de Jong, S.A.; Hoefnagels, E.T.A.; Wetterlund, Elisabeth; Pettersson, Karin; Faaij, André; Junginger, H.M.

    2017-01-01

    This study uses a geographically-explicit cost optimization model to analyze the impact of and interrelation between four cost reduction strategies for biofuel production: economies of scale, intermodal transport, integration with existing industries, and distributed supply chain configurations

  1. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

    necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation was proposed in this paper. The first phase is the investment decision, while the second phase is the production optimization decision. A multi-objective PSO (MOPSO) algorithm was introduced to solve this optimization problem, which can accelerate the convergence and guarantee the diversity of the Pareto-optimal front set as well. The feasibility and effectiveness of the proposed bi-level planning approach and the MOPSO

  2. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  3. CAPCAL, 3-D Capacitance Calculator for VLSI Purposes

    International Nuclear Information System (INIS)

    Seidl, Albert; Klose, Helmut; Svoboda, Mildos

    2004-01-01

    1 - Description of program or function: CAPCAL is devoted to the calculation of capacitances of three-dimensional wiring configurations as typically used in VLSI circuits. Owing to analogies in the mathematical description, conductance and heat-transport problems can also be treated by CAPCAL. To handle a problem using CAPCAL, some approximations have to be applied to the structure under investigation: - the overall geometry has to be confined to a finite domain by using symmetry properties of the problem - non-rectangular structures have to be simplified into an artwork of multiple boxes. 2 - Method of solution: The electric field is described by the Laplace equation. The differential equation is discretized using the finite difference method. NEA-1327/01: The linear equation system is solved using a combined ADI-multigrid method. NEA-1327/04: The linear equation system is solved using a conjugate gradient method for CAPCAL V1.3. NEA-1327/05: The linear equation system is solved using a conjugate gradient method for CAPCAL V1.3. 3 - Restrictions on the complexity of the problem: NEA-1327/01: Certain restrictions of use may arise from the dimensioning of arrays. Field lengths are defined via PARAMETER statements which can easily be modified. If the geometry of the problem is defined such that Neumann boundaries dominate, the convergence of the iterative equation system solver is affected
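
    The discretization idea is shown in miniature below (our sketch; CAPCAL itself uses the ADI-multigrid or conjugate-gradient solvers noted above, not Jacobi iteration): solve the Laplace equation on a grid containing two plate electrodes, then obtain the capacitance from the flux leaving the driven plate. Grid size, plate placement, and iteration count are illustrative.

```python
import numpy as np

EPS0, N = 8.854e-12, 81
phi = np.zeros((N, N))
fixed = np.zeros((N, N), dtype=bool)
phi[35, 30:51] = 1.0                          # driven plate held at 1 V
fixed[35, 30:51] = fixed[45, 30:51] = True    # second plate held at 0 V

for _ in range(20000):                        # Jacobi sweeps (simple, not fast)
    new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    new[fixed] = phi[fixed]                   # keep electrode potentials
    new[0, :] = new[-1, :] = new[:, 0] = new[:, -1] = 0.0  # grounded box
    phi = new

# Gauss's law around the driven plate; the grid spacing cancels in 2-D,
# giving capacitance per unit depth (plate-end fringes ignored here).
js = slice(30, 51)
q = EPS0 * ((phi[35, js] - phi[34, js]).sum() +
            (phi[35, js] - phi[36, js]).sum())
print("C per unit depth ~", q, "F/m")         # Q / (1 V)
```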

  4. Design of a Low-Power VLSI Macrocell for Nonlinear Adaptive Video Noise Reduction

    Directory of Open Access Journals (Sweden)

    Sergio Saponara

    2004-09-01

    Full Text Available A VLSI macrocell for edge-preserving video noise reduction is proposed in this paper. It is based on a nonlinear rational filter enhanced by a noise estimator for blind and dynamic adaptation of the filtering parameters to the input signal statistics. The VLSI filter features a modular architecture allowing the extension of both mask size and filtering directions. Both spatial and spatiotemporal algorithms are supported. Simulation results with monochrome test videos prove its efficiency for many noise distributions, with PSNR improvements up to 3.8 dB with respect to a nonadaptive solution. The VLSI macrocell has been realized in a 0.18 μm CMOS technology using a standard-cells library; it allows for real-time processing of the main video formats, up to 30 fps (frames per second) at 4CIF, with a power consumption on the order of a few mW.

  5. Power System Operation with Large Scale Wind Power Integration

    DEFF Research Database (Denmark)

    Suwannarat, A.; Bak-Jensen, B.; Chen, Z.

    2007-01-01

    The Danish power system is starting to face the problems of integrating thousands of megawatts of wind power, whose production is stochastic due to natural wind fluctuations. With wind power capacities increasing, the Danish Transmission System Operator (TSO) is faced with new challenges related to the uncertain nature of wind power. In this paper, proposed models of the generation and control system are presented, which analyze the deviation of power exchange at the western Danish-German border, taking into account the fluctuating nature of wind power. The performance of the secondary control of the thermal power plants and the spinning reserve control from the Combined Heat and Power (CHP) units in achieving active power balance with the increased wind power penetration is presented.

  6. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    Science.gov (United States)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

    This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal of this investigation is to increase the scalability and optimize the cost/performance footprint of some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of the CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.
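
    A minimal PySpark sketch of the offloading pattern follows, assuming hypothetical host, table, and column names; the actual CERN pipelines for the controls and logging data are not reproduced here.

```python
# Hedged sketch: copy an Oracle table into Hadoop once (or on a
# schedule), then run heavy reports against the offloaded copy with
# Spark instead of the production database.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle-offload-report").getOrCreate()

# Offload step: read via JDBC, write as Parquet on HDFS.
controls = (spark.read.format("jdbc")
            .option("url", "jdbc:oracle:thin:@//dbhost:1521/ACCLOG")  # hypothetical
            .option("dbtable", "CONTROLS_LOG")                        # hypothetical
            .option("user", "reporter").option("password", "secret")
            .load())
controls.write.mode("overwrite").parquet("hdfs:///offload/controls_log")

# Reports now hit the Hadoop copy, not the critical production DB.
offloaded = spark.read.parquet("hdfs:///offload/controls_log")
offloaded.groupBy("device_id").count().show()  # "device_id" is illustrative
```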

  7. Pilot scale simulation of cokemaking in integrated steelworks

    Energy Technology Data Exchange (ETDEWEB)

    Mahoney, M.; Andriopoulos, N.; Keating, J.; Loo, C.E.; McGuire, S. [Newcastle Technology Centre, Wallsend (Australia)

    2005-12-01

    Pilot scale coke ovens are widely used to produce coke samples for characterisation and also to assess the coking behaviour of coal blends. The Newcastle Technology Centre of BHP Billiton has built a sophisticated 400 kg oven, which can produce cokes under a range of carefully controlled bulk densities and heating rates. A freely movable heating wall allows the thrust generated at this wall at the different stages of coking to be determined. This paper describes comparative work carried out to determine a laboratory stabilisation technique for laboratory cokes. The strength of stabilised cokes is characterised using a number of tumble tests, and correlations between different drum sizes are also given, since a major constraint in laboratory testing is the limited mass of sample available. Typical oven wall pressure results, and results obtained from temperature and pressure probes embedded in the charge during coking, are also presented.

  8. Effect of CMOS Technology Scaling on Fully-Integrated Power Supply Efficiency

    OpenAIRE

    Pillonnet , Gaël; Jeanniot , Nicolas

    2016-01-01

    Integrating a power supply in the same die as the powered circuits is an appropriate solution for granular, fine and fast power management. To allow same-die co-integration, fully integrated DC-DC converters designed in the latest CMOS technologies have been studied extensively by academia and industry in the last decade. However, there is little study concerning the effects of CMOS scaling on these particular circuits. To show the trends, this paper compares th...

  9. Wafer-scale integration of piezoelectric actuation capabilities in nanoelectromechanical systems resonators

    OpenAIRE

    DEZEST, Denis; MATHIEU, Fabrice; MAZENQ, Laurent; SOYER, Caroline; COSTECALDE, Jean; REMIENS, Denis; THOMAS, Olivier; DEÜ, Jean-François; NICU, Liviu

    2013-01-01

    In this work, we demonstrate the integration of piezoelectric actuation means on arrays of nanocantilevers at the wafer scale. We use lead zirconate titanate (PZT) as the piezoelectric material, mainly because of its excellent actuation properties even when geometrically constrained at extreme scale.

  10. The GLUEchip: A custom VLSI chip for detectors readout and associative memories circuits

    International Nuclear Information System (INIS)

    Amendolia, S.R.; Galeotti, S.; Morsani, F.; Passuello, D.; Ristori, L.; Turini, N.

    1993-01-01

    An associative memory full-custom VLSI chip for pattern recognition, the AMchip, has been designed and tested in past years; it contains 128 patterns of 60 bits each. To expand the pattern capacity of an Associative Memory bank, the custom VLSI GLUEchip has been developed. The GLUEchip allows the interconnection of up to 16 AMchips or up to 16 GLUEchips: the resulting tree-like structure works like a single AMchip with an output pipelined structure and a pattern capacity increased by a factor of 16 for each GLUEchip used.

  11. Digital VLSI design with Verilog a textbook from Silicon Valley Technical Institute

    CERN Document Server

    Williams, John

    2008-01-01

    This unique textbook is structured as a step-by-step course of study along the lines of a VLSI IC design project. In a nominal schedule of 12 weeks, two days and about 10 hours per week, the entire Verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000-transistor, full-duplex serializer-deserializer, including synthesizable PLLs. Digital VLSI Design With Verilog is all an engineer needs for in-depth understanding of the Verilog language: syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided on the...

  12. Report of the Workshop on Petascale Systems Integration for Large Scale Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large scale system integration that are not being addressed in other forums, such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate and stabilize a large scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large scale system integration a full-fledged partner, along with the other major thrusts supported by funding agencies, in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large scale system integration.

  13. Integrating remotely sensed surface water extent into continental scale hydrology.

    Science.gov (United States)

    Revilla-Romero, Beatriz; Wanders, Niko; Burek, Peter; Salamon, Peter; de Roo, Ad

    2016-12-01

    In hydrological forecasting, data assimilation techniques are employed to improve estimates of initial conditions by updating incorrect model states with observational data. However, the limited availability of continuous and up-to-date ground streamflow data is one of the main constraints for large-scale flood forecasting models. This is the first study that assesses the impact of assimilating daily remotely sensed surface water extent at a 0.1° × 0.1° spatial resolution, derived from the Global Flood Detection System (GFDS), into a global rainfall-runoff model including large ungauged areas at the continental spatial scale in Africa and South America. Surface water extent is observed using a range of passive microwave remote sensors. The methodology uses the brightness temperature, as water bodies have a lower emissivity. In a time series, the satellite signal is expected to vary with changes in water surface, and anomalies can be correlated with flood events. The Ensemble Kalman Filter (EnKF), a Monte-Carlo implementation of data assimilation, is used here by applying random sampling perturbations to the precipitation inputs to account for uncertainty, obtaining ensemble streamflow simulations from the LISFLOOD model. Results of the updated streamflow simulation are compared to baseline simulations without assimilation of the satellite-derived surface water extent. Validation is done at over 100 in situ river gauges using daily streamflow observations on the African and South American continents over a one-year period. Some of the more commonly used metrics in hydrology were calculated: KGE', NSE, PBIAS%, R², RMSE, and VE. Results show that, for example, the NSE score improved at 61 out of 101 stations, with significant improvements in both the timing and volume of the flow peaks, whereas validation at gauges located in lowland jungle showed the poorest performance, mainly due to the influence of closed forest on the satellite signal retrieval. The conclusion is that...
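
    A minimal Python sketch of the EnKF update described above follows, assuming stand-in model dynamics and a stand-in observation operator (LISFLOOD and the GFDS-derived observations are not reproduced; all numbers are illustrative).

```python
# Hedged sketch of a generic ensemble Kalman filter step: forecast an
# ensemble with perturbed precipitation forcing, then update it with a
# perturbed observation of one model cell.
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_state = 32, 10                     # ensemble size, model states

def model(x, forcing):
    """Stand-in rainfall-runoff step driven by perturbed precipitation."""
    return 0.95 * x + forcing

X = rng.normal(1.0, 0.1, (n_ens, n_state))  # initial ensemble states
H = np.zeros(n_state); H[0] = 1.0           # observe a single cell
y_obs, r_obs = 0.8, 0.05**2                 # water-extent proxy, obs error

# Forecast: each member gets randomly perturbed precipitation inputs.
X = model(X, rng.normal(0.2, 0.05, (n_ens, n_state)))

# Analysis: classic EnKF update with perturbed observations.
Hx = X @ H
P_hh = np.var(Hx, ddof=1) + r_obs
P_xh = ((X - X.mean(0)).T @ (Hx - Hx.mean())) / (n_ens - 1)
K = P_xh / P_hh                             # Kalman gain (one observation)
X += np.outer(y_obs + rng.normal(0, np.sqrt(r_obs), n_ens) - Hx, K)
```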

  14. An integrated system for large scale scanning of nuclear emulsions

    Energy Technology Data Exchange (ETDEWEB)

    Bozza, Cristiano, E-mail: kryss@sa.infn.it [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); D’Ambrosio, Nicola [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); De Lellis, Giovanni [University of Napoli and INFN, Complesso Universitario di Monte Sant' Angelo, via Cintia Ed. G, Napoli 80126 (Italy); De Serio, Marilisa [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Di Capua, Francesco [INFN Napoli, Complesso Universitario di Monte Sant' Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Crescenzo, Antonia [University of Napoli and INFN, Complesso Universitario di Monte Sant' Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Ferdinando, Donato [INFN Bologna, viale B. Pichat 6/2, Bologna 40127 (Italy); Di Marco, Natalia [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); Esposito, Luigi Salvatore [Laboratori Nazionali del Gran Sasso, now at CERN, Geneva (Switzerland); Fini, Rosa Anna [INFN Bari, via E. Orabona 4, Bari 70125 (Italy); Giacomelli, Giorgio [University of Bologna and INFN, viale B. Pichat 6/2, Bologna 40127 (Italy); Grella, Giuseppe [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); Ieva, Michela [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Kose, Umut [INFN Padova, via Marzolo 8, Padova (PD) 35131 (Italy); Longhin, Andrea; Mauri, Nicoletta [INFN Laboratori Nazionali di Frascati, via E. Fermi 40, Frascati (RM) 00044 (Italy); Medinaceli, Eduardo [University of Padova and INFN, via Marzolo 8, Padova (PD) 35131 (Italy); Monacelli, Piero [University of L' Aquila and INFN, via Vetoio Loc. Coppito, L' Aquila (AQ) 67100 (Italy); Muciaccia, Maria Teresa; Pastore, Alessandra [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); and others

    2013-03-01

    The European Scanning System, developed to analyse nuclear emulsions at high speed, has been completed with the development of a high-level software infrastructure to automate and support large-scale emulsion scanning. In one year, an average installation is capable of performing data-taking and online analysis on a total surface ranging from a few m² to tens of m², acquiring many billions of tracks, corresponding to several TB. This paper focuses on the procedures that have been implemented and on their impact on physics measurements. The system proved robust, reliable, fault-tolerant and user-friendly, and seldom needs assistance. A dedicated relational database system is the backbone of the whole infrastructure, storing the data themselves and not only catalogues of data files, as is common practice; this is a unique case among high-energy physics DAQ systems. The logical organisation of the system is described, and a summary is given of the physics measurements that are readily available by automated processing.

  15. An integrated system for large scale scanning of nuclear emulsions

    International Nuclear Information System (INIS)

    Bozza, Cristiano; D’Ambrosio, Nicola; De Lellis, Giovanni; De Serio, Marilisa; Di Capua, Francesco; Di Crescenzo, Antonia; Di Ferdinando, Donato; Di Marco, Natalia; Esposito, Luigi Salvatore; Fini, Rosa Anna; Giacomelli, Giorgio; Grella, Giuseppe; Ieva, Michela; Kose, Umut; Longhin, Andrea; Mauri, Nicoletta; Medinaceli, Eduardo; Monacelli, Piero; Muciaccia, Maria Teresa; Pastore, Alessandra

    2013-01-01

    The European Scanning System, developed to analyse nuclear emulsions at high speed, has been completed with the development of a high-level software infrastructure to automate and support large-scale emulsion scanning. In one year, an average installation is capable of performing data-taking and online analysis on a total surface ranging from a few m² to tens of m², acquiring many billions of tracks, corresponding to several TB. This paper focuses on the procedures that have been implemented and on their impact on physics measurements. The system proved robust, reliable, fault-tolerant and user-friendly, and seldom needs assistance. A dedicated relational database system is the backbone of the whole infrastructure, storing the data themselves and not only catalogues of data files, as is common practice; this is a unique case among high-energy physics DAQ systems. The logical organisation of the system is described, and a summary is given of the physics measurements that are readily available by automated processing.

  16. OffshoreDC DC grids for integration of large scale wind power

    DEFF Research Database (Denmark)

    Zeni, Lorenzo; Endegnanew, Atsede Gualu; Stamatiou, Georgios

    The present report summarizes the main findings of the Nordic Energy Research project “DC grids for large scale integration of offshore wind power – OffshoreDC”. The project was funded by Nordic Energy Research through the TFI programme and was active between 2011 and 2016. The overall objective of the project was to drive the development of the VSC-based HVDC technology for future large scale offshore grids, supporting a standardised and commercial development of the technology, and improving the opportunities for the technology to support power system integration of large scale offshore wind power.

  17. Towards Flexible Self-powered Micro-scale Integrated Systems

    KAUST Repository

    Rojas, Jhonathan Prieto

    2014-04-01

    Today’s information-centered world drives the ever-increasing consumer demand for more powerful, multifunctional portable devices. Additionally, recent developments in long-lasting energy sources and compliant, flexible systems have introduced new required features to the portable devices industry. For example, wireless sensor networks are in urgent need of self-sustainable, easy-to-deploy, mobile platforms, wirelessly interconnected and accessible through a cloud computing system. The objective of my doctoral work is to develop integration strategies to effectively fabricate mechanically flexible, energy-independent systems, which could empower sensor networks for a great variety of new exciting applications. The first module, flexible electronics, can be achieved through several techniques and materials. Our main focus is to bring mechanical flexibility to state-of-the-art high-performance silicon-based electronics, with billions of ultra-low-power, nano-sized transistors. Therefore, we have developed a low-cost batch fabrication process to transform a standard, rigid, mono-crystalline silicon (100) wafer with devices into a thin (5-20 μm), mechanically flexible, optically semi-transparent silicon fabric. Recycling of the remaining wafer is possible, enabling generation of multiple fabrics to ensure low cost and optimal utilization of the whole substrate. We have shown mono-, amorphous- and poly-crystalline silicon and silicon dioxide fabrics, featuring the industry’s most advanced high-κ/metal-gate based capacitors and transistors. The second module consists of the development of efficient energy scavenging systems. First, we have identified an innovative and relatively young technology which can address two of the main concerns of humankind at the same time: water and energy. Microbial fuel cells (MFC) are capable of producing energy out of the metabolism of bacteria while treating wastewater. We have developed two micro-liter MFC designs, one with carbon...

  18. ADVANCED INTEGRATION OF MULTI-SCALE MECHANICS AND WELDING PROCESS SIMULATION IN WELD INTEGRITY ASSESSMENT

    Energy Technology Data Exchange (ETDEWEB)

    Wilkowski, Gery M.; Rudland, David L.; Shim, Do-Jun; Brust, Frederick W.; Babu, Sundarsanam

    2008-06-30

    The potential to save trillions of BTUs in energy usage and billions of dollars in cost on an annual basis, based on the use of higher strength steel in major oil and gas transmission pipeline construction, is a compelling opportunity recognized by the US Department of Energy (DOE). The use of high-strength steels (X100) is expected to result in energy savings across the spectrum, from manufacturing the pipe to transportation and fabrication, including welding of line pipe. Elementary examples of energy savings include more than 25 trillion BTU saved annually based on lower energy costs to produce the thinner-walled high-strength steel pipe, with the potential for the US part of the Alaskan pipeline alone saving more than 7 trillion BTU in production and much more in transportation and assembling. Annual production, maintenance and installation of just US domestic transmission pipeline is likely to save 5 to 10 times this amount, based on current planned and anticipated expansions of oil and gas lines in North America. Among the most important conclusions from these studies were: • While computational weld models to predict residual stress and distortions are well-established and accurate, related microstructure models need improvement. • The Fracture Initiation Transition Temperature (FITT) Master Curve properly predicts surface-cracked pipe brittle-to-ductile initiation temperature. It has value in developing codes and standards to better correlate full-scale behavior from either CTOD or Charpy test results, with the proper temperature shifts from the FITT master curve method. • For stress-based flaw evaluation criteria, the new circumferentially cracked pipe limit-load solution in the 2007 API 1104 Appendix A approach is overly conservative by a factor of 4/π, which has additional implications. • For strain-based design of girth weld defects, the hoop stress effect is the most significant parameter impacting CTOD-driving force and can increase the crack...

  19. The gate oxide integrity of CVD tungsten polycide

    International Nuclear Information System (INIS)

    Wu, N.W.; Su, W.D.; Chang, S.W.; Tseng, M.F.

    1988-01-01

    CVD tungsten polycide has been demonstrated to be a good gate material in recent very large scale integration (VLSI) technology. CVD tungsten silicide offers the advantages of low resistivity, high temperature stability and good step coverage. On the other hand, the polysilicon underlayer preserves most characteristics of the polysilicon gate and acts as a stress buffer layer, absorbing part of the thermal stress originating from the large thermal expansion coefficient of tungsten silicide. Nevertheless, the gate oxide of CVD tungsten polycide is less stable and reliable than that of a polysilicon gate. In this paper, the gate oxide integrity of CVD tungsten polycide with various thickness combinations and different thermal processes has been analyzed by several electrical measurements, including breakdown yield, breakdown fluence, room temperature TDDB, I-V characteristics, electron traps and interface state density.

  20. Stepwise integral scaling method and its application to severe accident phenomena

    International Nuclear Information System (INIS)

    Ishii, M.; Zhang, G.

    1993-10-01

    Severe accidents in light water reactors are characterized by the occurrence of multiphase flow with complicated phase changes, chemical reactions and various bifurcation phenomena. Because of the inherent difficulties associated with full-scale testing, scaled-down and simulation experiments are an essential part of severe accident analyses. However, one of the most significant shortcomings in the area is the lack of a well-established and reliable scaling method and scaling criteria. In view of this, the stepwise integral scaling method has been developed for severe accident analyses. This new scaling method is quite different from the conventional approach. However, its focus on dominant transport mechanisms and its use of the integral response of the system make this method relatively simple to apply to very complicated multiphase flow problems. In order to demonstrate its applicability and usefulness, three case studies have been made. The phenomena considered are (1) corium dispersion in DCH, (2) corium spreading in the BWR MARK-I containment, and (3) the in-core boil-off and heating process. The results of these studies clearly indicate the effectiveness of the stepwise integral scaling method. Such a simple and systematic scaling method has not previously been available for severe accident analyses.

  1. High-energy heavy ion testing of VLSI devices for single event ...

    Indian Academy of Sciences (India)

    Unknown

    The paper describes the high-energy heavy ion radiation testing of VLSI devices for single event upset (SEU) ... The experimental set-up employed to produce a low flux of heavy ions viz. silicon ... through which they pass, leaving behind a wake of elec- ... for use in the Bus Management Unit (BMU) and bulk CMOS ... was scheduled.

  2. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam; Ghoneim, Mohamed T.; El Boghdady, Nawal; Halawa, Sarah; Iskander, Sophinese M.; Anis, Mohab H.

    2011-01-01

    ...-designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result...

  3. Implementation of a VLSI Level Zero Processing system utilizing the functional component approach

    Science.gov (United States)

    Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.

    1991-01-01

    A high-rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises the capability of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.

  4. An area-efficient path memory structure for VLSI Implementation of high speed Viterbi decoders

    DEFF Research Database (Denmark)

    Paaske, Erik; Pedersen, Steen; Sparsø, Jens

    1991-01-01

    Path storage and selection methods for Viterbi decoders are investigated, with special emphasis on VLSI implementations. Two well-known algorithms, the register exchange algorithm (REA) and the trace-back algorithm (TBA), are considered. The REA requires the smallest number of storage elements...
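
    For orientation, a minimal Python sketch of the trace-back idea behind the TBA follows; the state-numbering convention and decision-memory layout are illustrative assumptions, and the paper's area-efficient VLSI path-memory structure is not reproduced.

```python
# Hedged sketch: store one survivor decision bit per state per step,
# then walk backwards through the decision memory to recover the bits.
def traceback(decisions, final_state, constraint_len=3):
    """decisions[t][s] = MSB of the survivor predecessor of state s at
    time t (convention: next_state = ((prev << 1) | bit) & mask)."""
    state, bits = final_state, []
    for t in range(len(decisions) - 1, -1, -1):
        bits.append(state & 1)                 # input bit that produced state
        msb = decisions[t][state]
        state = (state >> 1) | (msb << (constraint_len - 2))
    return bits[::-1]

# Toy 4-state example with a made-up decision memory.
decisions = [[0, 1, 0, 1], [1, 0, 0, 1], [0, 0, 1, 1]]
print(traceback(decisions, final_state=0))
```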

  5. VLSI top-down design based on the separation of hierarchies

    NARCIS (Netherlands)

    Spaanenburg, L.; Broekema, A.; Leenstra, J.; Huys, C.

    1986-01-01

    Despite the presence of structure, interactions between the three views on VLSI design still lead to lengthy iterations. By separating the hierarchies for the respective views, the interactions are reduced. This separated hierarchy allows top-down design with functional abstractions, as exemplified...

  6. Integrated laboratory scale demonstration experiment of the hybrid sulphur cycle and preliminary scale-up

    International Nuclear Information System (INIS)

    Leybros, J.; Rivalier, P.; Saturnin, A.; Charton, S.

    2010-01-01

    The hybrid sulphur cycle is today one of the most promising processes for producing hydrogen on a massive scale within the scope of high temperature nuclear reactor development. Thus, the Fuel Cycle Technology Department at CEA Marcoule is involved in studying the hybrid sulphur process from a technical and economic performance standpoint. Based on mass and energy balance calculations, a ProsimPlus™ flowsheet and a commercial plant design were prepared. This work includes a study on the sizing of the main equipment. The capital cost has been estimated using the major characteristics of the main equipment, based upon formulae and charts published in the literature. A specific approach has been developed for the electrolysers. Operational costs are also proposed for a plant producing 1000 mol/s of H₂. Bench-scale and pilot experiments must focus on the electrochemical step due to the limited experimental data. Thus, a pilot plant with a hydrogen capacity of 100 NL/h was built with the aim of acquiring technical and technological data for the electrolysis. This pilot plant was designed to cover a wide range of operating conditions: sulphuric acid concentrations up to 60 wt.%, temperatures up to 100 °C and pressures up to 10 bar. New materials and structures recently developed for fuel cells, which are expected to yield significant performance improvements when applied to classical electrochemical processes, will be tested. All experiments will be coupled with phenomenological simulation tools developed jointly with the experimental programme. (authors)

  7. Fractionally Integrated Flux model and Scaling Laws in Weather and Climate

    Science.gov (United States)

    Schertzer, Daniel; Lovejoy, Shaun

    2013-04-01

    The Fractionally Integrated Flux (FIF) model has been extensively used to model intermittent observables, like the velocity field, by defining them with the help of a fractional integration of a conservative (i.e. strictly scale-invariant) flux, such as the turbulent energy flux. It indeed corresponds to a well-defined modelling approach that yields the observed scaling laws. Generalised Scale Invariance (GSI) enables FIF to deal with anisotropic fractional integrations and has been rather successful in defining and modelling a unique regime of scaling anisotropic turbulence up to planetary scales. This turbulence has an effective dimension of 23/9 = 2.55... instead of the classical hypothesised 2D and 3D turbulent regimes for large and small spatial scales, respectively. It therefore theoretically eliminates an implausible "dimension transition" between these two regimes and the resulting requirement of a turbulent energy "mesoscale gap", whose empirical evidence has been brought more and more into question. More recently, GSI-FIF was used to analyse climate, therefore at much larger time scales. Indeed, the 23/9-dimensional regime necessarily breaks up at the outer spatial scales. The corresponding transition range, which can be called "macroweather", seems to have many interesting properties; e.g. it rather corresponds to a fractional differentiation in time with a roughly flat frequency spectrum. Furthermore, this transition yields the possibility of scaling space-time climate fluctuations at much larger time scales, with a much stronger scaling anisotropy between time and space. Lovejoy, S. and D. Schertzer (2013). The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge University Press (in press). Schertzer, D. et al. (1997). Fractals 5(3): 427-471. Schertzer, D. and S. Lovejoy (2011). International Journal of Bifurcation and Chaos 21(12): 3417-3456.
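
    As a minimal sketch of the scaling relations the FIF framework rests on (standard multifractal notation; the authors' anisotropic GSI generalisation is not reproduced here), an observable increment is obtained by a fractional integration of order H of a conserved multifractal flux, and its statistical moments scale with the lag:

```latex
\begin{align}
  \Delta u(\Delta x) &\approx \varepsilon_{\Delta x}\, \Delta x^{H}, \\
  \langle \Delta u(\Delta x)^{q} \rangle &\propto \Delta x^{\zeta(q)},
  \qquad \zeta(q) = qH - K(q),
\end{align}
```

    where K(q) is the moment scaling function of the flux, with K(1) = 0 expressing its conservation.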

  8. Integration of expression data in genome-scale metabolic network reconstructions

    Directory of Open Access Journals (Sweden)

    Anna S. Blazier

    2012-08-01

    With the advent of high-throughput technologies, the field of systems biology has amassed an abundance of omics data, quantifying thousands of cellular components across a variety of scales, ranging from mRNA transcript levels to metabolite quantities. Methods are needed not only to integrate this omics data but also to use it to heighten the predictive capabilities of computational models. Several recent studies have successfully demonstrated how flux balance analysis (FBA), a constraint-based modeling approach, can be used to integrate transcriptomic data into genome-scale metabolic network reconstructions to generate predictive computational models. In this review, we summarize such FBA-based methods for integrating expression data into genome-scale metabolic network reconstructions, highlighting their advantages as well as their limitations.
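
    For concreteness, here is a minimal Python sketch of the core FBA step that such methods build on; the toy network, bounds, and objective are made-up illustrations, and expression-integration methods typically act by tightening the flux bounds or modifying the objective.

```python
# Hedged sketch of plain flux balance analysis: maximize a target flux
# subject to steady state (S v = 0) and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (metabolites x reactions) for the toy chain
# v1: -> A,  v2: A -> B,  v3: B -> (biomass)
S = np.array([[1, -1,  0],
              [0,  1, -1]])
bounds = [(0, 10), (0, 10), (0, 10)]  # e.g. narrowed using expression data
c = np.array([0, 0, -1])              # maximize v3 (linprog minimizes)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", -res.fun, "fluxes:", res.x)
```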

  9. Dynamic Reactive Power Compensation of Large Scale Wind Integrated Power System

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2015-01-01

    Due to the progressive displacement of conventional power plants by wind turbines, the dynamic security of large scale wind integrated power systems gets significantly compromised. In this paper we first highlight the importance of dynamic reactive power support/voltage security in large scale wind integrated power systems with the least presence of conventional power plants. Then we propose a mixed integer dynamic optimization based method for optimal dynamic reactive power allocation in large scale wind integrated power systems. One of the important aspects of the proposed methodology is that, unlike... i) wind turbines, especially wind farms with additional grid support functionalities like dynamic support (e.g. dynamic reactive power support), and ii) refurbishment of existing conventional central power plants to synchronous condensers, could be one of the efficient, reliable and cost effective options...

  10. Wafer-scale integrated micro-supercapacitors on an ultrathin and highly flexible biomedical platform.

    Science.gov (United States)

    Maeng, Jimin; Meng, Chuizhou; Irazoqui, Pedro P

    2015-02-01

    We present wafer-scale integrated micro-supercapacitors on an ultrathin and highly flexible parylene platform, as progress toward sustainably powering biomedical microsystems suitable for implantable and wearable applications. All-solid-state, low-profile micro-supercapacitors are formed on an ultrathin (~20 μm) freestanding parylene film by a wafer-scale parylene packaging process, in combination with a polyaniline (PANI) nanowire growth technique assisted by surface plasma treatment. These micro-supercapacitors are highly flexible and shown to be resilient toward flexural stress. Further, direct integration of micro-supercapacitors into a radio frequency (RF) rectifying circuit is achieved on a single parylene platform, yielding a complete RF energy harvesting microsystem. The system discharging rate is shown to improve by ~17 times in the presence of the integrated micro-supercapacitors. This result suggests that the integrated micro-supercapacitor technology described herein is a promising strategy for sustainably powering biomedical microsystems dedicated to implantable and wearable applications.

  11. Effect of Integrated Pest Management Training on Ugandan Small-Scale Farmers

    DEFF Research Database (Denmark)

    Clausen, Anna Sabine; Jørs, Erik; Atuhaire, Aggrey

    2017-01-01

    Small-scale farmers in developing countries use hazardous pesticides while taking few or no safety measures. Farmer field schools (FFSs) teaching integrated pest management (IPM) have been shown to reduce pesticide use among trained farmers. This cross-sectional study compares pesticide-related knowledge ... and self-reported symptoms. The study supports IPM as a method to reduce pesticide use and potential exposure and to improve pesticide-related KAP among small-scale farmers in developing countries.

  12. Measurements of Turbulent Flame Speed and Integral Length Scales in a Lean Stationary Premixed Flame

    OpenAIRE

    Klingmann, Jens; Johansson, Bengt

    1998-01-01

    Turbulent premixed natural gas-air flame velocities have been measured in a stationary axisymmetric burner using LDA. The flame was stabilized by letting the flow retard toward a stagnation plate downstream of the burner exit. Turbulence was generated by letting the flow pass through a plate with drilled holes. Three different hole diameters were used, 3, 6 and 10 mm, in order to achieve different turbulent length scales. Turbulent integral length scales were measured using two-point LDA...

  13. How Does Scale of Implementation Impact the Environmental Sustainability of Wastewater Treatment Integrated with Resource Recovery?

    Science.gov (United States)

    Cornejo, Pablo K; Zhang, Qiong; Mihelcic, James R

    2016-07-05

    The energy and resource consumption required to treat and transport wastewater has led to efforts to improve the environmental sustainability of wastewater treatment plants (WWTPs). Resource recovery can reduce the environmental impact of these systems; however, limited research has considered how the scale of implementation impacts the sustainability of WWTPs integrated with resource recovery. Accordingly, this research uses life cycle assessment (LCA) to evaluate how the scale of implementation impacts the environmental sustainability of wastewater treatment integrated with water reuse, energy recovery, and nutrient recycling. Three systems were selected: a septic tank with aerobic treatment at the household scale, an advanced water reclamation facility at the community scale, and an advanced water reclamation facility at the city scale. Three sustainability indicators were considered: embodied energy, carbon footprint, and eutrophication potential. This study determined that, as with economies of scale, there are benefits to centralization of WWTPs with resource recovery in terms of embodied energy and carbon footprint; however, the community scale was shown to have the lowest eutrophication potential. Additionally, technology selection, nutrient control practices, system layout, and topographical conditions may have a larger impact on environmental sustainability than the implementation scale in some cases.

  14. Design Implementation and Testing of a VLSI High Performance ASIC for Extracting the Phase of a Complex Signal

    National Research Council Canada - National Science Library

    Altmeyer, Ronald

    2002-01-01

    This thesis documents the research, circuit design, and simulation testing of a VLSI ASIC which extracts phase angle information from a complex sampled signal using the arctangent relationship φ = tan⁻¹(Q/I).
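
    A hedged software analogue of the ASIC's function follows, using the four-quadrant arctangent; the hardware itself would typically use a CORDIC pipeline or a lookup table, which is not reproduced here.

```python
# Recover the phase of a complex sample from its I/Q components.
import math

def phase(i, q):
    """Return phi = atan2(Q, I) in radians, covering all four quadrants."""
    return math.atan2(q, i)

print(phase(1.0, 1.0))   # pi/4
print(phase(-1.0, 1.0))  # 3*pi/4
```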

  15. Design and implementation of interface units for high speed fiber optics local area networks and broadband integrated services digital networks

    Science.gov (United States)

    Tobagi, Fouad A.; Dalgic, Ismail; Pang, Joseph

    1990-01-01

    The design and implementation of interface units for high speed Fiber Optic Local Area Networks and Broadband Integrated Services Digital Networks are discussed. In recent years, a number of network adapters designed to support high speed communications have emerged. The approach taken to the design of a high speed network interface unit was to implement packet processing functions in hardware, using VLSI technology. The VLSI hardware implementation of a buffer management unit, which is required in such architectures, is described.

  16. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DEFF Research Database (Denmark)

    King, Zachary A.; Lu, Justin; Dräger, Andreas

    2016-01-01

    Genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories...

  17. Integrated Graduate and Continuing Education in Protein Chromatography for Bioprocess Development and Scale-Up

    Science.gov (United States)

    Carta, Jungbauer

    2011-01-01

    We describe an intensive course that integrates graduate and continuing education focused on the development and scale-up of chromatography processes used for the recovery and purification of proteins with special emphasis on biotherapeutics. The course includes lectures, laboratories, teamwork, and a design exercise and offers a complete view of…

  18. Some effects of integrated production planning in large-scale kitchens

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Jacobsen, Peter

    2005-01-01

    Integrated production planning in large-scale kitchens proves advantageous for increasing the overall quality of the food produced and the flexibility in terms of a diverse food supply. The aim is to increase the flexibility and the variability in the production as well as the focus on freshness ...

  19. The impact of continuous integration on other software development practices: a large-scale empirical study

    NARCIS (Netherlands)

    Zhao, Y.; Serebrenik, A.; Zhou, Y.; Filkov, V.; Vasilescu, B.N.

    2017-01-01

    Continuous Integration (CI) has become a disruptive innovation in software development: with proper tool support and adoption, positive effects have been demonstrated for pull request throughput and scaling up of project sizes. As with any other innovation, adopting CI implies adapting existing practices...

  20. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    Science.gov (United States)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  1. Adequacy of power-to-volume scaling philosophy to simulate natural circulation in Integral Test Facilities

    International Nuclear Information System (INIS)

    Nayak, A.K.; Vijayan, P.K.; Saha, D.; Venkat Raj, V.; Aritomi, Masanori

    1998-01-01

    Theoretical and experimental investigations were carried out to study the adequacy of the power-to-volume scaling philosophy for the simulation of natural circulation and to establish the scaling philosophy applicable to the design of the Integral Test Facility (ITF-AHWR) for the Indian Advanced Heavy Water Reactor (AHWR). The results indicate that a reduction in the flow channel diameter of the scaled facility, as required by the power-to-volume scaling philosophy, may affect the simulation of the natural circulation behaviour of the prototype plants. This is caused by distortions due to the inability to simulate the frictional resistance of the scaled facility. Hence, it is recommended that the flow channel diameter of the scaled facility be as close as possible to that of the prototype. This was verified by comparing the natural circulation behaviour of a prototype 220 MWe Indian PHWR and its scaled facility (FISBE-1), designed based on the power-to-volume scaling philosophy. It is suggested from examinations using a mathematical model and a computer code that FISBE-1 simulates the steady state and the general trend of the transient natural circulation behaviour of the prototype reactor adequately. Finally, the proposed scaling method was applied to the design of the ITF-AHWR. (author)
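
    As a rough sketch of the ratios involved (the standard power-to-volume scaling assumptions of preserved elevations, fluid conditions, and real-time behaviour; the authors' detailed analysis is not reproduced), with subscripts m and p for model and prototype:

```latex
\begin{align}
  \frac{P_m}{P_p} = \frac{V_m}{V_p} = \frac{A_m}{A_p},
  \qquad H_m = H_p, \qquad t_m = t_p,
\end{align}
```

    so flow areas, and hence channel diameters, shrink with the volume ratio, which is why the frictional resistance of the small-diameter scaled channels becomes distorted, as noted above.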

  2. Towards an integrated multiscale simulation of turbulent clouds on PetaScale computers

    International Nuclear Information System (INIS)

    Wang Lianping; Ayala, Orlando; Parishani, Hossein; Gao, Guang R; Kambhamettu, Chandra; Li Xiaoming; Rossi, Louis; Orozco, Daniel; Torres, Claudio; Grabowski, Wojciech W; Wyszogrodzki, Andrzej A; Piotrowski, Zbigniew

    2011-01-01

    The development of precipitating warm clouds is affected by several effects of small-scale air turbulence, including enhancement of the droplet-droplet collision rate by turbulence, entrainment and mixing at the cloud edges, and coupling of mechanical and thermal energies at various scales. Large-scale computation is a viable research tool for quantifying these multiscale processes. Specifically, top-down large-eddy simulations (LES) of shallow convective clouds typically resolve the scales of turbulent energy-containing eddies, while the effects of the turbulent cascade toward viscous dissipation are parameterized. Bottom-up hybrid direct numerical simulations (HDNS) of cloud microphysical processes fully resolve the dissipation-range flow scales but only partially the inertial subrange scales. It is desirable to systematically decrease the grid length in LES and increase the domain size in HDNS so that they can be better integrated to address the full range of scales and their coupling. In this paper, we discuss computational issues and physical modeling questions in expanding the ranges of scales realizable in LES and HDNS, and in bridging LES and HDNS. We review our ongoing efforts in transforming our simulation codes towards PetaScale computing, in improving physical representations in LES and HDNS, and in developing better methods to analyze and interpret the simulation results.

  3. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multiple effects among the wind turbine units and certain assumptions, the incoming wind flow model of multiple units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed using practical wind speed measurement data. The characteristics of a large-scale wind farm are also discussed.
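
    A minimal Python sketch of an integral farm-output model follows; the power curve, the cut-in/rated/cut-out speeds, and the uniform wake factor are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: aggregate per-turbine output over measured wind speeds,
# scaled by the number of units and a mean wake-loss factor.
import numpy as np

def turbine_power(v, rated_kw=2000.0, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Simplified power curve: cubic ramp between cut-in and rated speed."""
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return rated_kw
    return rated_kw * ((v - v_in) / (v_rated - v_in)) ** 3

def farm_output(wind_speeds, n_units=50, wake_factor=0.92):
    """Integral farm output: mean per-unit power x units x wake loss."""
    mean_unit_kw = sum(turbine_power(v) for v in wind_speeds) / len(wind_speeds)
    return mean_unit_kw * n_units * wake_factor

measured = np.random.default_rng(1).weibull(2.0, 1000) * 8.0  # toy speed data
print("expected farm output: %.0f kW" % farm_output(measured))
```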

  4. Cross-scale phenological data integration to benefit resource management and monitoring

    Science.gov (United States)

    Richardson, Andrew D.; Weltzin, Jake F.; Morisette, Jeffrey T.

    2017-01-01

    Climate change is presenting new challenges for natural resource managers charged with maintaining sustainable ecosystems and landscapes. Phenology, a branch of science dealing with seasonal natural phenomena (bird migration or plant flowering in response to weather changes, for example), bridges the gap between the biosphere and the climate system. Phenological processes operate across scales that span orders of magnitude, from leaf to globe and from days to seasons, making phenology ideally suited to multiscale, multiplatform data integration and delivery of information at spatial and temporal scales suitable to inform resource management decisions. This workshop report covers a workshop held in June 2016 to investigate the opportunities and challenges facing multi-scale, multi-platform integration of phenological data to support natural resource management decision-making.

  5. An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard; Moeglein, William AM; Newby, Deborah T.; Venteris, Erik R.; Wigmosta, Mark S.

    2014-06-19

    Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain, from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF), an integrated multi-scale modeling, analysis, and data management suite, to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and by determining the site-specific "optimum" facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion gallons/year in the southeastern U.S., and the results indicate that costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.

  6. TIGER: Toolbox for integrating genome-scale metabolic models, expression data, and transcriptional regulatory networks

    Directory of Open Access Journals (Sweden)

    Jensen Paul A

    2011-09-01

    Background: Several methods have been developed for analyzing genome-scale models of metabolism and transcriptional regulation. Many of these methods, such as Flux Balance Analysis, use constrained optimization to predict relationships between metabolic flux and the genes that encode and regulate enzyme activity. Recently, mixed integer programming has been used to encode these gene-protein-reaction (GPR) relationships into a single optimization problem, but these techniques are often of limited generality and lack a tool for automating the conversion of rules to a coupled regulatory/metabolic model. Results: We present TIGER, a Toolbox for Integrating Genome-scale Metabolism, Expression, and Regulation. TIGER converts a series of generalized, Boolean or multilevel rules into a set of mixed integer inequalities. The package also includes implementations of existing algorithms to integrate high-throughput expression data with genome-scale models of metabolism and transcriptional regulation. We demonstrate how TIGER automates the coupling of a genome-scale metabolic model with GPR logic and models of transcriptional regulation, thereby serving as a platform for algorithm development and large-scale metabolic analysis. Additionally, we demonstrate how TIGER's algorithms can be used to identify inconsistencies and improve existing models of transcriptional regulation, with examples from the reconstructed transcriptional regulatory network of Saccharomyces cerevisiae. Conclusion: The TIGER package provides a consistent platform for algorithm development and for extending existing genome-scale metabolic models with regulatory networks and high-throughput data.
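
    To illustrate the kind of rule-to-inequality conversion that TIGER automates, here is a Python sketch of a standard indicator encoding of one Boolean GPR rule; this is a textbook construction, not TIGER's actual implementation.

```python
# Hedged sketch. Rule: reaction R is available iff  gA AND (gB OR gC).
# Binary variables gA, gB, gC, y_or, y_R; mixed integer inequalities:
#   OR : y_or >= gB,  y_or >= gC,  y_or <= gB + gC
#   AND: y_R  <= gA,  y_R  <= y_or, y_R >= gA + y_or - 1
# The flux is then coupled to y_R via  -M*y_R <= v_R <= M*y_R.
def encode_ok(gA, gB, gC, y_or, y_R):
    """Check a candidate 0/1 assignment against the inequalities above."""
    return (y_or >= gB and y_or >= gC and y_or <= gB + gC and
            y_R <= gA and y_R <= y_or and y_R >= gA + y_or - 1)

# Verify the encoding reproduces the Boolean rule for all gene states.
for gA in (0, 1):
    for gB in (0, 1):
        for gC in (0, 1):
            want = gA and (gB or gC)
            feasible = [(y_or, y_R) for y_or in (0, 1) for y_R in (0, 1)
                        if encode_ok(gA, gB, gC, y_or, y_R)]
            assert all(y_R == want for _, y_R in feasible)
print("indicator encoding matches the GPR rule on all inputs")
```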

  7. Integrating Traditional Ecological Knowledge and Ecological Science: a Question of Scale

    Directory of Open Access Journals (Sweden)

    Catherine A. Gagnon

    2009-12-01

    The benefits and challenges of integrating traditional ecological knowledge and scientific knowledge have led to extensive discussions over the past decades, but much work is still needed to facilitate the articulation and co-application of these two types of knowledge. Through two case studies, we examined the integration of traditional ecological knowledge and scientific knowledge by emphasizing their complementarity across spatial and temporal scales. We expected that combining Inuit traditional ecological knowledge and scientific knowledge would expand the spatial and temporal scales of currently documented knowledge on the arctic fox (Vulpes lagopus) and the greater snow goose (Chen caerulescens atlantica), two important tundra species. Using participatory approaches in Mittimatalik (also known as Pond Inlet), Nunavut, Canada, we documented traditional ecological knowledge about these species and found that it did, in fact, expand the spatial and temporal scales of current scientific knowledge of local arctic fox ecology. However, the benefits were not as apparent for snow goose ecology, probably because of the similar spatial and temporal observational scales of the two types of knowledge for this species. Comparing sources of knowledge at similar scales allowed us to gain confidence in our conclusions and to identify areas of disagreement that should be studied further. Emphasizing complementarities across scales was more powerful for generating new insights and hypotheses. We conclude that determining the scales of the observations that form the basis for traditional ecological knowledge and scientific knowledge represents a critical step when evaluating the benefits of integrating these two types of knowledge. This is also critical when examining the congruence or contrast between the two types of knowledge for a given subject.

  8. An Integrative Bioinformatics Framework for Genome-scale Multiple Level Network Reconstruction of Rice

    Directory of Open Access Journals (Sweden)

    Liu Lili

    2013-06-01

    Understanding how metabolic reactions translate the genome of an organism into its phenotype is a grand challenge in biology. Genome-wide association studies (GWAS) statistically connect genotypes to phenotypes, without any recourse to known molecular interactions, whereas a molecular mechanistic description ties gene function to phenotype through gene regulatory networks (GRNs), protein-protein interactions (PPIs) and molecular pathways. Integration of the different regulatory information levels of an organism is expected to provide a good way of mapping genotypes to phenotypes. However, the lack of a curated metabolic model of rice is blocking the exploration of genome-scale multi-level network reconstruction. Here, we have merged GRN, PPI and genome-scale metabolic network (GSMN) approaches into a single framework for rice via the reconstruction and integration of omics regulatory information. Firstly, we reconstructed a genome-scale metabolic model containing 4,462 functional genes and 2,986 metabolites involved in 3,316 reactions, compartmentalized into ten subcellular locations. Furthermore, 90,358 pairs of protein-protein interactions, 662,936 pairs of gene regulations and 1,763 microRNA-target interactions were integrated into the metabolic model. Eventually, a database was developed for systematically storing and retrieving the genome-scale multi-level network of rice. This provides a reference for understanding the genotype-phenotype relationship of rice and for analysis of its molecular regulatory network.

  9. Role of two insect growth regulators in integrated pest management of citrus scales.

    Science.gov (United States)

    Grafton-Cardwell, E E; Lee, J E; Stewart, J R; Olsen, K D

    2006-06-01

    Portions of two commercial citrus orchards were treated for two consecutive years with buprofezin or three consecutive years with pyriproxyfen in a replicated plot design to determine the long-term impact of these insect growth regulators (IGRs) on the San Joaquin Valley California integrated pest management program. Pyriproxyfen reduced the target pest, California red scale, Aonidiella aurantii Maskell, to nondetectable levels on leaf samples approximately 4 mo after treatment. Pyriproxyfen treatments reduced the California red scale parasitoid Aphytis melinus DeBach to a greater extent than the parasitoid Comperiella bifasciata Howard collected on sticky cards. Treatments of lemons Citrus limon (L.) Burm. f. infested with scale parasitized by A. melinus showed only 33% direct mortality of the parasitoid, suggesting the population reduction observed on sticky cards was due to low host density. Three years of pyriproxyfen treatments did not maintain citricola scale, Coccus pseudomagnoliarum (Kuwana), below the treatment threshold and cottony cushion scale, Icerya purchasi Maskell, was slowly but incompletely controlled. Buprofezin reduced California red scale to very low but detectable levels approximately 5 mo after treatment. Buprofezin treatments resulted in similar levels of reduction of the two parasitoids A. melinus and C. bifasciata collected on sticky cards. Treatments of lemons infested with scale parasitized by A. melinus showed only 7% mortality of the parasitoids, suggesting the population reduction observed on sticky cards was due to low host density. Citricola scale was not present in this orchard, and cottony cushion scale was slowly and incompletely controlled by buprofezin. These field plots demonstrated that IGRs can act as organophosphate insecticide replacements for California red scale control; however, their narrower spectrum of activity and disruption of coccinellid beetles can allow other scale species to attain primary pest status.

  10. Root Systems Biology: Integrative Modeling across Scales, from Gene Regulatory Networks to the Rhizosphere

    Science.gov (United States)

    Hill, Kristine; Porco, Silvana; Lobet, Guillaume; Zappala, Susan; Mooney, Sacha; Draye, Xavier; Bennett, Malcolm J.

    2013-01-01

    Genetic and genomic approaches in model organisms have advanced our understanding of root biology over the last decade. Recently, however, systems biology and modeling have emerged as important approaches, as our understanding of root regulatory pathways has become more complex and interpreting pathway outputs has become less intuitive. To relate root genotype to phenotype, we must move beyond the examination of interactions at the genetic network scale and employ multiscale modeling approaches to predict emergent properties at the tissue, organ, organism, and rhizosphere scales. Understanding the underlying biological mechanisms and the complex interplay between systems at these different scales requires an integrative approach. Here, we describe examples of such approaches and discuss the merits of developing models to span multiple scales, from network to population levels, and to address dynamic interactions between plants and their environment. PMID:24143806

  11. Hypersingular integral equations, waveguiding effects in Cantorian Universe and genesis of large scale structures

    International Nuclear Information System (INIS)

    Iovane, G.; Giordano, P.

    2005-01-01

    In this work we introduce hypersingular integral equations and analyze a realistic model of gravitational waveguides on a Cantorian space-time. A waveguiding effect is considered with respect to the large-scale structure of the Universe, where structure formation appears as if it were a classically self-similar random process at all astrophysical scales. The result is that we seem to live in an El Naschie ε(∞) Cantorian space-time, where gravitational lensing and waveguiding effects can explain the appearance of the Universe. In particular, we consider filamentary and planar large-scale structures as possible refraction channels for electromagnetic radiation coming from cosmological structures. In this picture, supported by three numerical simulations, the Universe appears like a large set of self-similar adaptive mirrors. Consequently, an infinite Universe is just an optical illusion produced by mirroring effects connected with the large-scale structure of a finite, not large, Universe.

  12. Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

    Science.gov (United States)

    Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius

    2018-06-01

    Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs. visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as by multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Large-scale building integrated photovoltaics field trial. First technical report - installation phase

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    This report summarises the results of the first eighteen months of the Large-Scale Building Integrated Photovoltaic Field Trial focussing on technical aspects. The project aims included increasing awareness and application of the technology, raising the UK capabilities in application of the technology, and assessing the potential for building integrated photovoltaics (BIPV). Details are given of technology choices; project organisation, cost, and status; and the evaluation criteria. Installations of BIPV described include University buildings, commercial centres, and a sports stadium, wildlife park, church hall, and district council building. Lessons learnt are discussed, and a further report covering monitoring aspects is planned.

  14. An analog VLSI real time optical character recognition system based on a neural architecture

    International Nuclear Information System (INIS)

    Bo, G.; Caviglia, D.; Valle, M.

    1999-01-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  15. An analog VLSI real time optical character recognition system based on a neural architecture

    Energy Technology Data Exchange (ETDEWEB)

    Bo, G.; Caviglia, D.; Valle, M. [Genoa Univ. (Italy). Dip. of Biophysical and Electronic Engineering

    1999-03-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  16. International Conference on VLSI, Communication, Advanced Devices, Signals & Systems and Networking

    CERN Document Server

    Shirur, Yasha; Prasad, Rekha

    2013-01-01

    This book is a collection of papers presented by renowned researchers, keynote speakers and academicians in the International Conference on VLSI, Communication, Analog Designs, Signals and Systems, and Networking (VCASAN-2013), organized by B.N.M. Institute of Technology, Bangalore, India during July 17-19, 2013. The book provides global trends in cutting-edge technologies in electronics and communication engineering. The content of the book is useful to engineers, researchers and academicians as well as industry professionals.

  17. IoT European Large-Scale Pilots – Integration, Experimentation and Testing

    OpenAIRE

    Guillén, Sergio Gustavo; Sala, Pilar; Fico, Giuseppe; Arredondo, Maria Teresa; Cano, Alicia; Posada, Jorge; Gutierrez, Germán; Palau, Carlos; Votis, Konstantinos; Verdouw, Cor N.; Wolfert, Sjaak; Beers, George; Sundmaeker, Harald; Chatzikostas, Grigoris; Ziegler, Sébastien

    2017-01-01

    The IoT European Large-Scale Pilots Programme includes the innovation consortia that are collaborating to foster the deployment of IoT solutions in Europe through the integration of advanced IoT technologies across the value chain, demonstration of multiple IoT applications at scale and in a usage context, and as close as possible to operational conditions. The programme projects are targeted, goal-driven initiatives that propose IoT approaches to specific real-life industrial/societal challenges ...

  18. The MIRAGE project: large scale radionuclide transport investigations and integral migration experiments

    International Nuclear Information System (INIS)

    Come, B.; Bidoglio, G.; Chapman, N.

    1986-01-01

    Predictions of radionuclide migration through the geosphere must be supported by large-scale, long-term investigations. Several research areas of the MIRAGE Project are devoted to acquiring reliable data for developing and validating models. Apart from man-made migration experiments in boreholes and/or underground galleries, attention is paid to natural geological migration systems which have been active for very long time spans. The potential role of microbial activity, either resident or introduced into the host media, is also considered. In order to clarify basic mechanisms, smaller scale "integral" migration experiments under fully controlled laboratory conditions are also carried out using real waste forms and representative geological media. (author)

  19. The influence of a scaled boundary response on integral system transient behavior

    International Nuclear Information System (INIS)

    Dimenna, R.A.; Kullberg, C.M.

    1989-01-01

    Scaling relationships associated with the thermal-hydraulic response of a closed-loop system are applied to a calculational assessment of a feed-and-bleed recovery in a nuclear reactor integral effects test. The analysis demonstrates both the influence of scale on the system response and the ability of the thermal-hydraulics code to represent those effects. The qualitative response of the fluid is shown to be coupled to the behavior of the bounding walls through the energy equation. The results of the analysis described in this paper influence the determination of computer code applicability. The sensitivity of the code response to scaling variations introduced in the analysis is found to be appropriate with respect to scaling criteria determined from the scaling literature. Differences in the system response associated with different scaling criteria are found to be plausible and easily explained using well-known principles of heat transfer. Therefore, it is concluded that RELAP5/MOD2 can adequately represent the scaled effects of heat transfer boundary conditions on the thermal-hydraulic calculations through the mechanism of communicating walls. The results of the analysis also serve to clarify certain aspects of experiment and facility design.

  20. The prospect of modern thermomechanics in structural integrity calculations of large-scale pressure vessels

    Science.gov (United States)

    Fekete, Tamás

    2018-05-01

    Structural integrity calculations play a crucial role in the design of large-scale pressure vessels. Used in the electric power generation industry, these kinds of vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently, a new scientific engineering paradigm, structural integrity, has been developing that is essentially a synergistic collaboration between a number of scientific and engineering disciplines, modeling, experiments and numerics. Although the application of the structural integrity paradigm has contributed greatly to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is due to the fact that existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which is essentially based on the weakly coupled model of thermomechanics and fracture mechanics. Recently, research has been initiated at MTA EK with the aim of reviewing and evaluating current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well ...

  1. VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2016-06-01

    Convolutional encoding and data decoding are fundamental processes in convolutional error correction. One of the most popular decoding methods is the Viterbi algorithm, which is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity and configurability. In this research, we propose a VLSI architecture for a configurable, low-complexity design of a hard-decision Viterbi decoding algorithm. Configurability and low complexity are achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs simply by changing the trace-back parameter value. Its computational process needs only N + 2 clock cycles of latency, where N is the number of trace-backs. Its configurability has been proven for N = 8, N = 16, N = 32 and N = 64. Furthermore, the proposed design was synthesized and evaluated on Xilinx and Altera FPGA target boards for area consumption and speed performance.
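
    As a rough illustration of the algorithm such architectures accelerate, the following sketch implements hard-decision Viterbi decoding in plain Python: add-compare-select over a small trellis, followed by a full trace-back. The rate-1/2 code with generators (7, 5) octal and the test message are illustrative assumptions, not the paper's configuration; the paper's contribution is the hardware architecture, not the algorithm itself.

    ```python
    # Minimal hard-decision Viterbi decoder (software sketch, not the paper's
    # VLSI architecture). Rate 1/2, constraint length K = 3, generators (7, 5)
    # octal are assumptions for illustration.
    G = (0b111, 0b101)           # generator polynomials
    K = 3                        # constraint length
    NSTATES = 1 << (K - 1)       # 4 trellis states

    def parity(x):
        return bin(x).count("1") & 1

    def encode(bits):
        """Encode a bit sequence; returns a list of (c0, c1) hard symbols."""
        state, out = 0, []
        for b in bits:
            reg = (b << (K - 1)) | state          # newest bit enters the MSB
            out.append((parity(reg & G[0]), parity(reg & G[1])))
            state = reg >> 1
        return out

    def viterbi_decode(symbols):
        """Add-compare-select over the trellis, then trace back survivors."""
        INF = float("inf")
        metric = [0] + [INF] * (NSTATES - 1)      # start in the all-zero state
        history = []
        for r0, r1 in symbols:
            new_metric = [INF] * NSTATES
            survivor = [None] * NSTATES
            for s in range(NSTATES):
                if metric[s] == INF:
                    continue
                for b in (0, 1):                  # hypothesised input bit
                    reg = (b << (K - 1)) | s
                    nxt = reg >> 1
                    d = (parity(reg & G[0]) != r0) + (parity(reg & G[1]) != r1)
                    if metric[s] + d < new_metric[nxt]:    # compare-select
                        new_metric[nxt] = metric[s] + d
                        survivor[nxt] = (s, b)
            metric = new_metric
            history.append(survivor)
        state = min(range(NSTATES), key=lambda s: metric[s])
        bits = []
        for survivor in reversed(history):        # trace-back
            state, b = survivor[state]
            bits.append(b)
        return bits[::-1]

    msg = [1, 0, 1, 1, 0, 0]                      # last K-1 zeros flush the encoder
    assert viterbi_decode(encode(msg)) == msg
    ```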

  2. Progress on scaling up integrated services for sexual and reproductive health and HIV.

    Science.gov (United States)

    Dickinson, Clare; Attawell, Kathy; Druce, Nel

    2009-11-01

    This paper considers new developments to strengthen sexual and reproductive health and HIV linkages and discusses factors that continue to impede progress. It is based on a previous review undertaken for the United Kingdom Department for International Development in 2006 that examined the constraints on, and opportunities for, scaling up these linkages. We argue that, despite growing evidence that linking sexual and reproductive health and HIV is feasible and beneficial, few countries have achieved significant scale-up of integrated service provision. A lack of common understanding of terminology, a lack of clear technical operational guidance, and separate policy, institutional and financing processes continue to represent significant constraints. We draw on experience with tuberculosis and HIV integration to highlight some lessons. The paper concludes that there is little evidence to determine whether funding for health systems is strengthening linkages, and we make several recommendations to maximize the opportunities presented by recent developments.

  3. Genome scale models of yeast: towards standardized evaluation and consistent omic integration

    DEFF Research Database (Denmark)

    Sanchez, Benjamin J.; Nielsen, Jens

    2015-01-01

    Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism, and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published and are currently used ... in which all levels of omics data (from gene expression to flux) have been integrated in yeast GEMs. Relevant conclusions and current challenges for both GEM evaluation and omic integration are highlighted.

  4. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    Science.gov (United States)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components exist widely in the advanced manufacturing industry, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced, and a mathematical model is established for the global data fusion. Subsequently, a flexible and robust method and mechanism are introduced for establishing the end coordinate system. Based on this method, a virtual robot model is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is then solved. A validation experiment was implemented to verify the proposed algorithms. First, the hand-eye transformation matrix was solved; then a car body rear was measured 16 times to verify the global data fusion algorithm, and its 3D shape was reconstructed successfully.
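
    The global data fusion step described above amounts to chaining homogeneous transforms so that every local scan lands in the laser tracker's world frame. The sketch below illustrates that chain; the matrix names and numeric values are stand-in assumptions, not the paper's calibration results.

    ```python
    # Sketch of the global data fusion: local scans are mapped into the laser
    # tracker's world frame by chaining homogeneous transforms. Matrix names
    # and numeric values are stand-in assumptions, not the paper's calibration.
    import numpy as np

    def homogeneous(R, t):
        """Build a 4x4 homogeneous transform from rotation R and translation t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def fuse_scan(points_scanner, T_world_end, T_end_scanner):
        """Map Nx3 scanner-frame points into the world (tracker) frame."""
        T = T_world_end @ T_end_scanner        # chain: scanner -> end -> world
        pts_h = np.hstack([points_scanner, np.ones((len(points_scanner), 1))])
        return (T @ pts_h.T).T[:, :3]

    # Example: identity hand-eye calibration, robot end at (1, 0, 0.5) m
    T_end_scanner = homogeneous(np.eye(3), np.zeros(3))    # from hand-eye calibration
    T_world_end = homogeneous(np.eye(3), np.array([1.0, 0.0, 0.5]))
    scan = np.array([[0.0, 0.0, 0.3], [0.01, 0.0, 0.3]])
    print(fuse_scan(scan, T_world_end, T_end_scanner))
    ```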

  5. At the Nexus of History, Ecology, and Hydrobiogeochemistry: Improved Predictions across Scales through Integration.

    Science.gov (United States)

    Stegen, James C

    2018-01-01

    To improve predictions of ecosystem function in future environments, we need to integrate the ecological and environmental histories experienced by microbial communities with hydrobiogeochemistry across scales. A key issue is whether we can derive generalizable scaling relationships that describe this multiscale integration. There is a strong foundation for addressing these challenges. We have the ability to infer ecological history with null models and reveal impacts of environmental history through laboratory and field experimentation. Recent developments also provide opportunities to inform ecosystem models with targeted omics data. A major next step is coupling knowledge derived from such studies with multiscale modeling frameworks that are predictive under non-steady-state conditions. This is particularly true for systems spanning dynamic interfaces, which are often hot spots of hydrobiogeochemical function. We can advance predictive capabilities through a holistic perspective focused on the nexus of history, ecology, and hydrobiogeochemistry.

  6. Economies of scale and vertical integration in the investor-owned electric utility industry

    International Nuclear Information System (INIS)

    Thompson, H.G.; Islam, M.; Rose, K.

    1996-01-01

    This report analyzes the nature of costs in a vertically integrated electric utility. The findings provide new insights into the operations of the vertically integrated electric utility and support earlier research on economies of scale and density; the results also provide insights for policy makers dealing with electric industry restructuring issues such as competitive structure and mergers. Overall, the results indicate that for most firms in the industry, average costs would not be reduced through expansion of generation, numbers of customers, or the delivery system. Evidently, the combination of benefits from large-scale technologies, managerial experience, coordination, or load diversity has been exhausted by the larger firms in the industry; however, many firms would benefit from reducing their generation-to-sales ratio and from increasing sales to their existing customer base. Three cost models were used in the analysis.

  7. GIGGLE: a search engine for large-scale integrated genome analysis

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-01-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation. PMID:29309061

  8. GIGGLE: a search engine for large-scale integrated genome analysis.

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-02-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation.
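
    At its core, the query GIGGLE accelerates is "how many intervals in each indexed file overlap my query regions". The naive baseline below shows the semantics of that overlap count using sorted lists and binary search; GIGGLE's actual unified index and significance ranking are far more elaborate, so treat this purely as an illustration.

    ```python
    # Naive baseline for the operation GIGGLE indexes: counting how many
    # intervals in a file overlap a query region. GIGGLE's unified index and
    # significance ranking are far more elaborate; this only shows the
    # half-open overlap semantics with sorted lists and binary search.
    import bisect

    def count_overlaps(query, intervals):
        """Count half-open intervals (start, end) overlapping the query."""
        starts = sorted(s for s, _ in intervals)
        ends = sorted(e for _, e in intervals)
        qs, qe = query
        ended_before = bisect.bisect_right(ends, qs)       # end <= query start
        started_after = len(starts) - bisect.bisect_left(starts, qe)
        return len(intervals) - ended_before - started_after

    peaks = [(100, 200), (150, 300), (400, 500)]           # assumed interval file
    print(count_overlaps((180, 420), peaks))               # -> 3
    ```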

  9. “HABITAT MAPPING” GEODATABASE, AN INTEGRATED INTERDISCIPLINARY AND MULTI-SCALE APPROACH FOR DATA MANAGEMENT

    OpenAIRE

    Grande, Valentina; Angeletti, Lorenzo; Campiani, Elisabetta; Conese, Ilaria; Foglini, Federica; Leidi, Elisa; Mercorella, Alessandra; Taviani, Marco

    2016-01-01

    Historically, a number of different key concepts and methods dealing with marine habitat classification and mapping have been developed. The EU CoCoNET project provides a new attempt at establishing an integrated approach to the definition of habitats. This scheme combines multi-scale geological and biological data; it consists of three levels (Geomorphological level, Substrate level and Biological level), which in turn are divided into several h...

  10. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    Science.gov (United States)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method reduces both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefited from this approach, through reduced development and design cycle time, include: creation of analysis models for the aerodynamic discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  11. Battery energy storage systems: Assessment for small-scale renewable energy integration

    Energy Technology Data Exchange (ETDEWEB)

    Nair, Nirmal-Kumar C.; Garimella, Niraj [Power Systems Group, Department of Electrical and Computer Engineering, The University of Auckland, 38 Princes Street, Science Centre, Auckland 1142 (New Zealand)

    2010-11-15

    Concerns arising from the variability and intermittency of renewable energy sources when integrating with the power grid can be mitigated to an extent by incorporating a storage element within the renewable energy harnessing system. Thus, battery energy storage systems (BESS) are likely to have a significant impact on the small-scale integration of renewable energy sources into commercial buildings and residential dwellings. These storage technologies not only enable improvements in consumption levels from renewable energy sources but also provide a range of technical and monetary benefits. This paper provides a modelling framework for quantifying the associated benefits of renewable resource integration, followed by an overview of various small-scale energy storage technologies. A simple, practical and comprehensive assessment of battery energy storage technologies for small-scale renewable applications, based on their technical merit and economic feasibility, is presented. Simulink and HOMER provide the platforms for the technical and economic assessments of the battery technologies, respectively. (author)

  12. Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System

    Science.gov (United States)

    Yue, Liu; Hang, Mend

    2018-01-01

    With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission is gradually becoming the main way to transmit wind power and to improve wind power availability and grid stability, but the integration of wind farms changes the sub-synchronous oscillation (SSO) damping characteristics of the synchronous generator system. Regarding the SSO problem caused by the integration of large-scale wind farms, this paper, focusing on wind farms based on doubly fed induction generators (DFIGs), summarizes the SSO mechanisms in large-scale wind power integrated systems with series compensation, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI) and sub-synchronous resonance (SSR). Then, SSO modelling and analysis methods are categorized and compared by their applicable areas. Furthermore, this paper summarizes the suppression measures of actual SSO projects based on different control objectives. Finally, research prospects in this field are explored.

  13. Integrating Expert Knowledge with Statistical Analysis for Landslide Susceptibility Assessment at Regional Scale

    Directory of Open Access Journals (Sweden)

    Christos Chalkias

    2016-03-01

    In this paper, an integrated landslide susceptibility model combining expert-based and bivariate statistical analysis (Landslide Susceptibility Index, LSI) approaches is presented. Factors related to the occurrence of landslides, such as elevation, slope angle, slope aspect, lithology, land cover, Mean Annual Precipitation (MAP) and Peak Ground Acceleration (PGA), were analyzed within a GIS environment. This integrated model produced a landslide susceptibility map which categorized the study area according to the probability of landslide occurrence. The accuracy of the final map was evaluated by Receiver Operating Characteristic (ROC) analysis based on an independent (validation) dataset of landslide events. The prediction ability was found to be 76%, revealing that the integration of statistical analysis with human expertise can provide an acceptable landslide susceptibility assessment at regional scale.
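
    A common form of the bivariate statistical index used in such LSI models assigns each factor class the log ratio of its landslide density to the overall density, and sums the weights per map cell. The sketch below uses made-up pixel counts; the paper's exact weighting scheme, which also folds in expert judgment, may differ.

    ```python
    # Sketch of a bivariate statistical-index weight of the kind used in LSI
    # mapping: each factor class gets the log ratio of its landslide density to
    # the overall density, and the LSI of a map cell is the sum across factors.
    # Pixel counts are made-up; the paper's scheme also folds in expert weights.
    import math

    def class_weight(slides_in_class, pixels_in_class, slides_total, pixels_total):
        class_density = slides_in_class / pixels_in_class
        overall_density = slides_total / pixels_total
        return math.log(class_density / overall_density)

    # A steep-slope class concentrating landslides gets a positive weight,
    # a lithology class with few landslides a negative one.
    w_steep = class_weight(120, 10_000, 300, 100_000)      # ~ +1.39
    w_litho = class_weight(30, 20_000, 300, 100_000)       # ~ -0.69
    lsi = w_steep + w_litho                                # summed per map cell
    print(round(w_steep, 2), round(w_litho, 2), round(lsi, 2))
    ```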

  14. The future of genome-scale modeling of yeast through integration of a transcriptional regulatory network

    DEFF Research Database (Denmark)

    Liu, Guodong; Marras, Antonio; Nielsen, Jens

    2014-01-01

    Metabolism is regulated at multiple levels in response to changes in internal or external conditions. Transcriptional regulation plays an important role in regulating many metabolic reactions by altering the concentrations of metabolic enzymes. Thus, integration of the transcriptional regulatory information is necessary to improve the accuracy and predictive ability of metabolic models. Here we review the strategies for the reconstruction of a transcriptional regulatory network (TRN) for yeast and the integration of such a reconstruction into a flux balance analysis-based metabolic model. While many large-scale TRN reconstructions have been reported for yeast, these reconstructions still need to be improved regarding the functionality and dynamic properties of the regulatory interactions. In addition, mathematical modeling approaches need to be further developed to efficiently integrate ...

  15. Modelling future impacts of air pollution using the multi-scale UK Integrated Assessment Model (UKIAM).

    Science.gov (United States)

    Oxley, Tim; Dore, Anthony J; ApSimon, Helen; Hall, Jane; Kryza, Maciej

    2013-11-01

    Integrated assessment modelling has evolved to support policy development in relation to air pollutants and greenhouse gases by providing integrated simulation tools able to produce quick and realistic representations of emission scenarios and their environmental impacts without the need to re-run complex atmospheric dispersion models. The UK Integrated Assessment Model (UKIAM) has been developed to investigate strategies for reducing UK emissions by bringing together information on projected UK emissions of SO2, NOx, NH3, PM10 and PM2.5, atmospheric dispersion, criteria for protection of ecosystems, urban air quality and human health, and data on potential abatement measures to reduce emissions, which may subsequently be linked to associated analyses of costs and benefits. We describe the multi-scale model structure ranging from continental to roadside, UK emission sources, atmospheric dispersion of emissions, implementation of abatement measures, integration with European-scale modelling, and environmental impacts. The model generates outputs from a national perspective which are used to evaluate alternative strategies in relation to emissions, deposition patterns, air quality metrics and ecosystem critical load exceedance. We present a selection of scenarios in relation to the 2020 Business-As-Usual projections and identify potential further reductions beyond those currently being planned. © 2013.

  16. Planar plane-wave matrix theory at the four loop order: integrability without BMN scaling

    International Nuclear Information System (INIS)

    Fischbacher, Thomas; Klose, Thomas; Plefka, Jan

    2005-01-01

    We study SU(N) plane-wave matrix theory up to fourth perturbative order in its large-N planar limit. The effective Hamiltonian in the closed su(2) subsector of the model is explicitly computed through a specially tailored computer program performing large-scale distributed symbolic algebra and generation of planar graphs; the number of graphs ran deep into the billions. The outcome of our computation establishes the four-loop integrability of the planar plane-wave matrix model. To elucidate the integrable structure we apply the recent technology of the perturbative asymptotic Bethe ansatz to our model. The resulting S-matrix turns out to be structurally similar to, but nevertheless distinct from, the long-range spin-chain S-matrices of Inozemtsev, Beisert-Dippel-Staudacher and Arutyunov-Frolov-Staudacher considered so far in the AdS/CFT context. In particular, our result displays a breakdown of BMN scaling at the four-loop order. That is, while there exists an appropriate identification of the matrix theory mass parameter with the coupling constant of the N=4 superconformal Yang-Mills theory which yields an eighth-order lattice derivative for well-separated impurities (naively implying BMN scaling), the detailed impurity contact interactions ruin this scaling property at the four-loop order. Moreover, we study the issue of 'wrapping' interactions, which show up for the first time at this loop order through a length-four Konishi descendant operator. (author)

  17. The role of large‐scale heat pumps for short term integration of renewable energy

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Blarke, Morten; Hansen, Kenneth

    2011-01-01

    In this report the role of large-scale heat pumps in a future energy system with increased renewable energy is presented. The main concepts for large heat pumps in district heating systems are outlined, along with the development of heat pump refrigerants; the development of future heat pump technologies is focusing on the natural working fluids hydrocarbons, ammonia, and carbon dioxide. Large-scale heat pumps are crucial for integrating the 50% wind power anticipated to be installed in Denmark in 2020, along with other measures. Also in the longer term, heat pumps can contribute to the minimization ... savings with increased wind power and may additionally lead to economic savings in the range of 1,500-1,700 MDKK in total in the period until 2020. Furthermore, the energy system efficiency may be increased due to large heat pumps replacing boiler production. Finally, data sheets for large-scale ammonia ...

  18. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    Science.gov (United States)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and groundwater) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate change and increasing demand due to population growth and economic development would strongly affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Because irrigation management in response to subseasonal variability in weather and crop growth differs among regions and crops, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model is based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoff simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of ...
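
    The region-specific parameter estimation mentioned above is typically done with a Metropolis-Hastings sampler. The sketch below shows the generic accept/reject loop on a trivial stand-in "crop model"; the observations, likelihood, and prior are assumptions for illustration, not the authors' setup.

    ```python
    # Generic Metropolis-Hastings sketch of the kind of MCMC calibration the
    # authors describe for region-specific parameters. The observations, the
    # trivial stand-in "crop model" and the prior are assumptions.
    import math
    import random

    obs = [5.1, 4.8, 5.4, 5.0]                 # assumed observed yields (t/ha)

    def log_posterior(theta):
        """Flat prior on (0, 10); Gaussian likelihood around the model output."""
        if not 0.0 < theta < 10.0:
            return -math.inf
        sigma = 0.3
        model_yield = theta                    # trivial stand-in crop model
        return -sum((y - model_yield) ** 2 for y in obs) / (2.0 * sigma ** 2)

    def metropolis(n_iter=5000, step=0.2, theta=2.0):
        samples, lp = [], log_posterior(theta)
        for _ in range(n_iter):
            prop = theta + random.gauss(0.0, step)
            lp_prop = log_posterior(prop)
            if math.log(random.random() + 1e-300) < lp_prop - lp:  # accept/reject
                theta, lp = prop, lp_prop
            samples.append(theta)
        return samples

    chain = metropolis()
    burned = chain[len(chain) // 2:]           # drop burn-in
    print(sum(burned) / len(burned))           # posterior mean near mean(obs)
    ```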

  19. Scale relativity theory and integrative systems biology: 2. Macroscopic quantum-type mechanics.

    Science.gov (United States)

    Nottale, Laurent; Auffray, Charles

    2008-05-01

    In these two companion papers, we provide an overview and a brief history of the multiple roots, current developments and recent advances of integrative systems biology and identify multiscale integration as its grand challenge. Then we introduce the fundamental principles and the successive steps that have been followed in the construction of the scale relativity theory, which aims at describing the effects of a non-differentiable and fractal (i.e., explicitly scale dependent) geometry of space-time. The first paper of this series was devoted, in this new framework, to the construction from first principles of scale laws of increasing complexity, and to the discussion of some tentative applications of these laws to biological systems. In this second review and perspective paper, we describe the effects induced by the internal fractal structures of trajectories on motion in standard space. Their main consequence is the transformation of classical dynamics into a generalized, quantum-like self-organized dynamics. A Schrödinger-type equation is derived as an integral of the geodesic equation in a fractal space. We then indicate how gauge fields can be constructed from a geometric re-interpretation of gauge transformations as scale transformations in fractal space-time. Finally, we introduce a new tentative development of the theory, in which quantum laws would hold also in scale space, introducing complexergy as a measure of organizational complexity. Initial possible applications of this extended framework to the processes of morphogenesis and the emergence of prokaryotic and eukaryotic cellular structures are discussed. Having founded elements of the evolutionary, developmental, biochemical and cellular theories on the first principles of scale relativity theory, we introduce proposals for the construction of an integrative theory of life and for the design and implementation of novel macroscopic quantum-type experiments and devices, and discuss their potential

  20. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    Science.gov (United States)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits the definition of the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
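
    A minimal way to picture the proposed virtual component is a wrapper that composes several data-flow stages and carries its own built-in test cases for the assembled flow. The Python sketch below is a schematic reading of that idea, not the authors' implementation; the stage functions and test data are invented.

    ```python
    # Schematic reading of the "virtual component" idea: a wrapper composing
    # several data-flow stages and carrying built-in test cases for the
    # assembled flow. Stage functions and test data are invented; this is not
    # the authors' implementation.
    from typing import Callable, List

    class VirtualComponent:
        """Composes a pipeline of stages and runs built-in integration tests."""

        def __init__(self, stages: List[Callable]):
            self.stages = stages
            self.test_cases = []              # (input, expected_output) pairs

        def process(self, data):
            for stage in self.stages:         # each stage feeds the next in the flow
                data = stage(data)
            return data

        def add_test(self, inp, expected):
            self.test_cases.append((inp, expected))

        def run_built_in_tests(self):
            return all(self.process(i) == e for i, e in self.test_cases)

    # Assumed toy flow: parse -> filter -> aggregate
    parse = lambda s: [int(x) for x in s.split(",")]
    keep_positive = lambda xs: [x for x in xs if x > 0]
    total = sum

    vc = VirtualComponent([parse, keep_positive, total])
    vc.add_test("1,-2,3", 4)
    print(vc.run_built_in_tests())            # True if the assembled flow is intact
    ```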

  1. Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector

    Science.gov (United States)

    Kumar, P.; Mishra, T.; Banerjee, R.

    2017-12-01

    India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast-emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent reduction in emission intensity (with respect to 2005 levels) along with large-scale renewable energy targets (100 GW solar, 60 GW wind, and 10 GW biomass energy by 2022) in the INDCs submitted at the Paris agreement. However, large-scale integration of renewable energy is a complex process that faces a number of problems, such as capital intensiveness, matching intermittent generation to loads with limited storage capacity, and reliability. In this context, this study assesses the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and analyzes the implications for power sector operations. The study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal- and gas-fired units discretely, with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp-rate and minimum-generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios: a base case (no RE addition), an INDC scenario (100 GW solar, 60 GW wind, 10 GW biomass), and a low-RE scenario (50 GW solar, 30 GW wind), created to analyze the implications of large-scale integration of variable renewable energy. The results provide insights into the trade-offs involved in achieving mitigation targets and the investment decisions required. The study also examines the operational reliability and flexibility requirements of the system for integrating renewables.

  2. Temporal integration and 1/f power scaling in a circuit model of cerebellar interneurons.

    Science.gov (United States)

    Maex, Reinoud; Gutkin, Boris

    2017-07-01

    Inhibitory interneurons interconnected via electrical and chemical (GABA-A receptor) synapses form extensive circuits in several brain regions. They are thought to be involved in timing and synchronization through fast feedforward control of principal neurons. Theoretical studies have shown, however, that whereas self-inhibition does indeed reduce response duration, lateral inhibition, in contrast, may generate slow response components through a process of gradual disinhibition. Here we simulated a circuit of interneurons (stellate and basket cells) of the molecular layer of the cerebellar cortex and observed circuit time constants that could rise, depending on parameter values, to >1 s. The integration time scaled both with the strength of inhibition, vanishing completely when inhibition was blocked, and with the average connection distance, which determined the balance between lateral and self-inhibition. Electrical synapses could further enhance the integration time by limiting heterogeneity among the interneurons and by introducing a slow capacitive current. The model can explain several observations, such as the slow time course of OFF-beam inhibition, the phase lag of interneurons during vestibular rotation, or the phase lead of Purkinje cells. Interestingly, the interneuron spike trains displayed power that scaled approximately as 1/f at low frequencies. In conclusion, stellate and basket cells in cerebellar cortex, and interneuron circuits in general, may not only provide fast inhibition to principal cells but also act as temporal integrators that build a very short-term memory. NEW & NOTEWORTHY The most common function attributed to inhibitory interneurons is feedforward control of principal neurons. In many brain regions, however, the interneurons are densely interconnected via both chemical and electrical synapses, but the function of this coupling is largely unknown. Based on large-scale simulations of an interneuron circuit of cerebellar cortex, we ...
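
    The 1/f claim can be checked by fitting the low-frequency slope of a log-log power spectrum. The sketch below does this on a synthetic signal constructed to have 1/f power, standing in for the simulated spike trains; the sampling step and band limits are arbitrary assumptions.

    ```python
    # Sketch of the 1/f check: fit the low-frequency log-log slope of a power
    # spectrum. The synthetic signal below is built to have 1/f power and
    # stands in for the simulated interneuron spike trains; sampling step and
    # band limits are arbitrary assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2 ** 14
    freqs = np.fft.rfftfreq(n, d=1e-3)            # 1 ms bins -> Hz

    amp = np.zeros_like(freqs)
    amp[1:] = 1.0 / np.sqrt(freqs[1:])            # amplitude ~ f^(-1/2) => power ~ 1/f
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    signal = np.fft.irfft(amp * np.exp(1j * phases), n)

    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs > 0.1) & (freqs < 50.0)         # low-frequency band
    slope, _ = np.polyfit(np.log(freqs[band]), np.log(power[band]), 1)
    print(f"fitted spectral exponent ~ {slope:.2f}")   # close to -1 for 1/f power
    ```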

  3. CMT scaling analysis and distortion evaluation in passive integral test facility

    International Nuclear Information System (INIS)

    Deng Chengcheng; Qin Benke; Wang Han; Chang Huajian

    2013-01-01

    The core makeup tank (CMT) is a crucial device of the AP1000 passive core cooling system, and reasonable scaling analysis of the CMT plays a key role in the design of passive integral test facilities. The H2TS method was used to perform scaling analysis for both the circulating mode and the draining mode of the CMT. The similarity criteria for the important CMT processes were then applied in the CMT scaling design of the ACME (advanced core-cooling mechanism experiment) facility now being built in China. Furthermore, the scaling distortion results for the characteristic CMT Π groups of ACME were calculated. Finally, the cause of the scaling distortion was analyzed and a distortion evaluation was conducted for the ACME facility. The dominant processes of the CMT circulating mode can be adequately simulated in the ACME facility, but the steam condensation process during CMT draining is not well preserved because the excessive CMT metal mass leads to more energy being absorbed by cold metal. However, comprehensive analysis indicates that the ACME facility with its high-pressure simulation scheme is able to properly represent the CMT's important phenomena and processes of the prototype nuclear plant. (authors)

  4. Application and further development of ion implantation for very large scale integration. Pt. 2

    International Nuclear Information System (INIS)

    Haberger, K.; Ryssel, H.; Hoffmann, K.

    1982-08-01

    Ion implantation, used as a dopant technology, provides very well-controlled doping but depends on the usual masking techniques. For pattern generation, it would be desirable to utilize the digital controllability of a finely-focused ion beam. In this report, the suitability of a finely-focused ion beam for direct-write implantation has been investigated. For this study, an ion accelerator was equipped with a computer-controlled fine-focusing system. Using this system it was possible to implant Van der Pauw test structures, resistors, and bipolar transistors, which were then electrically measured. The smallest line width was approx. 1 μm. A disadvantage is the long implantation times resulting from present ion sources. Another VLSI-relevant area of application for this finely-focused ion-beam writing system is photoresist exposure, as an alternative to electron-beam lithography, making possible the realization of very small structures without proximity effects and with a significantly higher resist sensitivity. (orig.) [de]

  5. Progress in heavy ion driven inertial fusion energy: From scaled experiments to the integrated research experiment

    International Nuclear Information System (INIS)

    Barnard, J.J.; Ahle, L.E.; Baca, D.; Bangerter, R.O.; Bieniosek, F.M.; Celata, C.M.; Chacon-Golcher, E.; Davidson, R.C.; Faltens, A.; Friedman, A.; Franks, R.M.; Grote, D.P.; Haber, I.; Henestroza, E.; Hoon, M.J.L. de; Kaganovich, I.; Karpenko, V.P.; Kishek, R.A.; Kwan, J.W.; Lee, E.P.; Logan, B.G.; Lund, S.M.; Meier, W.R.; Molvik, A.W.; Olson, C.; Prost, L.R.; Qin, H.; Rose, D.; Sabbi, G.-L.; Sangster, T.C.; Seidl, P.A.; Sharp, W.M.; Shuman, D.; Vay, J.-L.; Waldron, W.L.; Welch, D.; Yu, S.S.

    2001-01-01

    The promise of inertial fusion energy driven by heavy ion beams requires the development of accelerators that produce ion currents (~100s of amperes per beam) and ion energies (~1-10 GeV) that have not been achieved simultaneously in any existing accelerator. The high currents imply high generalized perveances, large tune depressions, and high space-charge potentials of the beam center relative to the beam pipe. Many of the scientific issues associated with ion beams of high perveance and large tune depression have been addressed over the last two decades in scaled experiments at Lawrence Berkeley and Lawrence Livermore National Laboratories, the University of Maryland, and elsewhere. The additional requirement of high space-charge potential (or equivalently high line-charge density) gives rise to effects (particularly the role of electrons in beam transport) which must be understood before proceeding to a large-scale accelerator. The first phase of a new series of experiments in the Heavy Ion Fusion Virtual National Laboratory (HIF VNL), the High Current Experiment (HCX), is now being constructed at LBNL. The mission of the HCX will be to transport beams with driver line-charge density so as to investigate the physics of this regime, including constraints on the maximum radial filling factor of the beam through the pipe. This factor is important for determining both the cost and the reliability of a driver-scale accelerator. The HCX will provide data for the design of the next steps in the sequence of experiments leading to an inertial fusion energy power plant. The focus of the program after the HCX will be on the integration of all of the manipulations required for a driver. In the near term following the HCX, an Integrated Beam Experiment (IBX) of the same general scale as the HCX is envisioned. The step which bridges the gap between the IBX and an engineering test facility for fusion has been designated the Integrated Research Experiment (IRE). The IRE (like the IBX) will provide an ...

  6. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2018-03-27

    Algorithmic and architecture-oriented optimizations are essential for achieving performance worthy of anticipated energy-austere exascale systems. In this paper, we present an extreme scale FMM-accelerated boundary integral equation solver for wave scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory, targeting emerging Intel extreme performance HPC architectures. We extract the potential thread- and data-level parallelism of the key Helmholtz kernels of FMM. Our application code is well optimized to exploit the AVX-512 SIMD units of Intel Skylake and Knights Landing architectures. We provide different performance models for tuning the task-based tree traversal implementation of FMM, and develop optimal architecture-specific and algorithm aware partitioning, load balancing, and communication reducing mechanisms to scale up to 6,144 compute nodes of a Cray XC40 with 196,608 hardware cores. With shared memory optimizations, we achieve roughly 77% of peak single precision floating point performance of a 56-core Skylake processor, and on average 60% of peak single precision floating point performance of a 72-core KNL. These numbers represent nearly 5.4x and 10x speedup on Skylake and KNL, respectively, compared to the baseline scalar code. With distributed memory optimizations, on the other hand, we report near-optimal efficiency in the weak scalability study with respect to both the logarithmic communication complexity as well as the theoretical scaling complexity of FMM. In addition, we exhibit up to 85% efficiency in strong scaling. We compute in excess of 2 billion DoF on the full-scale of the Cray XC40 supercomputer.
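
    Structurally, the solver wraps the FMM inside a Krylov iteration: GMRES only ever needs a matrix-vector product, which the FMM supplies in O(N log N). The sketch below reproduces that structure with SciPy's matrix-free LinearOperator and a dense toy kernel in place of the Helmholtz FMM; the sizes and the kernel itself are assumptions.

    ```python
    # Structural sketch: GMRES needs only a matvec, which the paper supplies
    # via a Helmholtz FMM. Here a dense toy kernel stands in for the FMM;
    # sizes and the kernel are assumptions, not the paper's operator.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    n = 500
    rng = np.random.default_rng(1)
    points = rng.uniform(size=(n, 3))

    # toy positive-definite kernel matrix (exponential kernel + shifted diagonal)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = np.exp(-d) + 50.0 * np.eye(n)

    def matvec(x):
        # In the real solver this dense product is replaced by an FMM pass.
        return A @ x

    op = LinearOperator((n, n), matvec=matvec)
    b = rng.normal(size=n)
    x, info = gmres(op, b)                     # info == 0 on convergence
    print(info, np.linalg.norm(A @ x - b))
    ```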

  7. Do large-scale assessments measure students' ability to integrate scientific knowledge?

    Science.gov (United States)

    Lee, Hee-Sun

    2010-03-01

    Large-scale assessments are used as means to diagnose the current status of student achievement in science and compare students across schools, states, and countries. For efficiency, multiple-choice items and dichotomously-scored open-ended items are pervasively used in large-scale assessments such as Trends in International Math and Science Study (TIMSS). This study investigated how well these items measure secondary school students' ability to integrate scientific knowledge. This study collected responses of 8400 students to 116 multiple-choice and 84 open-ended items and applied an Item Response Theory analysis based on the Rasch Partial Credit Model. Results indicate that most multiple-choice items and dichotomously-scored open-ended items can be used to determine whether students have normative ideas about science topics, but cannot measure whether students integrate multiple pieces of relevant science ideas. Only when the scoring rubric is redesigned to capture subtle nuances of student open-ended responses, open-ended items become a valid and reliable tool to assess students' knowledge integration ability.
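
    Under the Rasch Partial Credit Model used in this analysis, the probability of reaching score category k on an item depends on the student's ability theta and the item's step difficulties. The sketch below evaluates those category probabilities; the step values are illustrative assumptions, not estimates from the TIMSS data.

    ```python
    # Sketch of the Rasch Partial Credit Model: the probability of score
    # category k is proportional to exp of the cumulative sum of
    # (theta - step_j) up to k. Step difficulties below are assumptions.
    import math

    def pcm_probabilities(theta, steps):
        """P(score = k) for k = 0..len(steps), given step difficulties."""
        exponents = [0.0]                     # empty sum for category 0
        for step in steps:
            exponents.append(exponents[-1] + (theta - step))
        expo = [math.exp(e) for e in exponents]
        z = sum(expo)
        return [e / z for e in expo]

    # A 2-step open-ended item: partial credit is easier than full credit
    steps = [-0.5, 1.2]
    for theta in (-1.0, 0.0, 2.0):
        probs = pcm_probabilities(theta, steps)
        print(theta, [round(p, 2) for p in probs])
    ```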

  8. Two-scale large deviations for chemical reaction kinetics through second quantization path integral

    International Nuclear Information System (INIS)

    Li, Tiejun; Lin, Feng

    2016-01-01

    Motivated by the study of rare events for a typical genetic switching model in systems biology, in this paper we aim to establish the general two-scale large deviations for chemical reaction systems. We build a formal approach to explicitly obtain the large deviation rate functionals for the considered two-scale processes based upon the second quantization path integral technique. We get three important types of large deviation results when the underlying two timescales are in three different regimes. This is realized by singular perturbation analysis to the rate functionals obtained by the path integral. We find that the three regimes possess the same deterministic mean-field limit but completely different chemical Langevin approximations. The obtained results are natural extensions of the classical large volume limit for chemical reactions. We also discuss its implication on the single-molecule Michaelis–Menten kinetics. Our framework and results can be applied to understand general multi-scale systems including diffusion processes. (paper)

  9. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana

    Directory of Open Access Journals (Sweden)

    Niladri Basu

    2015-09-01

    Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world, including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe the specific activities undertaken and how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow gold mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  10. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana.

    Science.gov (United States)

    Basu, Niladri; Renne, Elisha P; Long, Rachel N

    2015-09-17

    Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matter in public health, and in particular, environmental and occupational health is quite limited despite their many benefits. The aim of the current paper was to describe specific activities undertaken and how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow for gold-mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  11. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Most multi-scale segmentation algorithms are not aimed at high-resolution remote sensing images and have difficulty communicating and using information across layers. In view of this, we propose a method for multi-scale segmentation of high-resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and a band-weighted distance function is built to obtain the edge weight. According to the merging criterion, initial segmentation objects of the color image are obtained by Kruskal's minimum spanning tree algorithm. Finally, segmented images are obtained by an adaptive Mumford-Shah region-merging rule combined with spectral and texture information. The proposed method is evaluated on simulated images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed method outperforms the fractal net evolution approach (FNEA) of the eCognition software in accuracy, while being slightly inferior to FNEA in efficiency.
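
    The first stage of the described pipeline, Canny edge extraction feeding a band-weighted edge cost, can be prototyped in a few lines with OpenCV. The synthetic image, the thresholds, and the edge-weight formula below are assumptions used only to illustrate the idea of inflating merge costs across detected edges.

    ```python
    # Sketch of the pipeline's first stage: Canny edges feeding a band-weighted
    # merge cost. The synthetic image, thresholds and the weighting formula are
    # assumptions used only to illustrate inflating costs across detected edges.
    import numpy as np
    import cv2

    img = np.zeros((100, 100, 3), np.uint8)       # stand-in for real imagery
    img[30:70, 30:70] = (90, 160, 200)            # a bright square "object"
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)              # binary edge map

    def merge_cost(p, q, alpha=5.0):
        """Cost of merging neighbouring pixels p and q: spectral distance,
        inflated when the pair straddles a detected edge."""
        spectral = np.linalg.norm(img[p].astype(float) - img[q].astype(float))
        on_edge = edges[p] > 0 or edges[q] > 0
        return spectral * (alpha if on_edge else 1.0)

    print(merge_cost((29, 50), (30, 50)))         # crossing the square's border
    ```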

  12. An innovative large scale integration of silicon nanowire-based field effect transistors

    Science.gov (United States)

    Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.

    2018-05-01

    Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors that offer label-free, portable and rapid detection. Nevertheless, their large scale production remains an ongoing challenge due to time consuming, complex and costly technology. In order to bypass these issues, we report here the first integration of silicon nanowire networks, called nanonets, into long channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs, constituted by randomly oriented silicon nanowires, are also studied. Compatible integration on the back-end of CMOS readout and promising electrical performance open new opportunities for sensing applications.

  13. Scaling Analysis Techniques to Establish Experimental Infrastructure for Component, Subsystem, and Integrated System Testing

    Energy Technology Data Exchange (ETDEWEB)

    Sabharwall, Piyush [Idaho National Laboratory (INL), Idaho Falls, ID (United States); O' Brien, James E. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); McKellar, Michael G. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Housley, Gregory K. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Bragg-Sitton, Shannon M. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2015-03-01

    Hybrid energy system research has the potential to expand the application for nuclear reactor technology beyond electricity. The purpose of this research is to reduce both technical and economic risks associated with energy systems of the future. Nuclear hybrid energy systems (NHES) mitigate the variability of renewable energy sources, provide opportunities to produce revenue from different product streams, and avoid capital inefficiencies by matching electrical output to demand by using excess generation capacity for other purposes when it is available. An essential step in the commercialization and deployment of this advanced technology is scaled testing to demonstrate integrated dynamic performance of advanced systems and components when risks cannot be mitigated adequately by analysis or simulation. Further testing in a prototypical environment is needed for validation and higher confidence. This research supports the development of advanced nuclear reactor technology and NHES, and their adaptation to commercial industrial applications that will potentially advance U.S. energy security, economy, and reliability and further reduce carbon emissions. Experimental infrastructure development for testing and feasibility studies of coupled systems can similarly support other projects having similar developmental needs and can generate data required for validation of models in thermal energy storage and transport, energy, and conversion process development. Experiments performed in the Systems Integration Laboratory will acquire performance data, identify scalability issues, and quantify technology gaps and needs for various hybrid or other energy systems. This report discusses detailed scaling (component and integrated system) and heat transfer figures of merit that will establish the experimental infrastructure for component, subsystem, and integrated system testing to advance the technology readiness of components and systems to the level required for commercial

  14. 300 Area Integrated Field-Scale Subsurface Research Challenge (IFRC) Field Site Management Plan

    Energy Technology Data Exchange (ETDEWEB)

    Freshley, Mark D.

    2008-12-31

    Pacific Northwest National Laboratory (PNNL) has established the 300 Area Integrated Field-Scale Subsurface Research Challenge (300 Area IFRC) on the Hanford Site in southeastern Washington State for the U.S. Department of Energy’s (DOE) Office of Biological and Environmental Research (BER) within the Office of Science. The project is funded by the Environmental Remediation Sciences Division (ERSD). The purpose of the project is to conduct research at the 300 IFRC to investigate multi-scale mass transfer processes associated with a subsurface uranium plume impacting both the vadose zone and groundwater. The management approach for the 300 Area IFRC requires that a Field Site Management Plan be developed. This is an update of the plan to reflect the installation of the well network and other changes.

  15. Graduate Curriculum for Biological Information Specialists: A Key to Integration of Scale in Biology

    Directory of Open Access Journals (Sweden)

    Carole L. Palmer

    2007-12-01

    Full Text Available Scientific data problems do not stand in isolation. They are part of a larger set of challenges associated with the escalation of scientific information and changes in scholarly communication in the digital environment. Biologists in particular are generating enormous sets of data at a high rate, and new discoveries in the biological sciences will increasingly depend on the integration of data across multiple scales. This work will require new kinds of information expertise in key areas. To build this professional capacity we have developed two complementary educational programs: a Biological Information Specialist (BIS) master's degree and a concentration in Data Curation (DC). We believe that BISs will be central in the development of cyberinfrastructure and information services needed to facilitate interdisciplinary and multi-scale science. Here we present three sample cases from our current research projects to illustrate areas in which we expect information specialists to make important contributions to biological research practice.

  16. On the mass-coupling relation of multi-scale quantum integrable models

    Energy Technology Data Exchange (ETDEWEB)

    Bajnok, Zoltán; Balog, János [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary); Ito, Katsushi [Department of Physics, Tokyo Institute of Technology,2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 (Japan); Satoh, Yuji [Institute of Physics, University of Tsukuba,1-1-1 Tennodai, Tsukuba, Ibaraki 305-8571 (Japan); Tóth, Gábor Zsolt [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary)

    2016-06-13

    We determine exactly the mass-coupling relation for the simplest multi-scale quantum integrable model, the homogeneous sine-Gordon model with two independent mass-scales. We first reformulate its perturbed coset CFT description in terms of the perturbation of a projected product of minimal models. This representation enables us to identify conserved tensor currents on the UV side. These UV operators are then mapped via form factor perturbation theory to operators on the IR side, which are characterized by their form factors. The relation between the UV and IR operators is given in terms of the sought-for mass-coupling relation. By generalizing the Θ sum rule Ward identity we are able to derive differential equations for the mass-coupling relation, which we solve in terms of hypergeometric functions. We check these results against the data obtained by numerically solving the thermodynamic Bethe Ansatz equations, and find complete agreement.

  17. Dynamic model of frequency control in Danish power system with large scale integration of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2013-01-01

    This work evaluates the impact of large scale integration of wind power in future power systems when 50% of load demand can be met from wind power. The focus is on active power balance control, where the main source of power imbalance is an inaccurate wind speed forecast. In this study, a Danish...... power system model with large scale of wind power is developed and a case study for an inaccurate wind power forecast is investigated. The goal of this work is to develop an adequate power system model that depicts relevant dynamic features of the power plants and compensates for load generation...... imbalances, caused by inaccurate wind speed forecast, by an appropriate control of the active power production from power plants....

  18. Hierarchical hybrid control of manipulators: Artificial intelligence in large scale integrated circuits

    Science.gov (United States)

    Greene, P. H.

    1972-01-01

    Both in practical engineering and in the control of muscular systems, low level subsystems automatically provide crude approximations to the proper response. Through low level tuning of these approximations, the proper response variant can emerge from standardized high level commands. Such systems are expressly suited to emerging large scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape responses of low level digital or analog microcircuits. A mathematical theory that reveals significant informational units in this style of control, together with software for realizing such information structures, is formulated.

  19. A unified double-loop multi-scale control strategy for NMP integrating-unstable systems

    International Nuclear Information System (INIS)

    Seer, Qiu Han; Nandong, Jobrun

    2016-01-01

    This paper presents a new control strategy which unifies the direct and indirect multi-scale control schemes via a double-loop control structure. This unified control strategy is proposed for controlling a class of highly nonminimum-phase processes having both integrating and unstable modes. This type of system is often encountered in fed-batch fermentation processes, which are very difficult to stabilize via most of the existing well-established control strategies. A systematic design procedure is provided, and its applicability is demonstrated via a numerical example. (paper)

  20. Integration of Genome Scale Metabolic Networks and Gene Regulation of Metabolic Enzymes With Physiologically Based Pharmacokinetics.

    Science.gov (United States)

    Maldonado, Elaina M; Leoncikas, Vytautas; Fisher, Ciarán P; Moore, J Bernadette; Plant, Nick J; Kierzek, Andrzej M

    2017-11-01

    The scope of physiologically based pharmacokinetic (PBPK) modeling can be expanded by assimilation of the mechanistic models of intracellular processes from systems biology field. The genome scale metabolic networks (GSMNs) represent a whole set of metabolic enzymes expressed in human tissues. Dynamic models of the gene regulation of key drug metabolism enzymes are available. Here, we introduce GSMNs and review ongoing work on integration of PBPK, GSMNs, and metabolic gene regulation. We demonstrate example models. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  1. Deciphering the clinical effect of drugs through large-scale data integration

    DEFF Research Database (Denmark)

    Kjærulff, Sonny Kim

    This work demonstrates the power of a strategy that uses clinical data mining in association with chemical biology in order to reduce the search space and aid identification of novel drug actions. The second article described in chapter 3 outlines a high confidence side-effect-drug interaction dataset. We...... demonstrates the importance of using high-confidence drug-side-effect data in deciphering the effect of small molecules in humans. In summary, this thesis presents computational systems chemical biology approaches that can help identify clinical effects of small molecules through large-scale data integration...

  2. Large scale mapping of groundwater resources using a highly integrated set of tools

    DEFF Research Database (Denmark)

    Søndergaard, Verner; Auken, Esben; Christiansen, Anders Vest

    large areas with information from an optimum number of new investigation boreholes, existing boreholes, logs and water samples to get an integrated and detailed description of the groundwater resources and their vulnerability.Development of more time efficient and airborne geophysical data acquisition...... platforms (e.g. SkyTEM) have made large-scale mapping attractive and affordable in the planning and administration of groundwater resources. The handling and optimized use of huge amounts of geophysical data covering large areas has also required a comprehensive database, where data can easily be stored...

  3. Climate warming, marine protected areas and the ocean-scale integrity of coral reef ecosystems.

    Directory of Open Access Journals (Sweden)

    Nicholas A J Graham

    Full Text Available Coral reefs have emerged as one of the ecosystems most vulnerable to climate variation and change. While the contribution of a warming climate to the loss of live coral cover has been well documented across large spatial and temporal scales, the associated effects on fish have not. Here, we respond to recent and repeated calls to assess the importance of local management in conserving coral reefs in the context of global climate change. Such information is important, as coral reef fish assemblages are the most species dense vertebrate communities on earth, contributing critical ecosystem functions and providing crucial ecosystem services to human societies in tropical countries. Our assessment of the impacts of the 1998 mass bleaching event on coral cover, reef structural complexity, and reef associated fishes spans 7 countries, 66 sites and 26 degrees of latitude in the Indian Ocean. Using Bayesian meta-analysis we show that changes in the size structure, diversity and trophic composition of the reef fish community have followed coral declines. Although the ocean scale integrity of these coral reef ecosystems has been lost, it is positive to see the effects are spatially variable at multiple scales, with impacts and vulnerability affected by geography but not management regime. Existing no-take marine protected areas still support high biomass of fish; however, they had no positive effect on the ecosystem response to large-scale disturbance. This suggests a need for future conservation and management efforts to identify and protect regional refugia, which should be integrated into existing management frameworks and combined with policies to improve system-wide resilience to climate variation and change.

  4. Towards integrated modelling of soil organic carbon cycling at landscape scale

    Science.gov (United States)

    Viaud, V.

    2009-04-01

    Soil organic carbon (SOC) is recognized as a key factor in the chemical, biological and physical quality of soil. Numerous models of soil organic matter turnover have been developed since the 1930s, most of them dedicated to plot scale applications. More recently, they have been applied at national scales to establish the inventories of carbon stocks required by the Kyoto protocol. However, only a few studies consider the intermediate landscape scale, where the spatio-temporal pattern of land management practices, its interactions with the physical environment and its impacts on SOC dynamics can be investigated to provide guidelines for sustainable management of soils in agricultural areas. Modelling SOC cycling at this scale requires access to accurate spatially explicit input data on soils (SOC content, bulk density, depth, texture) and land use (land cover, farm practices), and combining both data sources in a relevant integrated landscape representation. The purpose of this paper is to present a first approach to modelling SOC evolution in a small catchment. The impact of the way landscape is represented on SOC stocks in the catchment was more specifically addressed. This study was based on the field map, the soil survey, the crop rotations and land management practices of an actual 10-km² agricultural catchment located in Brittany (France). The RothC model was used to drive soil organic matter dynamics. Landscape representation in the form of a systematic regular grid, where driving properties vary continuously in space, was compared to a representation where the landscape is subdivided into a set of homogeneous geographical units. This preliminary work enabled us to identify future needs for improving integrated soil-landscape modelling in agricultural areas.

  5. Climate warming, marine protected areas and the ocean-scale integrity of coral reef ecosystems.

    Science.gov (United States)

    Graham, Nicholas A J; McClanahan, Tim R; MacNeil, M Aaron; Wilson, Shaun K; Polunin, Nicholas V C; Jennings, Simon; Chabanet, Pascale; Clark, Susan; Spalding, Mark D; Letourneur, Yves; Bigot, Lionel; Galzin, René; Ohman, Marcus C; Garpe, Kajsa C; Edwards, Alasdair J; Sheppard, Charles R C

    2008-08-27

    Coral reefs have emerged as one of the ecosystems most vulnerable to climate variation and change. While the contribution of a warming climate to the loss of live coral cover has been well documented across large spatial and temporal scales, the associated effects on fish have not. Here, we respond to recent and repeated calls to assess the importance of local management in conserving coral reefs in the context of global climate change. Such information is important, as coral reef fish assemblages are the most species dense vertebrate communities on earth, contributing critical ecosystem functions and providing crucial ecosystem services to human societies in tropical countries. Our assessment of the impacts of the 1998 mass bleaching event on coral cover, reef structural complexity, and reef associated fishes spans 7 countries, 66 sites and 26 degrees of latitude in the Indian Ocean. Using Bayesian meta-analysis we show that changes in the size structure, diversity and trophic composition of the reef fish community have followed coral declines. Although the ocean scale integrity of these coral reef ecosystems has been lost, it is positive to see the effects are spatially variable at multiple scales, with impacts and vulnerability affected by geography but not management regime. Existing no-take marine protected areas still support high biomass of fish; however, they had no positive effect on the ecosystem response to large-scale disturbance. This suggests a need for future conservation and management efforts to identify and protect regional refugia, which should be integrated into existing management frameworks and combined with policies to improve system-wide resilience to climate variation and change.

  6. INTEGRATED IMAGING APPROACHES SUPPORTING THE EXCAVATION ACTIVITIES. MULTI-SCALE GEOSPATIAL DOCUMENTATION IN HIERAPOLIS (TK

    Directory of Open Access Journals (Sweden)

    A. Spanò

    2018-05-01

    Full Text Available The paper focuses on exploring the suitability, and the applicability limits, of advanced integrated surveying techniques, mainly image-based approaches compared with and integrated into range-based ones, developed with cutting-edge solutions tested in the field. The investigated techniques integrate both technological devices for 3D data acquisition and the editing and management systems needed to handle metric models and multi-dimensional data in a geospatial perspective, in order to innovate and speed up the extraction of information during archaeological excavation activities. These factors have been tested in the outstanding site of the ancient city of Hierapolis in Phrygia (Turkey), following the 2017 surveying missions, in order to produce high-scale metric deliverables in terms of high-detail Digital Surface Models (DSM), 3D continuous surface models and high-resolution orthoimage products. In particular, the potential of UAV platforms for low altitude acquisitions in an aerial photogrammetric approach, together with terrestrial panoramic acquisitions (Trimble V10 imaging rover), has been investigated, with a comparison against consolidated Terrestrial Laser Scanning (TLS) measurements. One of the main purposes of the paper is to evaluate the results offered by the technologies used independently and in integrated approaches. A section of the study, in fact, is specifically dedicated to experimenting with the union of dense clouds from different sensors: dense clouds derived from UAVs have been integrated with terrestrial LiDAR clouds to evaluate their fusion. Different test cases have been considered, representing typical situations that can be encountered in archaeological sites.

  7. Medium-scale carbon nanotube thin-film integrated circuits on flexible plastic substrates.

    Science.gov (United States)

    Cao, Qing; Kim, Hoon-sik; Pimparkar, Ninad; Kulkarni, Jaydeep P; Wang, Congjun; Shim, Moonsub; Roy, Kaushik; Alam, Muhammad A; Rogers, John A

    2008-07-24

    The ability to form integrated circuits on flexible sheets of plastic enables attributes (for example conformal and flexible formats and lightweight and shock resistant construction) in electronic devices that are difficult or impossible to achieve with technologies that use semiconductor wafers or glass plates as substrates. Organic small-molecule and polymer-based materials represent the most widely explored types of semiconductors for such flexible circuitry. Although these materials and those that use films or nanostructures of inorganics have promise for certain applications, existing demonstrations of them in circuits on plastic indicate modest performance characteristics that might restrict the application possibilities. Here we report implementations of a comparatively high-performance carbon-based semiconductor consisting of sub-monolayer, random networks of single-walled carbon nanotubes to yield small- to medium-scale integrated digital circuits, composed of up to nearly 100 transistors on plastic substrates. Transistors in these integrated circuits have excellent properties: mobilities as high as 80 cm(2) V(-1) s(-1), subthreshold slopes as low as 140 mV dec(-1), operating voltages less than 5 V together with deterministic control over the threshold voltages, on/off ratios as high as 10(5), switching speeds in the kilohertz range even for coarse (approximately 100-microm) device geometries, and good mechanical flexibility, all with levels of uniformity and reproducibility that enable high-yield fabrication of integrated circuits. Theoretical calculations, in contexts ranging from heterogeneous percolative transport through the networks to compact models for the transistors to circuit level simulations, provide quantitative and predictive understanding of these systems. Taken together, these results suggest that sub-monolayer films of single-walled carbon nanotubes are attractive materials for flexible integrated circuits, with many potential areas of

  8. Large scale integration of intermittent renewable energy sources in the Greek power sector

    International Nuclear Information System (INIS)

    Voumvoulakis, Emmanouil; Asimakopoulou, Georgia; Danchev, Svetoslav; Maniatis, George; Tsakanikas, Aggelos

    2012-01-01

    As a member of the European Union, Greece has committed to achieve ambitious targets for the penetration of renewable energy sources (RES) in gross electricity consumption by 2020. Large scale integration of RES requires a suitable mixture of compatible generation units, in order to deal with the intermittency of wind velocity and solar irradiation. The scope of this paper is to examine the impact of large scale integration of intermittent energy sources, required to meet the 2020 RES target, on the generation expansion plan, the fuel mix and the spinning reserve requirements of the Greek electricity system. We perform hourly simulation of the intermittent RES generation to estimate residual load curves on a monthly basis, which are then inputted in a WASP-IV model of the Greek power system. We find that the decarbonisation effort, with the rapid entry of RES and the abolishment of the grandfathering of CO2 allowances, will radically transform the Greek electricity sector over the next 10 years, which has wide-reaching policy implications. - Highlights: ► Greece needs 8.8 to 9.3 GW additional RES installations by 2020. ► RES capacity credit varies between 12.2% and 15.3%, depending on interconnections. ► Without institutional changes, the reserve requirements will be more than double. ► New CCGT installed capacity will probably exceed the cost-efficient level. ► Competitive pressures should be introduced in segments other than day-ahead market.

  9. Pesticide fate at regional scale: Development of an integrated model approach and application

    Science.gov (United States)

    Herbst, M.; Hardelauf, H.; Harms, R.; Vanderborght, J.; Vereecken, H.

    As a result of agricultural practice, many soils and aquifers are contaminated with pesticides. In order to quantify the side-effects of these anthropogenic impacts on groundwater quality at regional scale, a process-based, integrated model approach was developed. The Richards' equation based numerical model TRACE calculates the three-dimensional saturated/unsaturated water flow. For the modeling of regional scale pesticide transport we linked TRACE with the plant module SUCROS and with 3DLEWASTE, a hybrid Lagrangian/Eulerian approach to solve the convection/dispersion equation. We used measurements, standard methods like pedotransfer functions, or parameters from the literature to derive the input for the process model. A first-step application of TRACE/3DLEWASTE to the 20 km² test area ‘Zwischenscholle’ for the period 1983-1993 reveals the behaviour of the pesticide isoproturon. The selected test area is characterised by an intense agricultural use and shallow groundwater, resulting in a high vulnerability of the groundwater to pesticide contamination. The model results stress the importance of the unsaturated zone for the occurrence of pesticides in groundwater. Remarkable isoproturon concentrations in groundwater are predicted for locations with thin layered and permeable soils. For four selected locations we used measured piezometric heads to validate predicted groundwater levels. In general, the model results are consistent and reasonable. Thus the developed integrated model approach is seen as a promising tool for quantifying the impact of agricultural practice on groundwater quality.
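
    For reference, textbook forms of the two governing equations named above (standard notation, not transcribed from the paper): Richards' equation for variably saturated flow, solved by TRACE, and a convection/dispersion equation with first-order degradation of the kind solved by 3DLEWASTE:

        \frac{\partial \theta(h)}{\partial t}
          = \nabla \cdot \bigl[ K(h)\, \nabla (h + z) \bigr],
        \qquad
        \frac{\partial (\theta c)}{\partial t}
          = \nabla \cdot \bigl( \theta \mathbf{D}\, \nabla c \bigr)
          - \nabla \cdot ( \mathbf{q}\, c ) - \mu\, \theta\, c,

    where θ is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, z the elevation head, c the pesticide concentration, D the dispersion tensor, q the Darcy flux, and μ a first-order degradation rate.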

  10. Semantic Representation and Scale-Up of Integrated Air Traffic Management Data

    Science.gov (United States)

    Keller, Richard M.; Ranjan, Shubha; Wei, Mie; Eshow, Michelle

    2016-01-01

    Each day, the global air transportation industry generates a vast amount of heterogeneous data from air carriers, air traffic control providers, and secondary aviation entities handling baggage, ticketing, catering, fuel delivery, and other services. Generally, these data are stored in isolated data systems, separated from each other by significant political, regulatory, economic, and technological divides. These realities aside, integrating aviation data into a single, queryable, big data store could enable insights leading to major efficiency, safety, and cost advantages. In this paper, we describe an implemented system for combining heterogeneous air traffic management data using semantic integration techniques. The system transforms data from its original disparate source formats into a unified semantic representation within an ontology-based triple store. Our initial prototype stores only a small sliver of air traffic data covering one day of operations at a major airport. The paper also describes our analysis of difficulties ahead as we prepare to scale up data storage to accommodate successively larger quantities of data -- eventually covering all US commercial domestic flights over an extended multi-year timeframe. We review several approaches to mitigating scale-up related query performance concerns.

  11. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    Full Text Available The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit and A-, B-, Γ-, and LLR output calculation modules. Measurements of the complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures, together with a proposed parameter set, lead to defining equations and a comparison between those architectures.
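
    The recursive ACSO unit corresponds, at the algorithmic level, to the add-compare-select-plus-offset form of the max* (Jacobian logarithm) operation used in log-MAP recursions; a minimal software sketch (illustrative, not the authors' circuit):

        import math

        def max_star(a, b):
            """Jacobian logarithm, the exact log-domain add of log-MAP:
            max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|),
            i.e. an add-compare-select followed by a small offset."""
            return max(a, b) + math.log1p(math.exp(-abs(a - b)))

        def acso(path_metric_0, branch_metric_0, path_metric_1, branch_metric_1):
            """One trellis-state update: two incoming path metrics, each
            extended by its branch metric, then combined with max*."""
            return max_star(path_metric_0 + branch_metric_0,
                            path_metric_1 + branch_metric_1)

        print(acso(1.2, -0.3, 0.7, 0.5))  # new forward (A) state metric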

  12. New domain for image analysis: VLSI circuits testing, with Romuald, specialized in parallel image processing

    Energy Technology Data Exchange (ETDEWEB)

    Rubat Du Merac, C; Jutier, P; Laurent, J; Courtois, B

    1983-07-01

    This paper describes some aspects of specifying, designing and evaluating a specialized machine, Romuald, for the capture, coding, and processing of video and scanning electron microscope (SEM) pictures. First the authors present the functional organization of the process unit of romuald and its hardware, giving details of its behaviour. Then they study the capture and display unit which, thanks to its flexibility, enables SEM images coding. Finally, they describe an application which is now being developed in their laboratory: testing VLSI circuits with new methods: sem+voltage contrast and image processing. 15 references.

  13. VLSI implementation of flexible architecture for decision tree classification in data mining

    Science.gov (United States)

    Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak

    2017-07-01

    Data mining algorithms have become vital to researchers in science, engineering, medicine, business, search and security domains. In recent years, there has been a tremendous rise in the size of the data being collected and analyzed. Classification is a central difficulty in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for Decision Tree classification in data mining using the C4.5 algorithm.
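
    A brief sketch of the split criterion at the core of C4.5, the gain ratio, on toy categorical data (illustrative code, not the paper's VLSI architecture):

        import math
        from collections import Counter

        def entropy(labels):
            n = len(labels)
            return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

        def gain_ratio(attribute_values, labels):
            """Information gain of a split normalised by its split
            information, as C4.5 uses to rank candidate attributes."""
            n = len(labels)
            groups = {}
            for v, y in zip(attribute_values, labels):
                groups.setdefault(v, []).append(y)
            cond_entropy = sum(len(g) / n * entropy(g) for g in groups.values())
            gain = entropy(labels) - cond_entropy
            split_info = entropy(attribute_values)  # entropy of the partition itself
            return gain / split_info if split_info > 0 else 0.0

        # Toy data: 'outlook' perfectly predicts the class.
        outlook = ['sun', 'sun', 'rain', 'rain', 'sun', 'rain']
        label   = ['yes', 'yes', 'no',  'no',   'yes', 'no']
        print(gain_ratio(outlook, label))  # -> 1.0 (perfect split)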

  14. FILTRES: a 128 channels VLSI mixed front-end readout electronic development for microstrip detectors

    International Nuclear Information System (INIS)

    Anstotz, F.; Hu, Y.; Michel, J.; Sohler, J.L.; Lachartre, D.

    1998-01-01

    We present a VLSI mixed digital-analog readout electronic chain for silicon microstrip detectors. The characteristics of this circuit have been optimized for the high resolution tracker of the CERN CMS experiment. The chip consists of 128 channels at 50 μm pitch. Each channel is composed of a charge amplifier, a CR-RC shaper, an analog memory, an analog processor, and an output FIFO read out serially by a multiplexer. The chip has been processed in the radiation hard technology DMILL. This paper describes the architecture of the circuit and presents test results of the 128 channel full chain chip. (orig.)
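
    As a numerical aside on the CR-RC shaping stage: for equal differentiation and integration time constants τ, the impulse response is the standard semi-Gaussian h(t) = (t/τ)e^(-t/τ), peaking at t = τ (textbook result, not taken from the paper; the 50 ns constant is illustrative):

        import numpy as np

        tau = 50e-9                       # shaping time constant: 50 ns (illustrative)
        t = np.linspace(0, 10 * tau, 1000)

        # CR-RC impulse response for equal time constants (semi-Gaussian).
        h = (t / tau) * np.exp(-t / tau)

        peak = np.argmax(h)
        print(f"peak at t = {t[peak] / 1e-9:.1f} ns, amplitude {h[peak]:.3f}")
        # -> peak near t = tau = 50 ns, amplitude 1/e ~ 0.368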

  15. Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1985-01-01

    The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
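
    A small sketch of the residue idea in software, assuming the Fermat prime F4 = 2^16 + 1 = 65537 (for which 3 is a primitive root): a length-n transform with n dividing 2^16 needs only exact integer arithmetic. This illustrates the number-theoretic DFT modulo a Fermat number, not the paper's VLSI architecture:

        F4 = 2**16 + 1      # Fermat prime 65537; 3 is a primitive root mod F4

        def ntt(x, invert=False):
            """Naive length-n number-theoretic transform modulo F4.

            n must divide 2**16. All arithmetic stays in the integers
            mod F4, so there is no floating-point rounding at all."""
            n = len(x)
            assert (2**16) % n == 0
            w = pow(3, (F4 - 1) // n, F4)       # primitive n-th root of unity
            if invert:
                w = pow(w, F4 - 2, F4)          # inverse root (Fermat's little theorem)
            out = [sum(xj * pow(w, i * j, F4) for j, xj in enumerate(x)) % F4
                   for i in range(n)]
            if invert:
                n_inv = pow(n, F4 - 2, F4)
                out = [(v * n_inv) % F4 for v in out]
            return out

        x = [1, 2, 3, 4, 0, 0, 0, 0]
        assert ntt(ntt(x), invert=True) == x    # exact round trip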

  16. Prediction of irradiation damage effects by multi-scale modelling: EURATOM 3 Framework integrated project perfect

    International Nuclear Information System (INIS)

    Massoud, J.P.; Bugat, St.; Marini, B.; Lidbury, D.; Van Dyck, St.; Debarberis, L.

    2008-01-01

    Full text of publication follows. In nuclear PWRs, materials undergo degradation due to severe irradiation conditions that may limit their operational life. Utilities operating these reactors must quantify the aging and the potential degradation of reactor pressure vessels and also of internal structures to ensure safe and reliable plant operation. The EURATOM 6th Framework Integrated Project PERFECT (Prediction of Irradiation Damage Effects in Reactor Components) addresses irradiation damage in RPV materials and components by multi-scale modelling. This state-of-the-art approach offers potential advantages over the conventional empirical methods used in current practice of nuclear plant lifetime management. Launched in January 2004, this 48-month project focuses on two main components of nuclear power plants which are subject to irradiation damage: the ferritic steel reactor pressure vessel and the austenitic steel internals. The project is also an opportunity to integrate the fragmented research and experience that currently exists within Europe in the field of numerical simulation of radiation damage, and it creates links with international organisations involved in similar projects throughout the world. Continuous progress in the physical understanding of the phenomena involved in irradiation damage and continuous progress in computer science make possible the development of multi-scale numerical tools able to simulate the effects of irradiation on materials microstructure. The consequences of irradiation for the mechanical and corrosion properties of materials are also tentatively modelled using such multi-scale modelling. This requires developing different mechanistic models at different levels of physics and engineering and extending the state of knowledge in several scientific fields. The links between these different kinds of models are particularly delicate to deal with and need specific work. Practically, the main objective of PERFECT is to build

  17. Hydrologic connectivity and the contribution of stream headwaters to ecological integrity at regional scales

    Science.gov (United States)

    Freeman, Mary C.; Pringle, C.M.; Jackson, C.R.

    2007-01-01

    Cumulatively, headwater streams contribute to maintaining hydrologic connectivity and ecosystem integrity at regional scales. Hydrologic connectivity is the water-mediated transport of matter, energy and organisms within or between elements of the hydrologic cycle. Headwater streams compose over two-thirds of total stream length in a typical river drainage and directly connect the upland and riparian landscape to the rest of the stream ecosystem. Altering headwater streams, e.g., by channelization, diversion through pipes, impoundment and burial, modifies fluxes between uplands and downstream river segments and eliminates distinctive habitats. The large-scale ecological effects of altering headwaters are amplified by land uses that alter runoff and nutrient loads to streams, and by widespread dam construction on larger rivers (which frequently leaves free-flowing upstream portions of river systems essential to sustaining aquatic biodiversity). We discuss three examples of large-scale consequences of cumulative headwater alteration. Downstream eutrophication and coastal hypoxia result, in part, from agricultural practices that alter headwaters and wetlands while increasing nutrient runoff. Extensive headwater alteration is also expected to lower secondary productivity of river systems by reducing stream-system length and trophic subsidies to downstream river segments, affecting aquatic communities and terrestrial wildlife that utilize aquatic resources. Reduced viability of freshwater biota may occur with cumulative headwater alteration, including for species that occupy a range of stream sizes but for which headwater streams diversify the network of interconnected populations or enhance survival for particular life stages. Developing a more predictive understanding of ecological patterns that may emerge on regional scales as a result of headwater alterations will require studies focused on components and pathways that connect headwaters to river, coastal and

  18. Energy System Analysis of Large-Scale Integration of Wind Power

    International Nuclear Information System (INIS)

    Lund, Henrik

    2003-11-01

    The paper presents the results of two research projects conducted by Aalborg University and financed by the Danish Energy Research Programme. Both projects include the development of models and system analysis with focus on large-scale integration of wind power into different energy systems. Market reactions and the ability to exploit exchange on the international market for electricity by locating exports in hours of high prices are included in the analyses. This paper focuses on results which are valid for energy systems in general. The paper presents the ability of different energy systems and regulation strategies to integrate wind power. The ability is expressed by three factors: one factor is the degree of electricity excess production caused by fluctuations in wind and CHP heat demands. The second factor is the ability to utilise wind power to reduce CO2 emissions in the system. And the third factor is the ability to benefit from exchange of electricity on the market. Energy systems and regulation strategies are analysed in the range of a wind power input from 0 to 100% of the electricity demand. Based on the Danish energy system, in which 50 per cent of the electricity demand is produced in CHP, a number of future energy systems with CO2 reduction potentials are analysed, i.e. systems with more CHP, systems using electricity for transportation (battery or hydrogen vehicles) and systems with fuel-cell technologies. For the present and such potential future energy systems different regulation strategies have been analysed, i.e. the inclusion of small CHP plants into the regulation task of electricity balancing and grid stability and investments in electric heating, heat pumps and heat storage capacity. Also the potential of energy management has been analysed. The results of the analyses make it possible to compare short-term and long-term potentials of different strategies of large-scale integration of wind power

  19. An integrated model to simulate sown area changes for major crops at a global scale

    Institute of Scientific and Technical Information of China (English)

    SHIBASAKI; Ryosuke

    2008-01-01

    Dynamics of land use systems have attracted much attention from scientists around the world due to their ecological and socio-economic implications. An integrated model to dynamically simulate future changes in the sown areas of four major crops (rice, maize, wheat and soybean) on a global scale is presented. To do so, a crop choice model was developed on the basis of a Multinomial Logit model to capture land users' decisions on crop choices among a set of available alternatives, using a crop utility function. A GIS-based Environmental Policy Integrated Climate (EPIC) model was adopted to simulate crop yields under given geophysical environment and farming management conditions, while the International Food Policy and Agricultural Simulation (IFPSIM) model was utilized to estimate crop prices in the international market. The crop choice model was linked with the GIS-based EPIC model and the IFPSIM model through data exchange. This integrated model was then validated against FAO statistical data for 2001-2003 and the Moderate Resolution Imaging Spectroradiometer (MODIS) global land cover product for 2001. Both validation approaches indicated the reliability of the model for addressing the dynamics in agricultural land use and its capability for long-term scenario analysis. Finally, the model application was designed to run over a time period of 30 years, taking the year 2000 as baseline. The model outcomes can help understand and explain the causes, locations and consequences of land use changes, and provide support for land use planning and policy making.
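
    A minimal sketch of the crop-choice core, assuming an invented linear utility in expected yield and price: the multinomial logit maps per-crop utilities to choice probabilities (coefficients and figures are illustrative, not the paper's):

        import numpy as np

        def choice_probabilities(utilities):
            """Multinomial logit: P(crop k) = exp(U_k) / sum_j exp(U_j)."""
            u = np.asarray(utilities, dtype=float)
            e = np.exp(u - u.max())          # subtract max for numerical stability
            return e / e.sum()

        # Hypothetical utilities, e.g. U = b1*expected_yield + b2*expected_price.
        crops = ['rice', 'maize', 'wheat', 'soybean']
        yield_t_ha  = np.array([6.0, 8.5, 4.0, 2.8])    # illustrative yields
        price_usd_t = np.array([380, 180, 220, 420])    # illustrative prices
        U = 0.3 * yield_t_ha + 0.005 * price_usd_t      # invented coefficients

        for crop, p in zip(crops, choice_probabilities(U)):
            print(f"{crop:8s} {p:.2f}")  # shares that could drive sown-area updates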

  20. Integrating statistical and process-based models to produce probabilistic landslide hazard at regional scale

    Science.gov (United States)

    Strauch, R. L.; Istanbulluoglu, E.

    2017-12-01

    We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes (elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index) on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations we relate the susceptibility index to an empirically derived probability of landslide impact. This probability is combined with results from a physically based model to produce an integrated probabilistic map. Slope was key in landslide initiation, while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
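
    A compact sketch of the frequency ratio step on invented data: for each class of a site attribute, FR is the landslide share of the class divided by its areal share, and summing FRs over attributes per grid cell gives the susceptibility index:

        import numpy as np

        def frequency_ratios(attribute_class, landslide_mask):
            """FR per class = (landslide cells in class / all landslide cells)
                              / (cells in class / all cells)."""
            fr = {}
            total = attribute_class.size
            slides = landslide_mask.sum()
            for c in np.unique(attribute_class):
                in_class = attribute_class == c
                fr[int(c)] = (((landslide_mask & in_class).sum() / slides)
                              / (in_class.sum() / total))
            return fr

        # Toy single-attribute example: slope classes 0 (gentle), 1 (steep).
        slope_class = np.array([0, 0, 0, 0, 1, 1, 1, 1])
        landslides  = np.array([0, 0, 0, 1, 1, 1, 1, 0], dtype=bool)
        fr = frequency_ratios(slope_class, landslides)
        susceptibility = np.vectorize(lambda v: fr[int(v)])(slope_class)
        print(fr)              # {0: 0.5, 1: 1.5}: steep cells over-represented
        print(susceptibility)  # per-cell index (summed over SAs in general)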

  1. A 3D bioprinting system to produce human-scale tissue constructs with structural integrity.

    Science.gov (United States)

    Kang, Hyun-Wook; Lee, Sang Jin; Ko, In Kap; Kengla, Carlos; Yoo, James J; Atala, Anthony

    2016-03-01

    A challenge for tissue engineering is producing three-dimensional (3D), vascularized cellular constructs of clinically relevant size, shape and structural integrity. We present an integrated tissue-organ printer (ITOP) that can fabricate stable, human-scale tissue constructs of any shape. Mechanical stability is achieved by printing cell-laden hydrogels together with biodegradable polymers in integrated patterns and anchored on sacrificial hydrogels. The correct shape of the tissue construct is achieved by representing clinical imaging data as a computer model of the anatomical defect and translating the model into a program that controls the motions of the printer nozzles, which dispense cells to discrete locations. The incorporation of microchannels into the tissue constructs facilitates diffusion of nutrients to printed cells, thereby overcoming the diffusion limit of 100-200 μm for cell survival in engineered tissues. We demonstrate capabilities of the ITOP by fabricating mandible and calvarial bone, cartilage and skeletal muscle. Future development of the ITOP is being directed to the production of tissues for human applications and to the building of more complex tissues and solid organs.

  2. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    Science.gov (United States)

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features found in mammalian vision, which would demand huge computational resources and therefore are not usually available in real-time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.
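
    A short sketch of the image-moment primitive named above: first-order raw moments give a frame's intensity centroid, whose shift between frames is a crude motion cue (generic definition, not the paper's orthogonal variant moments):

        import numpy as np

        def centroid(img):
            """First-order raw moments give the intensity centroid
            (m10/m00, m01/m00); its shift between frames signals motion."""
            img = img.astype(float)
            m00 = img.sum()
            ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
            return (xs * img).sum() / m00, (ys * img).sum() / m00

        frame_a = np.zeros((32, 32))
        frame_a[10:14, 10:14] = 1.0
        frame_b = np.roll(frame_a, 3, axis=1)      # the blob moves 3 px right
        (xa, ya), (xb, yb) = centroid(frame_a), centroid(frame_b)
        print(xb - xa, yb - ya)                    # -> (3.0, 0.0) displacement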

  3. Integration and segregation of large-scale brain networks during short-term task automatization.

    Science.gov (United States)

    Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes

    2016-11-03

    The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes.

  4. Improving the integration of recreation management with management of other natural resources by applying concepts of scale from ecology

    Science.gov (United States)

    Wayde c. Morse; Troy E. Hall; Linda E. Kruger

    2008-01-01

    In this article, we examine how issues of scale affect the integration of recreation management with the management of other natural resources on public lands. We present two theories used to address scale issues in ecology and explore how they can improve the two most widely applied recreation-planning frameworks. The theory of patch dynamics and hierarchy theory are...

  5. Explicit Bounds to Some New Gronwall-Bellman-Type Delay Integral Inequalities in Two Independent Variables on Time Scales

    Directory of Open Access Journals (Sweden)

    Fanwei Meng

    2011-01-01

    Full Text Available Some new Gronwall-Bellman-type delay integral inequalities in two independent variables on time scales are established, which provide a handy tool in the research of qualitative and quantitative properties of solutions of delay dynamic equations on time scales. The established inequalities generalize some of the results in the work of Zhang and Meng 2008, Pachpatte 2002, and Ma 2010.
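
    For orientation, the classical continuous-time special case that inequalities of this type generalize (time scale T = R, one variable, no delay) is the Gronwall-Bellman inequality: for continuous u, nonnegative continuous b and a constant a ≥ 0,

        u(t) \le a + \int_{t_0}^{t} b(s)\, u(s)\, \mathrm{d}s
        \quad \Longrightarrow \quad
        u(t) \le a \exp\!\Bigl( \int_{t_0}^{t} b(s)\, \mathrm{d}s \Bigr).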

  6. Rapid Assemblers for Voxel-Based VLSI Robotics

    Science.gov (United States)

    2014-02-12

    flux vector given by the Nernst-Planck equation, where the partial derivative of the concentration of ions with respect to time plus the... species i given by the Nernst-Einstein equation. The boundary conditions are that the diffusive and convective contributions to the flux are zero at... dependent partial differential equations. SIAM Journal of Numerical Analysis, 32(3):797-823, 1995. Task 2: cm-scale voxels for prototypes
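
    For reference, standard textbook forms of the two relations this garbled excerpt names (not recovered from the report itself): the Nernst-Planck flux and continuity equations, and the Einstein relation between diffusion coefficient and ionic mobility,

        \mathbf{J}_i = -D_i \nabla c_i
          - \frac{z_i F}{R T}\, D_i\, c_i\, \nabla \phi + c_i \mathbf{v},
        \qquad
        \frac{\partial c_i}{\partial t} = -\nabla \cdot \mathbf{J}_i,
        \qquad
        D_i = \frac{\mu_i k_B T}{|z_i| e},

    where c_i is the concentration of ionic species i, z_i its charge number, φ the electric potential, v the fluid velocity, and μ_i the electrical mobility.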

  7. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    Science.gov (United States)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has improved substantially over the last several decades. Complex urban dynamics processes are most effectively modeled not at the macro-scale, but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic with environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential dynamics are autonomously implemented by household agents. These two levels of simulation interact and jointly drive the urbanization process in an urban area of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank, and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can give full play to the potential of the ABM to understand the driving
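
    A skeletal two-level sketch of this kind of model, with an invented household utility and a static random land-value grid standing in for the ANN-predicted environment (all names and coefficients are hypothetical):

        import random

        GRID = 20  # toy city: GRID x GRID land-use cells

        def location_utility(cell_value, income_rank, x, y):
            """Invented stand-in for household preferences: richer households
            weight land value more; everyone prefers being near the centre."""
            dist_centre = abs(x - GRID // 2) + abs(y - GRID // 2)
            return income_rank * cell_value - 0.1 * dist_centre

        class Household:
            def __init__(self, income_rank):
                self.income_rank = income_rank   # 1 = low, 2 = mid, 3 = high
                self.x, self.y = random.randrange(GRID), random.randrange(GRID)

            def relocate(self, land_value):
                # Micro level: sample candidate parcels, move to the best one.
                candidates = [(random.randrange(GRID), random.randrange(GRID))
                              for _ in range(10)]
                best = max(candidates, key=lambda p: location_utility(
                    land_value[p[0]][p[1]], self.income_rank, *p))
                self.x, self.y = best

        # Macro level would update land_value each step (the paper uses an
        # ANN); here it is a static random surface.
        land_value = [[random.random() for _ in range(GRID)] for _ in range(GRID)]
        agents = [Household(random.choice([1, 2, 3])) for _ in range(100)]
        for step in range(5):
            for a in agents:
                a.relocate(land_value)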

  8. European wind integration study (EWIS). Towards a successful integration of large scale wind power into European electricity grids. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Winter, W.

    2010-03-15

    Large capacities of wind generators have already been installed and are operating in Germany (26GW) and Spain (16GW). Installations which are as significant in terms of proportion to system size are also established in Denmark (3.3GW), the All Island Power System of Ireland and Northern Ireland (1.5GW), and Portugal (3.4GW). Many other countries expect significant growth in wind generation such that the total currently installed capacity in Europe of 68GW is expected to at least double by 2015. Yet further increases can be expected in order to achieve Europe's 2020 targets for renewable energy. The scale of this development poses big challenges for wind generation developers in terms of obtaining suitable sites, delivering large construction projects, and financing the associated investments from their operations. Such developments also impact the networks, and it was to address the immediate transmission related challenges that the European Wind Integration Study (EWIS) was initiated by Transmission System Operators (TSOs) with the objective of ensuring the most effective integration of large scale wind generation into Europe's transmission networks and electricity system. The challenges anticipated and addressed include: 1) How to efficiently accommodate wind generation when markets and transmission access arrangements have evolved for the needs of traditional controllable generation. 2) How to ensure supplies remain secure as wind varies (establishing the required backup/reserves for low wind days and wind forecast errors as well as managing network congestion in windy conditions). 3) How to maintain the quality and reliability of supplies given the new generation characteristics. 4) How to achieve efficient network costs by suitable design and operation of network connections, the deeper infrastructure including offshore connections, and cross-border interconnections. EWIS has focused on the immediate network related challenges by analysing detailed

  9. Handbook of VLSI microlithography principles, technology and applications

    CERN Document Server

    Glendinning, William B

    1991-01-01

    This handbook gives readers a close look at the entire technology of printing very high resolution and high density integrated circuit (IC) patterns into thin resist process transfer coatings, including optical lithography, electron beam, ion beam, and x-ray lithography. The book's main theme is the special printing process needed to achieve volume high density IC chip production, especially in the Dynamic Random Access Memory (DRAM) industry. The book leads off with a comparison of various lithography methods, covering the three major patterning parameters of line/space, resolution, line e

  10. VLSI for High-Speed Digital Signal Processing

    Science.gov (United States)

    1994-09-30

    particular, the design, layout and fabrication of integrated circuits. The primary project for this grant has been the design and implementation of a... targeted at 33.36 dB, and the FRSBC algorithm, targeted at 0.5 bits/pixel, respectively. The filter... as shown in Fig. 6, is used, yielding a total of 16 subbands. The rates, in bits per pixel (bpp), and the peak signal

  11. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Full Text Available Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in the use of computational resources on the selected FPGA development platform. This is because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform are presented in this paper. This is accomplished by proposing a new memory efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory efficient architecture, along with proper input/output interfaces, is implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering based motion detection architecture. The new memory efficient system robustly and automatically detects motion in real-world scenarios (both for static backgrounds and pseudo-stationary backgrounds) in real-time for standard PAL (720 × 576) size color video.

  12. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    International Nuclear Information System (INIS)

    Jian Haifang; Shi Yin

    2009-01-01

    The K-best detector is considered as a promising technique in the MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can approximately reduce fundamental operations by 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, lower the demand on the hardware resource significantly. Simulation results prove that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.
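
    A behavioural sketch of the K-best breadth-first search that such detectors implement, for a real-valued system after QR decomposition (the paper's sort-free partial expansion and pipelined distributed sorter are abstracted away; data are illustrative):

        import numpy as np

        def k_best_detect(R, y, alphabet, K=4):
            """K-best tree search for y = R s + n with upper-triangular R.

            Detection runs from the last antenna (bottom row of R) upward,
            keeping the K partial symbol vectors with the smallest
            accumulated Euclidean metric at every tree level."""
            n = R.shape[0]
            survivors = [(0.0, [])]  # (accumulated_metric, partial_symbols)
            for level in range(n - 1, -1, -1):
                expanded = []
                for metric, partial in survivors:
                    # Interference from symbols already decided below this level.
                    interf = sum(R[level, j] * sym
                                 for j, sym in zip(range(level + 1, n), partial[::-1]))
                    for s in alphabet:
                        d = y[level] - interf - R[level, level] * s
                        expanded.append((metric + d * d, partial + [s]))
                survivors = sorted(expanded, key=lambda e: e[0])[:K]
            return survivors[0][1][::-1]  # best path, ordered s_0 .. s_{n-1}

        # Toy 2x2 real system with a BPSK-like alphabet {-1, +1}.
        R = np.array([[1.0, 0.4], [0.0, 0.9]])
        s_true = np.array([1.0, -1.0])
        y = R @ s_true
        print(k_best_detect(R, y, alphabet=(-1.0, 1.0), K=2))  # -> [1.0, -1.0]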

  13. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    Energy Technology Data Exchange (ETDEWEB)

    Jian Haifang; Shi Yin, E-mail: jhf@semi.ac.c [Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China)

    2009-07-15

    The K-best detector is considered as a promising technique in MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can reduce fundamental operations by approximately 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, significantly lower the demand on hardware resources. Simulation results show that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.

  14. A multi coding technique to reduce transition activity in VLSI circuits

    International Nuclear Information System (INIS)

    Vithyalakshmi, N.; Rajaram, M.

    2014-01-01

    Advances in VLSI technology have enabled the implementation of complex digital circuits in a single chip, reducing system size and power consumption. In deep submicron low power CMOS VLSI design, the main cause of energy dissipation is the charging and discharging of internal node capacitances due to transition activity. Transition activity is one of the major factors that affect dynamic power dissipation. This paper analyzes power reduction at the algorithm and logic circuit levels. At the algorithm level, the key to reducing power dissipation is minimizing transition activity, which is achieved by introducing a data coding technique. A novel multi coding technique is introduced that reduces transition activity on the bus lines by up to 52.3%, which in turn reduces dynamic power dissipation. In addition, 1-bit full adders are introduced in the Hamming distance estimator block, which reduces the device count. This coding method is implemented using Verilog HDL. The overall performance is analyzed using Modelsim and Xilinx tools. In total, a 38.2% power saving is achieved compared to other existing methods. (semiconductor technology)
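
    The quantity being minimised, transition activity, is simply the Hamming distance between consecutive bus words. A minimal Python sketch (the paper's multi coding technique itself is not specified in the abstract) illustrates how it can be counted:

        def transition_activity(words, width=8):
            """Count bus-line transitions (Hamming distance) between consecutive words.

            Every bit that toggles between successive bus words charges or
            discharges a line capacitance and hence dissipates dynamic power.
            """
            total = 0
            for prev, curr in zip(words, words[1:]):
                total += bin((prev ^ curr) & ((1 << width) - 1)).count("1")
            return total

        # Worst-case alternating pattern on an 8-bit bus: 8 + 8 = 16 transitions
        raw = [0b00000000, 0b11111111, 0b00000000]
        print(transition_activity(raw))  # -> 16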

  15. A Streaming PCA VLSI Chip for Neural Data Compression.

    Science.gov (United States)

    Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi

    2017-12-01

    Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction error and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction error and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel-count recorder.
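
    The abstract does not spell out the streaming PCA algorithm used on the chip. As one common streaming formulation, Oja's rule extracts the first principal component from a sample stream; the following Python sketch is an illustrative stand-in, with the learning rate and data purely assumed:

        import numpy as np

        def oja_update(w, x, lr=0.01):
            """Single streaming-PCA step using Oja's rule.

            w converges toward the first principal component of the input stream;
            the explicit renormalisation guards against numerical drift.
            """
            y = np.dot(w, x)                 # projection of the new sample
            w = w + lr * y * (x - y * w)     # Hebbian update with decay term
            return w / np.linalg.norm(w)

        # Feed correlated 2-D samples; w aligns with the dominant eigenvector
        rng = np.random.default_rng(0)
        w = rng.normal(size=2)
        w /= np.linalg.norm(w)
        for _ in range(2000):
            x = rng.normal(size=2) * np.array([3.0, 0.5])  # larger variance on axis 0
            w = oja_update(w, x)
        print(w)  # close to [±1, 0]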

  16. Full-Scale Field Test of a Blade-Integrated Dual-Telescope Wind Lidar

    DEFF Research Database (Denmark)

    Pedersen, Anders Tegtmeier; Sjöholm, Mikael; Angelou, Nikolas

    Simultaneously, data regarding wind speed, rotational speed, and pitch angle recorded by the turbine were logged, as well as data from a nearby met mast. The encouraging results of this first campaign include wind speed measurements at 20 Hz data rate along the rotor plane, acquired during the [...] in the top and bottom of the rotor plane. Conclusion: We present here what we believe are the first successful wind speed measurements from a dual-telescope lidar installed on the blade of an operating wind turbine. The full-scale field test performed in the summer of 2012 clearly demonstrated the possibility of integrating lidar telescopes into turbine blades, as well as the capability of the lidar to measure the required wind speeds and to operate in the challenging environment of a rotating spinner and vibrating blade. The use of two separate telescopes allows a direct measurement of the blade's angle of attack (AOA) [...]

  17. Distributed constraint satisfaction for coordinating and integrating a large-scale, heterogenous enterprise

    CERN Document Server

    Eisenberg, C

    2003-01-01

    Market forces are continuously driving public and private organisations towards higher productivity, shorter process and production times, and fewer labour hours. To cope with these changes, organisations are adopting new organisational models of coordination and cooperation that increase their flexibility, consistency, efficiency, productivity and profit margins. In this thesis an organisational model of coordination and cooperation is examined using a real-life example: the technical integration of a distributed large-scale project of an international physics collaboration. The distributed resource constraint project scheduling problem is modelled and solved with the methods of distributed constraint satisfaction. A distributed local search method, the distributed breakout algorithm (DisBO), is used as the basis for the coordination scheme. The efficiency of the local search method is improved by extending it with an incremental problem solving scheme with variable ordering. The scheme is implemented as cen...

  18. CLOSE RANGE HYPERSPECTRAL IMAGING INTEGRATED WITH TERRESTRIAL LIDAR SCANNING APPLIED TO ROCK CHARACTERISATION AT CENTIMETRE SCALE

    Directory of Open Access Journals (Sweden)

    T. H. Kurz

    2012-07-01

    Full Text Available Compact and lightweight hyperspectral imagers allow the application of close range hyperspectral imaging with a ground based scanning setup for geological fieldwork. Using such a scanning setup, steep cliff sections and quarry walls can be scanned with a more appropriate viewing direction and a higher image resolution than from airborne and spaceborne platforms. Integration of the hyperspectral imagery with terrestrial lidar scanning provides the hyperspectral information in a georeferenced framework and enables measurement at centimetre scale. In this paper, three geological case studies are used to demonstrate the potential of this method for rock characterisation. Two case studies are applied to carbonate quarries where mapping of different limestone and dolomite types was required, as well as measurements of faults and layer thicknesses from inaccessible parts of the quarries. The third case study demonstrates the method using artificial lighting, applied in a subsurface scanning scenario where solar radiation cannot be utilised.

  19. The integration of novel diagnostics techniques for multi-scale monitoring of large civil infrastructures

    Directory of Open Access Journals (Sweden)

    F. Soldovieri

    2008-11-01

    Full Text Available In recent years, structural monitoring of large infrastructures (buildings, dams, bridges or, more generally, man-made structures) has attracted increased attention due to the growing interest in safety and security issues and risk assessment through early detection. In this framework, the aim of the paper is to introduce a new integrated approach which combines two sensing techniques acting on different spatial and temporal scales. The first one is a distributed optic fiber sensor based on the Brillouin scattering phenomenon, which allows spatially and temporally continuous monitoring of the structure with a "low" spatial resolution (meter scale). The second technique is based on the use of Ground Penetrating Radar (GPR), which can provide detailed images of the inner status of the structure (with a spatial resolution of less than tens of centimetres), but does not allow temporally continuous monitoring. The paper describes the features of these two techniques and provides experimental results concerning preliminary test cases.

  20. 'Take the long way down': Integration of large-scale North Sea wind using HVDC transmission

    International Nuclear Information System (INIS)

    Weigt, Hannes; Jeske, Till; Leuthold, Florian; Hirschhausen, Christian von

    2010-01-01

    We analyze the impact of extensive wind development in Germany for the year 2015, focusing on grid extensions and price signals. We apply the electricity generation and network model ELMOD to compare zonal, nodal, and uniform pricing approaches. In addition to a reference case of network extensions recommended by the German Energy Agency (Dena), we develop a scenario to transmit wind energy to major load centers in Western and Southern Germany via high-voltage direct current (HVDC) connections. From an economic-engineering standpoint, our results indicate that these connections are the most economic way to manage the integration of large-scale offshore wind resources, and that nodal pricing is most likely to determine the locales for future investment to eliminate congestion. We conclude with a description of the model's potential limitations.

  1. The large-scale integration of wind generation: Impacts on price, reliability and dispatchable conventional suppliers

    International Nuclear Information System (INIS)

    MacCormack, John; Hollis, Aidan; Zareipour, Hamidreza; Rosehart, William

    2010-01-01

    This work examines the effects of large-scale integration of wind-powered electricity generation in a deregulated energy-only market on loads (in terms of electricity prices and supply reliability) and on dispatchable conventional power suppliers. Hourly models of wind generation time series, load and resultant residual demand are created. From these, a non-chronological residual demand duration curve is developed that is combined with a probabilistic model of dispatchable conventional generator availability, a model of an energy-only market with a price cap, and a model of generator costs and dispatch behavior. A number of simulations are performed to evaluate the effect on electricity prices, overall reliability of supply, the ability of a dominant supplier acting strategically to profitably withhold supplies, and the fixed cost recovery of dispatchable conventional power suppliers at different levels of wind generation penetration. Medium- and long-term responses of the market and/or the regulator are discussed.
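
    The non-chronological residual demand duration curve mentioned above is straightforward to construct: subtract hourly wind output from hourly load and sort the result in descending order. A minimal Python sketch with made-up data follows:

        import numpy as np

        def residual_demand_duration_curve(load, wind):
            """Build a non-chronological residual demand duration curve.

            Hourly residual demand = load minus wind output; sorting the series
            in descending order discards chronology, as described in the abstract.
            """
            residual = np.asarray(load) - np.asarray(wind)
            return np.sort(residual)[::-1]  # highest residual demand first

        # Toy example: three days of hourly data (values in MW, purely illustrative)
        hours = 72
        load = 1000 + 200 * np.sin(np.linspace(0, 6 * np.pi, hours))
        wind = 150 * np.abs(np.random.default_rng(1).standard_normal(hours))
        curve = residual_demand_duration_curve(load, wind)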

  2. Impurity engineering of Czochralski silicon used for ultra large-scaled-integrated circuits

    Science.gov (United States)

    Yang, Deren; Chen, Jiahe; Ma, Xiangyang; Que, Duanlin

    2009-01-01

    Impurities in Czochralski silicon (Cz-Si) used for ultra-large-scale-integrated (ULSI) circuits have long been believed to degrade the performance of devices. In this paper, a review of recent progress from our investigation of internal gettering in Cz-Si wafers doped with nitrogen, germanium and/or a high content of carbon is presented. It has been suggested that those impurities enhance oxygen precipitation and create both denser bulk microdefects and a denuded zone of sufficient width, which benefits the internal gettering of metal contamination. Based on the experimental facts, a potential mechanism of impurity doping on the internal gettering structure is interpreted, and a new concept of 'impurity engineering' for Cz-Si used for ULSI is proposed.

  3. Review of DC System Technologies for Large Scale Integration of Wind Energy Systems with Electricity Grids

    Directory of Open Access Journals (Sweden)

    Sheng Jie Shao

    2010-06-01

    Full Text Available The ever-increasing development and availability of power electronic systems is the underpinning technology that enables large-scale integration of wind generation plants with the electricity grid. As the size and power capacity of wind turbines continue to increase, so does the need to place these significantly large structures at offshore locations. DC grids and associated power transmission technologies provide opportunities for cost reduction and minimization of electricity grid impact, as the bulk power is concentrated at a single point of entry. As a result, planning, optimization and impact can be studied and carefully controlled, minimizing the risk of the investment as well as power system stability issues. This paper discusses the key technologies associated with DC grids for offshore wind farm applications.

  4. Impacts of large-scale offshore wind farm integration on power systems through VSC-HVDC

    DEFF Research Database (Denmark)

    Liu, Hongzhi; Chen, Zhe

    2013-01-01

    The potential of offshore wind energy has been commonly recognized and explored globally. Many countries have implemented and planned offshore wind farms to meet their increasing electricity demands and public environmental appeals, especially in Europe. With relatively less space limitation, an offshore wind farm could have a capacity rating of hundreds of MWs or even GWs, large enough to compete with conventional power plants. Thus the impacts of a large offshore wind farm on power system operation and security should be thoroughly studied and understood. This paper investigates the impacts of integrating a large-scale offshore wind farm into the transmission system of a power grid through VSC-HVDC connection. The concerns are focused on steady-state voltage stability, dynamic voltage stability and transient angle stability. Simulation results based on an exemplary power system [...]

  5. Integrating Systems Health Management with Adaptive Controls for a Utility-Scale Wind Turbine

    Science.gov (United States)

    Frost, Susan A.; Goebel, Kai; Trinh, Khanh V.; Balas, Mark J.; Frost, Alan M.

    2011-01-01

    Increasing turbine up-time and reducing maintenance costs are key technology drivers for wind turbine operators. Components within wind turbines are subject to considerable stresses due to unpredictable environmental conditions resulting from rapidly changing local dynamics. Systems health management aims to assess the state of health of components within a wind turbine, to estimate remaining life, and to aid in autonomous decision-making to minimize damage. Advanced adaptive controls can provide the mechanism for optimized operations, and thereby the enabling technology for systems health management goals. The work reported herein explores the integration of condition monitoring of wind turbine blades with contingency management and adaptive controls. Results are demonstrated using a high-fidelity simulator of a utility-scale wind turbine.

  6. A multi-scale integrated analysis of the energy use in Romania, Bulgaria, Poland and Hungary

    International Nuclear Information System (INIS)

    Iorgulescu, Raluca I.; Polimeni, John M.

    2009-01-01

    This paper discusses energy use in the case of four countries, Bulgaria, Poland, Hungary, and Romania, which changed their economic systems from command economies to open markets. The analysis uses the multi-scale integrated analysis of societal metabolism (MSIASM) approach and contrasts it with the traditional indicators approach (GDP growth rates and energy intensity). These traditional indicators have been widely criticized as inadequate reflections of how energy policies work. Furthermore, the one-size-fits-all policies that result from analyzing these indicators are inaccurate, particularly for transitional economies. The alternative indicators, economic labor productivity, saturation index of human activity, and exosomatic metabolic rates, are used to investigate the four case studies considering the complexity of the transition process.

  7. The Role of a Provider-Sponsored Health Plan in Achieving Scale and Integration.

    Science.gov (United States)

    Johnson, Steven P

    2016-01-01

    In pursuit of two primary strategies, to become an integrated delivery network (IDN) on the local level and to achieve additional overall organizational scale to sustain operations, Health First, based in Rockledge, Florida, relies on the success of its provider-sponsored health plan (PSHP) as a critical asset. For Health First, the PSHP serves as an agent for holding and administering financial risk for the health of populations. In addition, we are learning that our PSHP is a critical asset in support of integrating the components of our care delivery system to manage that financial risk effectively, efficiently, and in a manner that creates a unified experience for the customer. Health First is challenged by continuing pressure on reimbursement, as well as by a substantial regulatory burden, as we work to optimize the environments and tools of care and population health management. Even with strong margins and a healthy balance sheet, we simply do not have the resources needed to bring an IDN robustly to life. However, we have discovered that our PSHP can be the vehicle that carries us to additional scale. Many health systems do not own or otherwise have access to a PSHP to hold and manage financial risk. Health First sought and found a not-for-profit health system with complementary goals and a strong brand to partner with, and we now provide private-label health plan products for that system using its strong name while operating the insurance functions under our license and with our capabilities.

  8. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment scale water management

    DEFF Research Database (Denmark)

    Jacosen, T.; Refsgaard, A.; Jacobsen, Brian H.

    Abstract: The EU WFD requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and a full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach applied [...] in comprehensive, integrated modelling tools.

  9. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment-scale water management

    DEFF Research Database (Denmark)

    Refsgaard, A.; Jacobsen, T.; Jacobsen, Brian H.

    2007-01-01

    The EU Water Framework Directive (WFD) requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and a full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach [...] the potential and limitations of comprehensive, integrated modelling tools.

  10. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum corneum. Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  11. Small-scale hybrid plant integrated with municipal energy supply system

    International Nuclear Information System (INIS)

    Bakken, B.H.; Fossum, M.; Belsnes, M.M.

    2001-01-01

    This paper describes a research program started in 2001 to optimize environmental impact and cost of a small-scale hybrid plant based on candidate resources, transportation technologies and conversion efficiency, including integration with existing energy distribution systems. Special attention is given to a novel hybrid energy concept fuelled by municipal solid waste. The commercial interest for the model is expected to be more pronounced in remote communities and villages, including communities subject to growing prosperity. To enable optimization of complex energy distribution systems with multiple energy sources and carriers, a flexible and robust methodology must be developed. This will enable energy companies and consultants to carry out comprehensive feasibility studies prior to investment, including technological, economic and environmental aspects. Governmental and municipal bodies will be able to pursue scenario studies involving energy systems and their impact on the environment, and measure the consequences of possible regulation regimes on environmental questions. This paper describes the hybrid concept for conversion of municipal solid waste in terms of energy supply, as well as the methodology for optimizing such integrated energy systems. (author)

  12. Full scale test platform for European TBM systems integration and maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Vála, Ladislav, E-mail: ladislav.vala@cvrez.cz; Reungoat, Mathieu; Vician, Martin

    2016-11-01

    Highlights: • A platform for EU-TBS maintenance and integration tests is described. • Its modular design allows adaptation to non-EU TBSs. • Assembly of the facility will be followed by initial tests in 2016. - Abstract: This article describes the current status of a project for a non-nuclear, full-size (1:1 scale) test platform dedicated to tests, optimization and validation of integration and maintenance operations for the European TBM systems in the ITER port cell #16. The facility, called the TBM platform, reproduces the ITER port cell #16 and port interspace with all the relevant interfaces and mock-ups of the corresponding main components. Thanks to the modular design of the platform, it is possible to adapt or completely change the interfaces in the future, if required, according to updated TBS configurations. In the same way, based on customer requirements, it will be possible to adapt the interfaces and piping inside the mock-ups in order to also represent the other, non-EU configurations of TBM systems designed for port cells #02 and #18. Construction of this test platform is realized and funded within the scope of the SUSEN project.

  13. Fabricating a multi-level barrier-integrated microfluidic device using grey-scale photolithography

    International Nuclear Information System (INIS)

    Nam, Yoonkwang; Kim, Minseok; Kim, Taesung

    2013-01-01

    Most polymer-replica-based microfluidic devices are fabricated using standard soft-lithography technology, so multi-level masters (MLMs) require multiple spin-coatings, mask alignments, exposures, developments, and bakings. In this paper, we describe a simple method for fabricating MLMs for planar microfluidic channels with multi-level barriers (MLBs). A single photomask is needed in standard photolithography to create a polydimethylsiloxane grey-scale photomask (PGSP), which adjusts the total amount of UV absorption in a negative-tone photoresist via a wide range of dye concentrations. Since the PGSP in turn adjusts the degree of cross-linking of the photoresist, this method enables the fabrication of MLMs for an MLB-integrated microfluidic device. Since PGSP-based soft-lithography technology provides a simple but powerful fabrication method for MLBs in a microfluidic device, we believe that the fabrication method can be widely used for micro total analysis systems that benefit from MLBs. We demonstrate an MLB-integrated microfluidic device that can separate microparticles. (paper)

  14. Highly Uniform Carbon Nanotube Field-Effect Transistors and Medium Scale Integrated Circuits.

    Science.gov (United States)

    Chen, Bingyan; Zhang, Panpan; Ding, Li; Han, Jie; Qiu, Song; Li, Qingwen; Zhang, Zhiyong; Peng, Lian-Mao

    2016-08-10

    Top-gated p-type field-effect transistors (FETs) have been fabricated in batch based on carbon nanotube (CNT) network thin films prepared from CNT solution, and they exhibit high yield and highly uniform performance with a small threshold voltage distribution (standard deviation of 34 mV). Based on the properties of these FETs, various logical and arithmetical gates, shifters, and d-latch circuits were designed and demonstrated with rail-to-rail output. In particular, a 4-bit adder consisting of 140 p-type CNT FETs was demonstrated with higher packing density and lower supply voltage than other published integrated circuits based on CNT films, which indicates that CNT-based integrated circuits can reach medium scale. In addition, a 2-bit multiplier has been realized for the first time. Benefiting from the high uniformity and suitable threshold voltage of CNT FETs, all of the fabricated circuits can be driven by a single voltage as small as 2 V.
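
    As a purely functional reference for the 4-bit adder demonstrated on CNT FETs (the abstract gives only the 140-FET device count, not the gate topology), a ripple-carry description in Python is:

        def full_adder(a, b, cin):
            """One-bit full adder expressed with the basic logic operations."""
            s = a ^ b ^ cin
            cout = (a & b) | (cin & (a ^ b))
            return s, cout

        def ripple_adder_4bit(a, b):
            """Add two 4-bit numbers bit by bit, propagating the carry."""
            carry, out = 0, 0
            for i in range(4):
                s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
                out |= s << i
            return out | (carry << 4)  # 5-bit result

        assert ripple_adder_4bit(9, 7) == 16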

  15. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks

    Directory of Open Access Journals (Sweden)

    Raja Jurdak

    2008-11-01

    Full Text Available Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  16. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.

    Science.gov (United States)

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-11-24

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  17. Large-scale modeling of condition-specific gene regulatory networks by information integration and inference.

    Science.gov (United States)

    Ellwanger, Daniel Christian; Leonhardt, Jörn Florian; Mewes, Hans-Werner

    2014-12-01

    Understanding how regulatory networks globally coordinate the response of a cell to changing conditions, such as perturbations by shifting environments, is an elementary challenge in systems biology which has yet to be met. Genome-wide gene expression measurements are high dimensional as these are reflecting the condition-specific interplay of thousands of cellular components. The integration of prior biological knowledge into the modeling process of systems-wide gene regulation enables the large-scale interpretation of gene expression signals in the context of known regulatory relations. We developed COGERE (http://mips.helmholtz-muenchen.de/cogere), a method for the inference of condition-specific gene regulatory networks in human and mouse. We integrated existing knowledge of regulatory interactions from multiple sources to a comprehensive model of prior information. COGERE infers condition-specific regulation by evaluating the mutual dependency between regulator (transcription factor or miRNA) and target gene expression using prior information. This dependency is scored by the non-parametric, nonlinear correlation coefficient η² (eta squared) that is derived by a two-way analysis of variance. We show that COGERE significantly outperforms alternative methods in predicting condition-specific gene regulatory networks on simulated data sets. Furthermore, by inferring the cancer-specific gene regulatory network from the NCI-60 expression study, we demonstrate the utility of COGERE to promote hypothesis-driven clinical research. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
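
    The η² score is the classical ANOVA effect-size measure: the effect sum of squares divided by the total sum of squares. A simplified one-way illustration in Python follows (the paper derives η² from a two-way analysis of variance, so this is a reduced sketch):

        import numpy as np

        def eta_squared(groups):
            """One-way eta squared: between-group sum of squares over total.

            groups is a list of 1-D arrays, one per level of the factor.
            """
            all_values = np.concatenate(groups)
            grand_mean = all_values.mean()
            ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
            ss_total = ((all_values - grand_mean) ** 2).sum()
            return ss_between / ss_total

        low = np.array([1.0, 1.2, 0.9])
        high = np.array([2.1, 2.4, 2.0])
        print(eta_squared([low, high]))  # close to 1 when groups separate cleanly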

  18. Engineering integrated digital circuits with allosteric ribozymes for scaling up molecular computation and diagnostics.

    Science.gov (United States)

    Penchovsky, Robert

    2012-10-19

    Here we describe molecular implementations of integrated digital circuits, including a three-input AND logic gate, a two-input multiplexer, and a 1-to-2 decoder using allosteric ribozymes. Furthermore, we demonstrate a multiplexer-decoder circuit. The ribozymes are designed to seek-and-destroy specific RNAs with a certain length by a fully computerized procedure. The algorithm can accurately predict one base substitution that alters the ribozyme's logic function. The ability to sense the length of RNA molecules enables single ribozymes to be used as platforms for multiple interactions. These ribozymes can work as integrated circuits with the functionality of up to five logic gates. The ribozyme design is universal since the allosteric and substrate domains can be altered to sense different RNAs. In addition, the ribozymes can specifically cleave RNA molecules with triplet-repeat expansions observed in genetic disorders such as oculopharyngeal muscular dystrophy. Therefore, the designer ribozymes can be employed for scaling up computing and diagnostic networks in the fields of molecular computing and diagnostics and RNA synthetic biology.
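
    The Boolean functions realized by the ribozyme circuits can be stated compactly. The following Python sketch abstracts away the RNA chemistry entirely and only reproduces the truth-table behaviour of the named gates:

        def and3(a, b, c):
            """Three-input AND: output is 1 only when all three inputs are 1."""
            return a & b & c

        def mux2(d0, d1, sel):
            """Two-input multiplexer: route d1 when sel is 1, else d0."""
            return (d1 & sel) | (d0 & (1 - sel))

        def decoder_1to2(d, sel):
            """1-to-2 decoder: steer input d to output 0 or output 1."""
            return (d & (1 - sel), d & sel)

        # Multiplexer feeding a decoder, mirroring the demonstrated circuit
        out0, out1 = decoder_1to2(mux2(1, 0, sel=0), sel=1)  # -> (0, 1)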

  19. An integrated model to simulate sown area changes for major crops at a global scale

    Institute of Scientific and Technical Information of China (English)

    WU WenBin; YANG Peng; MENG ChaoYing; SHIBASAKI Ryosuke; ZHOU QingBo; TANG HuaJun; SHI Yun

    2008-01-01

    Dynamics of land use systems have attracted much attention from scientists around the world due to their ecological and socio-economic implications. An integrated model to dynamically simulate future changes in sown areas of four major crops (rice, maize, wheat and soybean) on a global scale is presented. To do so, a crop choice model was developed on the basis of the multinomial logit model to represent land users' decisions on crop choices among a set of available alternatives, using a crop utility function. A GIS-based Environmental Policy Integrated Climate (EPIC) model was adopted to simulate the crop yields under a given geophysical environment and farming management conditions, while the International Food Policy and Agricultural Simulation (IFPSIM) model was utilized to estimate crop prices in the international market. The crop choice model was linked with the GIS-based EPIC model and the IFPSIM model through data exchange. This integrated model was then validated against the FAO statistical data in 2001-2003 and the Moderate Resolution Imaging Spectroradiometer (MODIS) global land cover product in 2001. Both validation approaches indicated the reliability of the model for addressing the dynamics in agricultural land use and its capability for long-term scenario analysis. Finally, the model application was designed to run over a time period of 30 years, taking the year 2000 as the baseline. The model outcomes can help understand and explain the causes, locations and consequences of land use changes, and provide support for land use planning and policy making.
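
    The multinomial logit crop choice model assigns each crop a probability proportional to the exponential of its utility. A minimal Python sketch with hypothetical utility values follows:

        import numpy as np

        def crop_choice_probabilities(utilities):
            """Multinomial-logit choice probabilities over crop alternatives.

            P(crop i) = exp(U_i) / sum_j exp(U_j); the utilities are illustrative.
            """
            u = np.asarray(utilities, dtype=float)
            e = np.exp(u - u.max())  # subtract the max for numerical stability
            return e / e.sum()

        # Hypothetical utilities for rice, maize, wheat and soybean
        print(crop_choice_probabilities([1.2, 0.8, 1.0, 0.3]))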

  20. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    Science.gov (United States)

    Steiakakis, Chrysanthos; Agioutantis, Zacharias; Apostolou, Evangelia; Papavgeri, Georgia; Tripolitsiotis, Achilles

    2016-01-01

    The geotechnical challenges for safe slope design in large-scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden-to-ore ratio and therefore dramatically improve the economics of the operation, while large-scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to safely operate such pits. Appropriate data management, processing and storage are critical to ensure timely and informed decisions. This paper presents an integrated data management system developed over a number of years and illustrates its advantages through a specific application. The presented case study shows how the high production slopes of a mine exceeding depths of 100-120 m were successfully mined with an average displacement rate of 10-20 mm/day, approaching a slow to moderate landslide velocity. Monitoring data of the past four years are included in the database and can be analyzed to produce valuable results. Time-series correlations of movements, precipitation records, etc. are evaluated and presented in this case study. The results can be used to successfully manage mine operations and ensure the safety of the mine and the workforce.

  1. Integrated Evaluation of Cost, Emissions, and Resource Potential for Algal Biofuels at the National Scale

    Energy Technology Data Exchange (ETDEWEB)

    Davis, Ryan; Fishman, Daniel; Frank, Edward D.; Johnson, Michael C.; Jones, Susanne B.; Kinchin, Christopher; Skaggs, Richard; Venteris, Erik R.; Wigmosta, Mark S.

    2014-04-21

    Costs, emissions, and resource availability were modeled for the production of 5 billion gallons yr⁻¹ (5 BGY) of renewable diesel in the United States from Chlorella biomass by hydrothermal liquefaction (HTL). The HTL model utilized data from a continuous 1-L reactor including catalytic hydrothermal gasification of the aqueous phase, and catalytic hydrotreatment of the HTL oil. A biophysical algae growth model coupled with weather and pond simulations predicted biomass productivity from experimental growth parameters, allowing site-by-site and temporal prediction of biomass production. The 5 BGY scale required geographically and climatically distributed sites. Even though screening down to 5 BGY significantly reduced spatial and temporal variability, site-to-site, season-to-season, and inter-annual variations in productivity affected economic and environmental performance. Performance metrics based on annual average or peak productivity were inadequate; temporally and spatially explicit computations allowed more rigorous analysis of these dynamic systems. For example, 3-season operation with a winter shutdown was favored to avoid high greenhouse gas emissions, but economic performance was harmed by underutilized equipment during slow-growth periods. Thus, analysis of algal biofuel pathways must combine spatiotemporal resource assessment, economic analysis, and environmental analysis integrated over many sites when assessing national scale performance.

  2. Integrated analysis of the effects of agricultural management on nitrogen fluxes at landscape scale

    Energy Technology Data Exchange (ETDEWEB)

    Kros, J., E-mail: hans.kros@wur.nl [Alterra, Wageningen University and Research Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Frumau, K.F.A.; Hensen, A. [Energy Research Centre of The Netherlands, P.O. Box 1, 1755 ZG Petten (Netherlands); Vries, W. de [Alterra, Wageningen University and Research Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Environmental Systems Analysis Group, Wageningen University, P.O. Box 47, 6700 AA Wageningen (Netherlands)

    2011-11-15

    The integrated modelling system INITIATOR was applied to a landscape in the northern part of the Netherlands to assess current nitrogen fluxes to air and water and the impact of various agricultural measures on these fluxes, using spatially explicit input data on animal numbers, land use, agricultural management, meteorology and soil. Average model results on NH₃ deposition and N concentrations in surface water appear to be comparable to observations, but the deviation can be large at the local scale, despite the use of high-resolution data. Evaluated measures include: air scrubbers reducing NH₃ emissions from poultry and pig housing systems, low-protein feeding, reduced fertilizer amounts and low-emission stables for cattle. Low-protein feeding and restrictive fertilizer application had the largest effect on both N inputs and N losses, resulting in N deposition reductions on Natura 2000 sites of 10% and 12%, respectively. - Highlights: • We model nitrogen fluxes and the impact of agricultural measures in a rural landscape. • Average model results appear to be comparable to observations. • The measures low-protein feeding and restrictive fertilizer application had the largest effect. - Effects of agricultural management on N losses to air and water are evaluated at landscape scale combining a model assessment and measurements.

  3. Transition from depressurization to long term cooling in AP600 scaled integral test facilities

    International Nuclear Information System (INIS)

    Bessette, D.E.; Marzo, M. di

    1999-01-01

    A novel light water reactor design called the AP600 has been proposed by the Westinghouse Electric Corporation. In the evaluation of this plant's behavior during a small break loss of coolant accident (LOCA), the crucial transition to low-pressure, long-term cooling is marked by the injection of the gravitationally driven flow from the in-containment refueling water storage tank (IRWST). The onset of this injection is characterized by intermittency in the IRWST flow. This happens at a time when the reactor vessel reaches its minimum inventory. Therefore, it is important to understand and scale the behavior of the integral experimental test facilities during this portion of the transient. The intermittent behavior is explained by the periodic liquid drains and refills of the pressurizer. The momentum balance for the surge line yields the nondimensional parameter controlling this process. Data from one of the three experimental facilities represent the phenomena well at the prototypical scale. The impact of the intermittent IRWST injection on safe plant operation is assessed and its implications are successfully resolved. The oscillation is found to result from, in effect, excess water in the primary system, and it is not of safety significance. (orig.)

  4. Integrating experimental and simulation length and time scales in mechanistic studies of friction

    International Nuclear Information System (INIS)

    Sawyer, W G; Perry, S S; Phillpot, S R; Sinnott, S B

    2008-01-01

    Friction is ubiquitous in all aspects of everyday life and has consequently been under study for centuries. Classical theories of friction have been developed and used to successfully solve numerous tribological problems. However, modern applications that involve advanced materials operating under extreme environments can lead to situations where classical theories of friction are insufficient to describe the physical responses of sliding interfaces. Here, we review integrated experimental and computational studies of atomic-scale friction and wear at solid-solid interfaces across length and time scales. The influence of structural orientation in the case of carbon nanotube bundles, and molecular orientation in the case of polymer films of polytetrafluoroethylene and polyethylene, on friction and wear are discussed. In addition, while friction in solids is generally considered to be athermal, under certain conditions thermally activated friction is observed for polymers, carbon nanotubes and graphite. The conditions under which these transitions occur, and their proposed origins, are discussed. Lastly, a discussion of future directions is presented

  5. Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-10-01

    Full Text Available In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed based on statistical methods applied to their historical operating data. Then the characteristic indexes and the filtering principle of the NEPG historical output scenarios are introduced with the confidence level, and a calculation model for NEPG's credible capacity is proposed. Based on this, taking the minimum production costs or the best energy-saving and emission-reduction effect as the optimization objective, a power system operation model with large-scale integration of NEPG is established considering the power balance, the electricity balance and the peak balance. Besides, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reserve, the emergency reserve, the water abandonment and the transmitting capacity between different areas are also considered. With the proposed power system operation model, operation simulations are carried out based on the actual Northwest power grid of China, resolving new energy power accommodation under different system operating conditions. The simulation results verify the validity of the proposed power system operation model in the accommodation analysis for a power system penetrated with large-scale NEPG.

  6. Integrated evaluation of cost, emissions, and resource potential for algal biofuels at the national scale.

    Science.gov (United States)

    Davis, Ryan E; Fishman, Daniel B; Frank, Edward D; Johnson, Michael C; Jones, Susanne B; Kinchin, Christopher M; Skaggs, Richard L; Venteris, Erik R; Wigmosta, Mark S

    2014-05-20

    Costs, emissions, and resource availability were modeled for the production of 5 billion gallons yr⁻¹ (5 BGY) of renewable diesel in the United States from Chlorella biomass by hydrothermal liquefaction (HTL). The HTL model utilized data from a continuous 1-L reactor including catalytic hydrothermal gasification of the aqueous phase, and catalytic hydrotreatment of the HTL oil. A biophysical algae growth model coupled with weather and pond simulations predicted biomass productivity from experimental growth parameters, allowing site-by-site and temporal prediction of biomass production. The 5 BGY scale required geographically and climatically distributed sites. Even though screening down to 5 BGY significantly reduced spatial and temporal variability, site-to-site, season-to-season, and interannual variations in productivity affected economic and environmental performance. Performance metrics based on annual average or peak productivity were inadequate; temporally and spatially explicit computations allowed more rigorous analysis of these dynamic systems. For example, 3-season operation with a winter shutdown was favored to avoid high greenhouse gas emissions, but economic performance was harmed by underutilized equipment during slow-growth periods. Thus, analysis of algal biofuel pathways must combine spatiotemporal resource assessment, economic analysis, and environmental analysis integrated over many sites when assessing national scale performance.

  7. Assessment of small-scale integrated water vapour variability during HOPE

    Science.gov (United States)

    Steinke, S.; Eikenberg, S.; Löhnert, U.; Dick, G.; Klocke, D.; Di Girolamo, P.; Crewell, S.

    2015-03-01

    The spatio-temporal variability of integrated water vapour (IWV) on small scales of less than 10 km and hours is assessed with data from the 2 months of the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)²) Observational Prototype Experiment (HOPE). The statistical intercomparison of the unique set of observations during HOPE (microwave radiometer (MWR), Global Positioning System (GPS), sun photometer, radiosondes, Raman lidar, infrared and near-infrared Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellites Aqua and Terra) measuring close together reveals a good agreement in terms of random differences (standard deviation ≤ 1 kg m⁻²) and correlation coefficient (≥ 0.98). The exception is MODIS, which appears to suffer from insufficient cloud filtering. For a case study during HOPE featuring a typical boundary layer development, the IWV variability in time and space on scales of less than 10 km and less than 1 h is investigated in detail. For this purpose, the measurements are complemented by simulations with the novel ICOsahedral Nonhydrostatic modelling framework (ICON), which for this study has a horizontal resolution of 156 m. These runs show that differences in space of 3-4 km or time of 10-15 min induce IWV variabilities on the order of 0.4 kg m⁻². This model finding is confirmed by observed time series from two MWRs approximately 3 km apart with a comparable temporal resolution of a few seconds. Standard deviations of IWV derived from MWR measurements reveal a high variability (> 1 kg m⁻²) even at very short time scales of a few minutes. These cannot be captured by the temporally lower-resolved instruments or by operational numerical weather prediction models such as COSMO-DE (an application of the Consortium for Small-scale Modelling covering Germany) of Deutscher Wetterdienst, which is included in the comparison. However, for time scales larger than 1 h, a sampling resolution of 15 min is [...]
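
    The kind of statistic quoted above, the standard deviation of IWV within short time windows, can be computed directly from a sampled series. A small Python sketch follows; the window length and units are illustrative assumptions:

        import numpy as np

        def iwv_variability(series, window):
            """Standard deviation of IWV within consecutive windows of given length.

            series: 1-D array of IWV samples (kg m-2) at a fixed time resolution;
            window: number of samples per interval (e.g. a few minutes of MWR data).
            """
            n = len(series) // window
            chunks = np.asarray(series[: n * window]).reshape(n, window)
            return chunks.std(axis=1)  # one standard deviation per window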

  8. Scale Expansion of Community Investigations and Integration of the Effects of Abiotic and Biotic Processes on Maintenance of Species Diversity

    Directory of Open Access Journals (Sweden)

    Zhenhong Wang

    2011-01-01

    Full Text Available Information on the maintenance of diversity patterns from regional to local scales is dispersed among academic fields due to the local focus of community ecology. To better understand these patterns, the study of ecological communities needs to be expanded to larger scales and the various processes affecting them need to be integrated using a suitable quantitative method. We determined a range of communities at a flora-subregional scale in Yunnan province, China (383,210.02 km²). A series of species pools were delimited from the regional to plot scales. Plant diversity was evaluated and abiotic and biotic processes were identified at each pool level. The species pool effect was calculated using an innovative model, and the contribution of these processes to the maintenance of plant species diversity was determined and integrated: climate had the greatest effect at the flora-subregional scale, with historical and evolutionary processes contributing ∼11%; climate and human disturbance had the greatest effect at the local site pool scale; competitive exclusion and stress limitation explained strong filtering at the successional stage pool scale; biotic processes contributed more at the local community scale than at the regional scale. Scale expansion combined with the filtering model approach addresses the local focus problem in community ecology.

  9. Scaling issues in multi-criteria evaluation of combinations of measures for integrated river basin management

    Science.gov (United States)

    Dietrich, Jörg

    2016-05-01

    In integrated river basin management, measures for reaching the environmental objectives can be evaluated at different scales and according to multiple criteria of different natures (e.g. ecological, economic, social). Decision makers, including responsible authorities and stakeholders, follow different interests regarding criteria and scales. With a bottom-up approach, the multi-criteria assessment could produce a different outcome than with a top-down approach. The first assigns more power to the local community, which is a common principle of IWRM. On the other hand, the development of an overall catchment strategy could potentially make use of synergetic effects of the measures, which fulfils the cost-efficiency requirement at the basin scale but compromises local interests. Within a joint research project for the 5500 km² Werra river basin in central Germany, measures have been planned to reach the environmental objectives of the European Water Framework Directive (WFD) regarding ecological continuity and nutrient loads. The main criteria for the evaluation of the measures were costs of implementation, reduction of nutrients, ecological benefit and social acceptance. The multi-criteria evaluation of the catchment strategies showed compensation between positive and negative performance of criteria within the catchment, which in the end reduced the discriminative power of the different strategies. Furthermore, benefit criteria are partially computed for the whole basin only. Both ecological continuity and nutrient load show upstream-downstream effects in opposite directions. The principles of "polluter pays" and "overall cost efficiency" can be followed for the reduction of nutrient losses when financial compensations between upstream and downstream users are made, similar to concepts of emission trading.
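
    One common way to aggregate such criteria is a weighted sum over a normalised performance matrix. The study's actual aggregation method is not specified in the abstract, so the following Python sketch is only an illustration of the general multi-criteria scoring idea, with made-up weights and scores:

        import numpy as np

        def weighted_scores(performance, weights):
            """Aggregate a strategies-by-criteria performance matrix into one
            score per candidate strategy via a weighted sum.

            performance: rows = strategies, columns = criteria, normalised to
            [0, 1] with larger meaning better; weights sum to 1.
            """
            return np.asarray(performance) @ np.asarray(weights)

        # Criteria order: cost, nutrient reduction, ecological benefit, acceptance
        strategies = [[0.7, 0.5, 0.6, 0.8],
                      [0.4, 0.9, 0.8, 0.5]]
        print(weighted_scores(strategies, [0.4, 0.3, 0.2, 0.1]))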

  10. Large-scale offshore wind energy. Cost analysis and integration in the Dutch electricity market

    International Nuclear Information System (INIS)

    De Noord, M.

    1999-02-01

    The results of an analysis of the construction and integration costs of large-scale offshore wind energy (OWE) farms in 2010 are presented. The integration of these farms (1 and 5 GW) in the Dutch electricity distribution system has been considered against the background of a liberalised electricity market. A first step is taken towards determining the costs involved in solving integration problems. Three different types of foundations are examined: the mono-pile, the jacket and a new type of foundation, the concrete caisson pile, all single-turbine-single-support structures. For real offshore applications (>10 km offshore, at sea depths >20 m), the concrete caisson pile is regarded as the most suitable. The price/power ratios of wind turbines are analysed. It is assumed that in 2010 turbines in the power range of 3-5 MW are available. The main calculations have been conducted for a 3 MW turbine. The main choice in electrical infrastructure is between AC and DC. Calculations show that at distances of 30 km offshore and more, the use of HVDC will result in higher initial costs but lower operating costs. The share of operation and maintenance (O&M) costs in the kWh cost price is approximately 3.3%. To be able to compare the two farms, a base case is derived with a construction time of 10 years for both. The energy yield is calculated for an offshore wind regime with a 9.0 m/s annual mean wind speed. Per 3 MW turbine this results in an annual energy production of approximately 12 GWh. The total farm efficiency amounts to 82%, resulting in a total farm capacity factor of 38%. With a required internal rate of return of 15%, the kWh cost price amounts to 0.24 DFl and 0.21 DFl for the 1 GW and 5 GW farms, respectively, in the base case. The required internal rate of return has a large effect on the kWh cost price, followed by the costs of subsystems. O&M costs have little effect on the cost price. Parameter studies show that a small cost reduction of 5% is possible when
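
    The abstract's yield figures can be cross-checked with a short calculation using only numbers quoted in the record (3 MW rating, 12 GWh annual yield, 82% farm efficiency):

        turbine_rating_mw = 3.0   # quoted turbine rating
        annual_yield_gwh = 12.0   # quoted annual energy production per turbine
        farm_efficiency = 0.82    # quoted total farm efficiency

        hours_per_year = 8760
        turbine_cf = annual_yield_gwh * 1000 / (turbine_rating_mw * hours_per_year)
        farm_cf = turbine_cf * farm_efficiency
        print(round(turbine_cf, 3), round(farm_cf, 3))  # ~0.457 and ~0.374, close to the quoted 38%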

  11. Improvement of CMOS VLSI rad tolerance by processing technics

    International Nuclear Information System (INIS)

    Guyomard, D.; Desoutter, I.

    1986-01-01

    This study concerns the development of integrated circuits for applications requiring only relatively low radiation tolerance levels, especially the civilian space sector. Process modifications constitute the basis of our study and have been carried into effect. Our work and main results are reported in this paper. Well-known 2.5 and 3 μm CMOS technologies are considered. A first set of modifications enabled us to double the cumulative dose tolerance of a 4 Kbit SRAM while keeping the same kind of damage; we obtain memories which tolerate radiation doses as high as 16 krad(Si). The repeatability of these results, linked to the quality assurance of this specific circuit, is reported here. A second set of modifications concerns the processing of gate arrays; in particular, the choice of the silicon substrate type (epitaxial substrate) is under investigation. In addition, a complete study of a test vehicle allows us to accurately measure the radiation tolerance of various components of the cell library [fr]

  12. Integrating continental-scale ecological data into university courses: Developing NEON's Online Learning Portal

    Science.gov (United States)

    Wasser, L. A.; Gram, W.; Lunch, C. K.; Petroy, S. B.; Elmendorf, S.

    2013-12-01

    'Big Data' are becoming increasingly common in many fields. The National Ecological Observatory Network (NEON) will be collecting data over 30 years, using consistent, standardized methods across the United States. Similar efforts are underway in other parts of the globe (e.g. Australia's Terrestrial Ecosystem Research Network, TERN). These freely available new data provide an opportunity for increased understanding of continental- and global-scale processes such as changes in vegetation structure and condition, biodiversity and land use. However, while 'big data' are becoming more accessible and available, integrating them into university courses is challenging. New and potentially unfamiliar data types, and the processing methods required to work with a growing diversity of available data, may demand time and resources that present a barrier to classroom integration. Analysis of these big datasets may further present a challenge given large file sizes and uncertainty regarding the best methods to properly summarize and analyze results statistically. Finally, teaching resources, in the form of demonstrative illustrations and other supporting media that might help teach key data concepts, take time to find and more time to develop. Available resources are often spread widely across multiple online spaces. This presentation will give an overview of the development of NEON's collaborative university-focused online education portal. Portal content will include 1) interactive, online multimedia content that explains key concepts related to NEON's data products, including collection methods, key metadata to consider and consideration of potential error and uncertainty surrounding data analysis; and 2) packaged 'lab' activities that include supporting data to be used in an ecology, biology or earth science classroom. To facilitate broad use in classrooms, lab activities will take advantage of freely and commonly available processing tools, techniques and scripts. All [...]

  13. An integrated health sector response to violence against women in Malaysia: lessons for supporting scale up

    Directory of Open Access Journals (Sweden)

    Colombini Manuela

    2012-07-01

    Full Text Available Abstract Background Malaysia has been at the forefront of the development and scale-up of One-Stop Crisis Centres (OSCC) - an integrated health sector model that provides comprehensive care to women and children experiencing physical, emotional and sexual abuse. This study explored the strengths and challenges faced during the scaling up of the OSCC model to two states in Malaysia, in order to identify lessons for supporting successful scale-up. Methods In-depth interviews were conducted with health care providers, policy makers and key informants in 7 hospital facilities. This was complemented by a document analysis of hospital records and protocols. Data were coded and analysed using NVivo 7. Results The implementation of the OSCC model differed between hospital settings, with practice being influenced by organisational systems and constraints. Health providers generally tried to offer care to abused women, but they were not fully supported within their facility due to a lack of training, time constraints, limited allocated budget, or the lack of a referral system to external support services. Non-specialised hospitals in both states struggled with a scarcity of specialised staff and limited referral options for abused women. Despite these challenges, even in more resource-constrained settings, staff who took the initiative found it was possible to adapt and provide some level of OSCC services, such as referring women to local NGOs or community support groups, or training nurses to offer basic counselling. Conclusions The national implementation of OSCC provides a potentially important source of support for women experiencing violence. Our findings confirm that pilot interventions for health sector responses to gender-based violence can be scaled up only when there is a sound health infrastructure in place – in other words, a supportive health system. Furthermore, the successful replication of the OSCC model in other similar settings requires that the

  14. Monolithic Ge-on-Si lasers for large-scale electronic-photonic integration

    Science.gov (United States)

    Liu, Jifeng; Kimerling, Lionel C.; Michel, Jurgen

    2012-09-01

    A silicon-based monolithic laser source has long been envisioned as a key enabling component for large-scale electronic-photonic integration in future generations of high-performance computation and communication systems. In this paper we present a comprehensive review on the development of monolithic Ge-on-Si lasers for this application. Starting with a historical review of light emission from the direct gap transition of Ge dating back to the 1960s, we focus on the rapid progress in band-engineered Ge-on-Si lasers in the past five years after a nearly 30-year gap in this research field. Ge has become an interesting candidate for active devices in Si photonics in the past decade due to its pseudo-direct gap behavior and compatibility with Si complementary metal oxide semiconductor (CMOS) processing. In 2007, we proposed combining tensile strain with n-type doping to compensate the energy difference between the direct and indirect band gap of Ge, thereby achieving net optical gain for CMOS-compatible diode lasers. Here we systematically present theoretical modeling, material growth methods, spontaneous emission, optical gain, and lasing under optical and electrical pumping from band-engineered Ge-on-Si, culminating in recently demonstrated electrically pumped Ge-on-Si lasers with >1 mW output in the communication wavelength window of 1500-1700 nm. The broad gain spectrum enables on-chip wavelength division multiplexing. A unique feature of band-engineered pseudo-direct gap Ge light emitters is that the emission intensity increases with temperature, exactly opposite to conventional direct gap semiconductor light-emitting devices. This extraordinary thermal anti-quenching behavior greatly facilitates monolithic integration on Si microchips where temperatures can reach up to 80 °C during operation. The same band-engineering approach can be extended to other pseudo-direct gap semiconductors, allowing us to achieve efficient light emission at wavelengths previously

  15. Using integrated modeling for generating watershed-scale dynamic flood maps for Hurricane Harvey

    Science.gov (United States)

    Saksena, S.; Dey, S.; Merwade, V.; Singhofen, P. J.

    2017-12-01

    Hurricane Harvey, which was categorized as a 1000-year return period event, produced unprecedented rainfall and flooding in Houston. Although the expected rainfall was forecast well before the event, there was no way to identify which regions were at higher risk of flooding, the magnitude of flooding, and when the impacts of the rainfall would be highest. The inability to predict the location, duration, and depth of flooding created uncertainty over evacuation planning and preparation. This catastrophic event highlighted that the conventional approach to managing flood risk using 100-year static flood inundation maps is inadequate because of its inability to predict flood duration and extent for 500-year or 1000-year return period events in real time. The purpose of this study is to create models that can dynamically predict the impacts of rainfall and subsequent flooding, so that necessary evacuation and rescue efforts can be planned in advance. This study uses a 2D integrated surface water-groundwater model called ICPR (Interconnected Channel and Pond Routing) to simulate both the hydrology and hydrodynamics of Hurricane Harvey. The methodology involves using the NHD stream network to create a 2D model that incorporates rainfall, land use, vadose zone properties and topography to estimate streamflow and generate dynamic flood depths and extents. The results show that dynamic flood mapping captures the flood hydrodynamics more accurately and is able to predict the magnitude, extent and time of occurrence for extreme events such as Hurricane Harvey. Therefore, integrated modeling has the potential to identify regions that are more susceptible to flooding, which is especially useful for large-scale planning and the allocation of resources for protection against future flood risk.

  16. Monolithic Ge-on-Si lasers for large-scale electronic–photonic integration

    International Nuclear Information System (INIS)

    Liu, Jifeng; Kimerling, Lionel C; Michel, Jurgen

    2012-01-01

    A silicon-based monolithic laser source has long been envisioned as a key enabling component for large-scale electronic–photonic integration in future generations of high-performance computation and communication systems. In this paper we present a comprehensive review on the development of monolithic Ge-on-Si lasers for this application. Starting with a historical review of light emission from the direct gap transition of Ge dating back to the 1960s, we focus on the rapid progress in band-engineered Ge-on-Si lasers in the past five years after a nearly 30-year gap in this research field. Ge has become an interesting candidate for active devices in Si photonics in the past decade due to its pseudo-direct gap behavior and compatibility with Si complementary metal oxide semiconductor (CMOS) processing. In 2007, we proposed combining tensile strain with n-type doping to compensate the energy difference between the direct and indirect band gap of Ge, thereby achieving net optical gain for CMOS-compatible diode lasers. Here we systematically present theoretical modeling, material growth methods, spontaneous emission, optical gain, and lasing under optical and electrical pumping from band-engineered Ge-on-Si, culminating in recently demonstrated electrically pumped Ge-on-Si lasers with >1 mW output in the communication wavelength window of 1500–1700 nm. The broad gain spectrum enables on-chip wavelength division multiplexing. A unique feature of band-engineered pseudo-direct gap Ge light emitters is that the emission intensity increases with temperature, exactly opposite to conventional direct gap semiconductor light-emitting devices. This extraordinary thermal anti-quenching behavior greatly facilitates monolithic integration on Si microchips where temperatures can reach up to 80 °C during operation. The same band-engineering approach can be extended to other pseudo-direct gap semiconductors, allowing us to achieve efficient light emission at wavelengths previously

  17. Measurement of Galaxy Cluster Integrated Comptonization and Mass Scaling Relations with the South Pole Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Saliwanchik, B. R.; et al.

    2015-01-22

    We describe a method for measuring the integrated Comptonization (Y_SZ) of clusters of galaxies from measurements of the Sunyaev-Zel'dovich (SZ) effect in multiple frequency bands, and use this method to characterize a sample of galaxy clusters detected in the South Pole Telescope (SPT) data. We use a Markov chain Monte Carlo method to fit a β-model source profile and integrate Y_SZ within an angular aperture on the sky. In simulated observations of an SPT-like survey that include cosmic microwave background anisotropy, point sources, and atmospheric and instrumental noise at typical SPT-SZ survey levels, we show that we can accurately recover β-model parameters for input clusters. We measure Y_SZ for simulated semi-analytic clusters and find that Y_SZ is most accurately determined in an angular aperture comparable to the SPT beam size. We demonstrate the utility of this method to measure Y_SZ and to constrain mass scaling relations using X-ray mass estimates for a sample of 18 galaxy clusters from the SPT-SZ survey. Measuring Y_SZ within a 0.75' radius aperture, we find an intrinsic log-normal scatter of 21% ± 11% in Y_SZ at a fixed mass. Measuring Y_SZ within a 0.3 Mpc projected radius (equivalent to 0.75' at the survey median redshift z = 0.6), we find a scatter of 26% ± 9%. Prior to this study, the SPT observable found to have the lowest scatter with mass was cluster detection significance. We demonstrate, from both simulations and SPT observed clusters, that Y_SZ measured within an aperture comparable to the SPT beam size is equivalent, in terms of scatter with cluster mass, to SPT cluster detection significance.
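
    As a rough illustration of the aperture integration described above, the following sketch numerically integrates a β-model Compton-y profile within a circular aperture. The profile parameters (y0, theta_c, beta) are invented placeholders, not fitted SPT values; only the 0.75' aperture is taken from the abstract.

        # Illustrative numerical integration of a beta-model Compton-y profile
        # within a circular aperture, in the spirit of the Y_SZ estimate above.
        import numpy as np

        y0 = 1e-4          # central Compton parameter (assumed)
        theta_c = 0.25     # core radius in arcmin (assumed)
        beta = 1.0         # beta-model shape parameter (assumed)
        aperture = 0.75    # integration aperture in arcmin (0.75' as in the abstract)

        def y_profile(theta):
            """Projected beta-model: y0 * (1 + (theta/theta_c)^2)^((1 - 3*beta)/2)."""
            return y0 * (1.0 + (theta / theta_c) ** 2) ** ((1.0 - 3.0 * beta) / 2.0)

        # Y_SZ = integral over the aperture of y(theta) * 2*pi*theta dtheta
        theta = np.linspace(0.0, aperture, 2001)
        Y_sz = np.trapz(y_profile(theta) * 2.0 * np.pi * theta, theta)

        print(f"Y_SZ within {aperture}' aperture: {Y_sz:.3e} arcmin^2")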

  18. Scale-up and integration of alkaline hydrogen peroxide pretreatment, enzymatic hydrolysis, and ethanolic fermentation.

    Science.gov (United States)

    Banerjee, Goutami; Car, Suzana; Liu, Tongjun; Williams, Daniel L; Meza, Sarynna López; Walton, Jonathan D; Hodge, David B

    2012-04-01

    Alkaline hydrogen peroxide (AHP) has several attractive features as a pretreatment in the lignocellulosic biomass-to-ethanol pipeline. Here, the feasibility of scaling up the AHP process and integrating it with enzymatic hydrolysis and fermentation was studied. Corn stover (1 kg) was subjected to AHP pretreatment, hydrolyzed enzymatically, and the resulting sugars fermented to ethanol. The AHP pretreatment was performed at 0.125 g H2O2/g biomass, 22°C, and atmospheric pressure for 48 h with periodic pH readjustment. The enzymatic hydrolysis was performed in the same reactor following pH neutralization of the biomass slurry and without washing. After 48 h, glucose and xylose yields were 75% and 71% of the theoretical maximum. Sterility was maintained during pretreatment and enzymatic hydrolysis without the use of antibiotics. During fermentation using a glucose- and xylose-utilizing strain of Saccharomyces cerevisiae, all of the glucose and 67% of the xylose were consumed in 120 h. The final ethanol titer was 13.7 g/L. Treatment of the enzymatic hydrolysate with activated carbon prior to fermentation had little effect on glucose fermentation but markedly improved utilization of xylose, presumably due to the removal of soluble aromatic inhibitors. The results indicate that AHP is readily scalable and can be integrated with enzyme hydrolysis and fermentation. Compared to other leading pretreatments for lignocellulosic biomass, AHP has potential advantages with regard to capital costs, process simplicity, feedstock handling, and compatibility with enzymatic deconstruction and fermentation. Biotechnol. Bioeng. 2012; 109:922-931. © 2011 Wiley Periodicals, Inc.

  19. Collaborative Catchment-Scale Water Quality Management using Integrated Wireless Sensor Networks

    Science.gov (United States)

    Zia, Huma; Harris, Nick; Merrett, Geoff

    2013-04-01

    Electronics and Computer Science, University of Southampton, United Kingdom. Summary: The challenge of improving water quality (WQ) is a growing global concern [1]. Poor WQ is mainly attributed to poor water management and outdated agricultural activities. We propose that collaborative sensor networks spread across an entire catchment can allow cooperation among individual activities for integrated WQ monitoring and management. We show that sharing information on critical parameters among networks of water bodies and farms can enable identification and quantification of contaminant sources, enabling better decision making for agricultural practices and thereby reducing contaminant fluxes. Motivation and results: Nutrient losses from land to water have accelerated due to agricultural and urban pursuits [2]. In many cases, the application of fertiliser can be reduced by 30-50% without any loss of yield [3]. Thus information about nutrient levels and trends around the farm can improve agricultural practices and thereby reduce water contamination. The use of sensor networks for monitoring WQ in a catchment is in its infancy, but more applications are being tested [4]. However, these are focussed on local requirements and are mostly limited to water bodies. They have yet to explore the use of this technology for catchment-scale monitoring and management decisions, in an autonomous and dynamic manner. For effective and integrated WQ management, we propose a system that utilises local monitoring networks across a catchment, with provision for collaborative information sharing. This system of networks shares information about critical events, such as rain or flooding. Higher-level applications make use of this information to inform decisions about nutrient management, improving the quality of monitoring through the provision of richer datasets of catchment information to local networks. In the full paper, we present example scenarios and analyse how the benefits of

  20. Large Scale Integration of Renewable Power Sources into the Vietnamese Power System

    Science.gov (United States)

    Kies, Alexander; Schyska, Bruno; Thanh Viet, Dinh; von Bremen, Lueder; Heinemann, Detlev; Schramm, Stefan

    2017-04-01

    The Vietnamese power system is expected to expand considerably in upcoming decades. Installed power capacities are projected to grow from 39 GW in 2015 to 129.5 GW by 2030. Installed wind power capacities are expected to grow to 6 GW (0.8 GW in 2015) and solar power capacities to 12 GW (0.85 GW in 2015). This goes hand in hand with an increase of the renewable penetration in the power mix from 1.3% from wind and photovoltaics (PV) in 2015 to 5.4% by 2030. The overall potential for wind power in Vietnam is estimated to be around 24 GW. Moreover, the up-scaling of renewable energy sources was formulated as one of the prioritized targets of the Vietnamese government in the National Power Development Plan VII. In this work, we investigate the transition of the Vietnamese power system towards high shares of renewables. For this purpose, we jointly optimise the expansion of renewable generation facilities for wind and PV, and the transmission grid, within renewable build-up pathways until 2030 and beyond. To simulate the Vietnamese power system and its generation from renewable sources, we use highly spatially and temporally resolved historical weather and load data and the open-source modelling toolbox Python for Power System Analysis (PyPSA). We show that the highest potential of renewable generation for wind and PV is observed in southern Vietnam and discuss the resulting need for transmission grid extensions as a function of the optimal pathway. Furthermore, we show that the smoothing effect of wind power has several considerable beneficial effects and that the Vietnamese hydro power potential can be efficiently used to provide balancing opportunities. This work is part of the R&D project "Analysis of the Large Scale Integration of Renewable Power into the Future Vietnamese Power System" (GIZ, 2016-2018).
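
    PyPSA is named in the abstract as the modelling toolbox. The sketch below shows, on a two-bus toy network, the kind of joint generation-and-grid expansion problem it solves; all costs and availability values are invented, and a recent PyPSA release where Network.optimize() is available is assumed.

        # Toy PyPSA sketch (not the study's model): jointly size a wind generator
        # and a transmission line against a fixed load over three snapshots.
        import pypsa

        n = pypsa.Network()
        n.set_snapshots([0, 1, 2])                 # three representative hours

        n.add("Bus", "south")                      # high wind-resource region
        n.add("Bus", "north")                      # load centre

        n.add("Line", "s-n", bus0="south", bus1="north",
              x=0.1, s_nom_extendable=True, capital_cost=50.0)

        n.add("Generator", "wind", bus="south",
              p_nom_extendable=True, capital_cost=100.0, marginal_cost=0.0,
              p_max_pu=[0.3, 0.8, 0.1])            # per-unit wind availability

        n.add("Generator", "gas", bus="north",
              p_nom_extendable=True, capital_cost=70.0, marginal_cost=60.0)

        n.add("Load", "demand", bus="north", p_set=100.0)

        n.optimize()                               # linear optimal power flow
        print(n.generators.p_nom_opt)              # optimised capacities in MW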

  1. An SEU analysis approach for error propagation in digital VLSI CMOS ASICs

    International Nuclear Information System (INIS)

    Baze, M.P.; Bartholet, W.G.; Dao, T.A.; Buchner, S.

    1995-01-01

    A critical issue in the development of ASIC designs is the ability to achieve first-pass fabrication success. Unsuccessful fabrication runs have a serious impact on ASIC costs and schedules. The ability to predict an ASIC's radiation response prior to fabrication is therefore a key issue when designing ASICs for military and aerospace systems. This paper describes an analysis approach for calculating static bit error propagation in synchronous VLSI CMOS circuits, developed as an aid for predicting the SEU response of ASICs. The technique is intended for eventual application as an ASIC development simulation tool which can be used by circuit design engineers for performance evaluation during the pre-fabrication design process, in much the same way that logic and timing simulators are used.

  2. Analog VLSI Models of Range-Tuned Neurons in the Bat Echolocation System

    Directory of Open Access Journals (Sweden)

    Horiuchi Timothy

    2003-01-01

    Full Text Available Bat echolocation is a fascinating topic of research for both neuroscientists and engineers, due to the complex and extremely time-constrained nature of the problem and its potential for application to engineered systems. In the bat's brainstem and midbrain exist neural circuits that are sensitive to the specific difference in time between the outgoing sonar vocalization and the returning echo. While some of the details of the neural mechanisms are known to be species-specific, a basic model of reafference-triggered, postinhibitory rebound timing is reasonably well supported by available data. We have designed low-power, analog VLSI circuits to mimic this mechanism and have demonstrated range-dependent outputs for use in a real-time sonar system. These circuits are being used to implement range-dependent vocalization amplitude, vocalization rate, and closest target isolation.

  3. Real time track finding in a drift chamber with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-01-01

    In a test setup, a hardware neural network determined track parameters of charged particles traversing a drift chamber. Voltages proportional to the drift times in 6 cells of the 3-layer chamber were inputs to the Intel ETANN neural network chip which had been trained to give the slope and intercept of tracks. We compare network track parameters to those obtained from off-line track fits. To our knowledge this is the first on-line application of a VLSI neural network to a high energy physics detector. This test explored the potential of the chip and the practical problems of using it in a real world setting. We compare the chip performance to a neural network simulation on a conventional computer. We discuss possible applications of the chip in high energy physics detector triggers. (orig.)
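
    The off-line reference the chip outputs were compared against is an ordinary straight-line track fit. A minimal sketch of such a fit is given below; the chamber geometry, drift velocity and drift times are invented for illustration, and the left/right ambiguity is ignored.

        # Minimal sketch of the off-line reference computation: a least-squares
        # fit of track slope and intercept to hit positions in a three-layer
        # chamber. Geometry and drift velocity are made-up placeholders.
        import numpy as np

        drift_velocity = 50.0                   # um/ns, assumed
        layer_z = np.array([0.0, 5.0, 10.0])    # layer positions in cm, assumed
        wire_x = np.array([1.0, 1.2, 1.4])      # sense-wire positions in cm, assumed
        drift_times = np.array([100.0, 140.0, 180.0])   # drift times in ns

        # Convert drift times to transverse positions (ignoring left/right ambiguity)
        x = wire_x + drift_times * drift_velocity * 1e-4    # um -> cm

        # Fit x = slope*z + intercept; the trained network approximates this output
        slope, intercept = np.polyfit(layer_z, x, 1)
        print(f"slope = {slope:.4f}, intercept = {intercept:.4f} cm")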

  4. Ant System-Corner Insertion Sequence: An Efficient VLSI Hard Module Placer

    Directory of Open Access Journals (Sweden)

    HOO, C.-S.

    2013-02-01

    Full Text Available Placement is important in VLSI physical design as it determines the time-to-market and the chip's reliability. In this paper, a new floorplan representation coupled with the Ant System, namely the Corner Insertion Sequence (CIS), is proposed. Although CIS's search complexity is smaller than that of the state-of-the-art representation Corner Sequence (CS), CIS adopts a preset boundary on the placement and hence achieves a search bound similar to that of CS. This enables the previously unutilized corner edges to become viable. Also, the redundancy of the CS representation is eliminated in CIS, leading to lower search complexity. Experimental results on the Microelectronics Center of North Carolina (MCNC) hard block benchmark circuits show that the proposed algorithm performs comparably in terms of area yet runs at least two times faster than CS.

  5. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    Science.gov (United States)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.
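
    The transform / pointwise-multiply / inverse-transform flow described above can be illustrated in software with a Fermat number transform modulo F4 = 2^16 + 1. The sketch below uses a naive O(N^2) transform for clarity (the hardware pipeline instead uses fast butterflies and bit rotations) and is not the paper's architecture.

        # Illustrative Fermat number transform (FNT) convolution modulo F4.
        P = 2**16 + 1                         # Fermat number F4 = 65537

        def ntt(a, root):
            """Naive number-theoretic transform of a (mod P) with the given root."""
            n = len(a)
            return [sum(a[j] * pow(root, i * j, P) for j in range(n)) % P
                    for i in range(n)]

        def fnt_convolve(x, h):
            """Linear convolution of integer sequences x and h via an FNT."""
            n = 1
            while n < len(x) + len(h) - 1:    # zero-pad to a power-of-two length
                n *= 2
            root = pow(3, (P - 1) // n, P)    # 3 is a primitive root mod F4
            X = ntt(x + [0] * (n - len(x)), root)
            H = ntt(h + [0] * (n - len(h)), root)
            Y = [(a * b) % P for a, b in zip(X, H)]
            y = ntt(Y, pow(root, P - 2, P))   # inverse transform with root^-1
            n_inv = pow(n, P - 2, P)
            return [(v * n_inv) % P for v in y]

        print(fnt_convolve([1, 2, 3], [4, 5]))    # -> [4, 13, 22, 15]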

  6. Integrated calibration of a 3D attitude sensor in large-scale metrology

    International Nuclear Information System (INIS)

    Gao, Yang; Lin, Jiarui; Yang, Linghui; Zhu, Jigui; Muelaner, Jody; Keogh, Patrick

    2017-01-01

    A novel calibration method is presented for a multi-sensor fusion system in large-scale metrology, which improves the calibration efficiency and reliability. The attitude sensor is composed of a pinhole prism, a converging lens, an area-array camera and a biaxial inclinometer. A mathematical model is established to determine its 3D attitude relative to a cooperative total station by using two vector observations from the imaging system and the inclinometer. There are two areas of unknown parameters in the measurement model that should be calibrated: the intrinsic parameters of the imaging model, and the transformation matrix between the camera and the inclinometer. An integrated calibration method using a three-axis rotary table and a total station is proposed. A single mounting position of the attitude sensor on the rotary table is sufficient to solve for all parameters of the measurement model. A correction technique for the reference laser beam of the total station is also presented to remove the need for accurate positioning of the sensor on the rotary table. Experimental verification has proved the practicality and accuracy of this calibration method. Results show that the mean deviations of attitude angles using the proposed method are less than 0.01°. (paper)
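
    Determining attitude from two vector observations, as the measurement model above does with the imaging system and the inclinometer, is classically solved by the TRIAD construction. The sketch below is that generic construction, not the authors' formulation; all vectors are invented.

        # Generic TRIAD-style attitude determination from two vector observations,
        # analogous in spirit to fusing a camera line-of-sight with an
        # inclinometer gravity vector (not the paper's exact math).
        import numpy as np

        def triad(v1_body, v2_body, v1_ref, v2_ref):
            """Rotation matrix taking body-frame vectors to the reference frame."""
            def frame(a, b):
                t1 = a / np.linalg.norm(a)
                t2 = np.cross(a, b)
                t2 /= np.linalg.norm(t2)
                t3 = np.cross(t1, t2)
                return np.column_stack([t1, t2, t3])
            return frame(v1_ref, v2_ref) @ frame(v1_body, v2_body).T

        # Example: gravity (inclinometer) and a line-of-sight vector (camera)
        g_body = np.array([0.1, 0.0, 0.99])
        los_body = np.array([0.9, 0.1, 0.1])
        g_ref = np.array([0.0, 0.0, 1.0])
        los_ref = np.array([0.92, 0.05, 0.11])

        R = triad(g_body, los_body, g_ref, los_ref)
        print(np.round(R, 3))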

  7. Fractional statistics and quantum scaling properties of the integrable Penson-Kolb-Hubbard chain

    Science.gov (United States)

    Vitoriano, Carlindo; Coutinho-Filho, M. D.

    2010-09-01

    We investigate the ground-state and low-temperature properties of the integrable version of the Penson-Kolb-Hubbard chain. The model obeys fractional statistical properties, which give rise to fractional elementary excitations and manifest differently in the four regions of the phase diagram U/t versus n, where U is the Coulomb coupling, t is the correlated hopping amplitude, and n is the particle density. In fact, we can find local pair formation, fractionalization of the average occupation number per orbital k, or U- and n-dependent average electric charge per orbital k. We also study the scaling behavior near the U-driven quantum phase transitions and characterize their universality classes. Finally, it is shown that in the regime of parameters where local pair formation is energetically more favorable, the ground state exhibits power-law superconductivity; we also stress that above half filling the pair-hopping term stabilizes local Cooper pairs in the repulsive-U regime for U

  8. Integrating large-scale data and RNA technology to protect crops from fungal pathogens

    Directory of Open Access Journals (Sweden)

    Ian Joseph Girard

    2016-05-01

    Full Text Available With a rapidly growing human population it is expected that plant science researchers and the agricultural community will need to increase food productivity using less arable land. This challenge is complicated by fungal pathogens and diseases, many of which can severely impact crop yield. Current measures to control fungal pathogens are either ineffective or have adverse effects on the agricultural enterprise. Thus, developing new strategies through research innovation to protect plants from pathogenic fungi is necessary to overcome these hurdles. RNA sequencing technologies are increasing our understanding of the underlying genes and gene regulatory networks mediating disease outcomes. The application of next-generation sequencing strategies to study plant-pathogen interactions has provided, and will continue to provide, unprecedented insight into the complex patterns of gene activity responsible for crop protection. However, questions remain about how biological processes in both the pathogen and the host are specified in space, directly at the site of infection, and over the infection period. The integration of cutting-edge molecular and computational tools will provide plant scientists with the arsenal required to identify genes and molecules that play a role in plant protection. Large-scale RNA sequence data can then be used to protect plants by targeting genes essential for pathogen viability in the production of stably transformed lines expressing RNA interference molecules, or through foliar applications of double-stranded RNA.

  9. Ensembl Genomes: an integrative resource for genome-scale data from non-vertebrate species.

    Science.gov (United States)

    Kersey, Paul J; Staines, Daniel M; Lawson, Daniel; Kulesha, Eugene; Derwent, Paul; Humphrey, Jay C; Hughes, Daniel S T; Keenan, Stephan; Kerhornou, Arnaud; Koscielny, Gautier; Langridge, Nicholas; McDowall, Mark D; Megy, Karine; Maheswari, Uma; Nuhn, Michael; Paulini, Michael; Pedro, Helder; Toneva, Iliana; Wilson, Derek; Yates, Andrew; Birney, Ewan

    2012-01-01

    Ensembl Genomes (http://www.ensemblgenomes.org) is an integrative resource for genome-scale data from non-vertebrate species. The project exploits and extends technology (for genome annotation, analysis and dissemination) developed in the context of the (vertebrate-focused) Ensembl project and provides a complementary set of resources for non-vertebrate species through a consistent set of programmatic and interactive interfaces. These provide access to data including reference sequence, gene models, transcriptional data, polymorphisms and comparative analysis. Since its launch in 2009, Ensembl Genomes has undergone rapid expansion, with the goal of providing coverage of all major experimental organisms, and additionally including taxonomic reference points to provide the evolutionary context in which genes can be understood. Against the backdrop of a continuing increase in genome sequencing activities in all parts of the tree of life, we seek to work, wherever possible, with the communities actively generating and using data, and are participants in a growing range of collaborations involved in the annotation and analysis of genomes.
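
    A hedged sketch of the programmatic access mentioned above: a REST lookup of a gene by symbol. The server name and route follow the Ensembl REST conventions as recalled here; consult the current Ensembl Genomes documentation before relying on either.

        # Hedged sketch of REST access to Ensembl Genomes; the server name and
        # the /lookup/symbol route are assumed from Ensembl REST conventions
        # and should be checked against the current documentation.
        import requests

        server = "https://rest.ensemblgenomes.org"
        route = "/lookup/symbol/arabidopsis_thaliana/FT"   # example species/gene

        r = requests.get(server + route,
                         headers={"Content-Type": "application/json"},
                         timeout=30)
        r.raise_for_status()
        info = r.json()
        print(info.get("id"), info.get("biotype"), info.get("description"))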

  10. Vedic division methodology for high-speed very large scale integration applications

    Directory of Open Access Journals (Sweden)

    Prabir Saha

    2014-02-01

    Full Text Available Transistor-level implementation of a division methodology using ancient Vedic mathematics is reported in this Letter. The 'Dhvajanka' (on top of the flag) formula was adopted from Vedic mathematics to implement this type of divider for practical very large scale integration applications. The division methodology was implemented using half of the divisor bits instead of the actual divisor, together with subtraction and a small amount of multiplication. Propagation delay and dynamic power consumption of the divider circuitry were minimised significantly by stage reduction through the Vedic division methodology. The functionality of the division algorithm was verified, and performance parameters such as propagation delay and dynamic power consumption were calculated through Spice (Spectre) simulation with 90 nm complementary metal oxide semiconductor technology. The propagation delay of the resulting (32 ÷ 16)-bit divider circuitry was only ∼300 ns and it consumed ∼32.5 mW power for a layout area of 17.39 mm^2. By combining Boolean arithmetic with ancient Vedic mathematics, a substantial number of iterations was eliminated, resulting in ∼47%, ∼38% and ∼34% reductions in delay and ∼34%, ∼21% and ∼18% reductions in power compared with the most commonly used architectures (e.g. digit-recurrence, Newton–Raphson, Goldschmidt).

  11. Integrated biodosimetry in large scale radiological events. Opportunities for civil military co-operation

    International Nuclear Information System (INIS)

    Port, M.; Eder, S.F.; Lamkowski, A.; Majewski, M.; Abend, M.

    2016-01-01

    Radiological events like large-scale radiological or nuclear accidents or terrorist attacks with radionuclide dispersal devices require rapid and precise medical classification ("triage") and medical management of a large number of patients. Estimates of the absorbed dose and, in particular, predictions of the radiation-induced health effects are mandatory for optimized allocation of limited medical resources and initiation of patient-centred treatment. Within the German Armed Forces Medical Service, the Bundeswehr Institute of Radiobiology offers a wide range of tools for the purpose of medical management to cope with different scenarios. The forward-deployable mobile Medical Task Force has access to state-of-the-art methodologies summarized into approaches such as physical dosimetry (including mobile gamma spectroscopy), clinical "dosimetry" (prodromi, H-Modul) and different means of biological dosimetry (e.g. dicentrics, high-throughput gene expression techniques, gamma-H2AX). The integration of these different approaches enables trained physicians of the Medical Task Force to assess individual health injuries as well as prognosis, considering modern treatment options. To enhance the capacity of single institutions, networking has been recognized as an important emergency response strategy. The capabilities of physical, biological and clinical "dosimetry" approaches, spanning from low up to high radiation exposures, will be discussed. Furthermore, civil-military opportunities for combined efforts will be demonstrated.

  12. NEON's Mobile Deployment Platform: A research tool for integrating ecological processes across scales

    Science.gov (United States)

    Sanclements, M.

    2016-12-01

    Here we provide an update on the construction of the five NEON Mobile Deployment Platforms (MDPs), as well as a description of the infrastructure and sensors available to researchers in the near future. Additionally, we include information (i.e. timelines and procedures) on requesting MDPs for PI-led projects. The MDPs will provide the means to observe stochastic or spatially important events, gradients, or quantities that cannot be reliably observed using fixed-location sampling (e.g. fires and floods). Due to the transient temporal and spatial nature of such events, the MDPs are designed to accommodate rapid deployment for time periods of up to 1 year. Broadly, the MDPs comprise infrastructure and instrumentation capable of functioning individually or in conjunction with one another to support observations of ecological change, as well as education, training and outreach. More specifically, the MDPs include the capability to make tower-based measurements of ecosystem exchange, radiation, and precipitation in conjunction with baseline soils data such as CO2 flux and soil temperature and moisture. An aquatics module is also available with the MDP to facilitate research integrating terrestrial and aquatic processes. Ultimately, the NEON MDPs provide a tool for linking PI-led research to the continental-scale data sets collected by NEON.

  13. Reframed Genome-Scale Metabolic Model to Facilitate Genetic Design and Integration with Expression Data.

    Science.gov (United States)

    Gu, Deqing; Jian, Xingxing; Zhang, Cheng; Hua, Qiang

    2017-01-01

    Genome-scale metabolic network models (GEMs) have played important roles in the design of genetically engineered strains and have helped biologists to decipher metabolism. However, due to the complex gene-reaction relationships that exist in model systems, most algorithms have limited capabilities with respect to directly predicting accurate genetic designs for metabolic engineering. In particular, methods that predict reaction knockout strategies leading to overproduction are often impractical in terms of gene manipulations. Recently, we proposed a method named logical transformation of model (LTM) to simplify the gene-reaction associations by introducing intermediate pseudo reactions, which makes it possible to generate genetic designs. Here, we propose an alternative method that relieves researchers from deciphering complex gene-reaction associations by adding pseudo gene-controlling reactions. In comparison to LTM, this new method introduces fewer pseudo reactions and generates a much smaller model system, named gModel. We show that gModel allows two seldom-reported applications: identification of minimal genomes and design of minimal cell factories within a modified OptKnock framework. In addition, gModel can be used to integrate expression data directly and improve the performance of the E-Fmin method for predicting fluxes. In conclusion, the model transformation procedure will facilitate genetic research based on GEMs, extending their applications.

  14. Identifying gene-environment interactions in schizophrenia: contemporary challenges for integrated, large-scale investigations.

    Science.gov (United States)

    van Os, Jim; Rutten, Bart P; Myin-Germeys, Inez; Delespaul, Philippe; Viechtbauer, Wolfgang; van Zelst, Catherine; Bruggeman, Richard; Reininghaus, Ulrich; Morgan, Craig; Murray, Robin M; Di Forti, Marta; McGuire, Philip; Valmaggia, Lucia R; Kempton, Matthew J; Gayer-Anderson, Charlotte; Hubbard, Kathryn; Beards, Stephanie; Stilo, Simona A; Onyejiaka, Adanna; Bourque, Francois; Modinos, Gemma; Tognin, Stefania; Calem, Maria; O'Donovan, Michael C; Owen, Michael J; Holmans, Peter; Williams, Nigel; Craddock, Nicholas; Richards, Alexander; Humphreys, Isla; Meyer-Lindenberg, Andreas; Leweke, F Markus; Tost, Heike; Akdeniz, Ceren; Rohleder, Cathrin; Bumb, J Malte; Schwarz, Emanuel; Alptekin, Köksal; Üçok, Alp; Saka, Meram Can; Atbaşoğlu, E Cem; Gülöksüz, Sinan; Gumus-Akay, Guvem; Cihan, Burçin; Karadağ, Hasan; Soygür, Haldan; Cankurtaran, Eylem Şahin; Ulusoy, Semra; Akdede, Berna; Binbay, Tolga; Ayer, Ahmet; Noyan, Handan; Karadayı, Gülşah; Akturan, Elçin; Ulaş, Halis; Arango, Celso; Parellada, Mara; Bernardo, Miguel; Sanjuán, Julio; Bobes, Julio; Arrojo, Manuel; Santos, Jose Luis; Cuadrado, Pedro; Rodríguez Solano, José Juan; Carracedo, Angel; García Bernardo, Enrique; Roldán, Laura; López, Gonzalo; Cabrera, Bibiana; Cruz, Sabrina; Díaz Mesa, Eva Ma; Pouso, María; Jiménez, Estela; Sánchez, Teresa; Rapado, Marta; González, Emiliano; Martínez, Covadonga; Sánchez, Emilio; Olmeda, Ma Soledad; de Haan, Lieuwe; Velthorst, Eva; van der Gaag, Mark; Selten, Jean-Paul; van Dam, Daniella; van der Ven, Elsje; van der Meer, Floor; Messchaert, Elles; Kraan, Tamar; Burger, Nadine; Leboyer, Marion; Szoke, Andrei; Schürhoff, Franck; Llorca, Pierre-Michel; Jamain, Stéphane; Tortelli, Andrea; Frijda, Flora; Vilain, Jeanne; Galliot, Anne-Marie; Baudin, Grégoire; Ferchiou, Aziz; Richard, Jean-Romain; Bulzacka, Ewa; Charpeaud, Thomas; Tronche, Anne-Marie; De Hert, Marc; van Winkel, Ruud; Decoster, Jeroen; Derom, Catherine; Thiery, Evert; Stefanis, Nikos C; Sachs, Gabriele; Aschauer, Harald; Lasser, Iris; Winklbaur, Bernadette; Schlögelhofer, Monika; Riecher-Rössler, Anita; Borgwardt, Stefan; Walter, Anna; Harrisberger, Fabienne; Smieskova, Renata; Rapp, Charlotte; Ittig, Sarah; Soguel-dit-Piquard, Fabienne; Studerus, Erich; Klosterkötter, Joachim; Ruhrmann, Stephan; Paruch, Julia; Julkowski, Dominika; Hilboll, Desiree; Sham, Pak C; Cherny, Stacey S; Chen, Eric Y H; Campbell, Desmond D; Li, Miaoxin; Romeo-Casabona, Carlos María; Emaldi Cirión, Aitziber; Urruela Mora, Asier; Jones, Peter; Kirkbride, James; Cannon, Mary; Rujescu, Dan; Tarricone, Ilaria; Berardi, Domenico; Bonora, Elena; Seri, Marco; Marcacci, Thomas; Chiri, Luigi; Chierzi, Federico; Storbini, Viviana; Braca, Mauro; Minenna, Maria Gabriella; Donegani, Ivonne; Fioritti, Angelo; La Barbera, Daniele; La Cascia, Caterina Erika; Mulè, Alice; Sideli, Lucia; Sartorio, Rachele; Ferraro, Laura; Tripoli, Giada; Seminerio, Fabio; Marinaro, Anna Maria; McGorry, Patrick; Nelson, Barnaby; Amminger, G Paul; Pantelis, Christos; Menezes, Paulo R; Del-Ben, Cristina M; Gallo Tenan, Silvia H; Shuhama, Rosana; Ruggeri, Mirella; Tosato, Sarah; Lasalvia, Antonio; Bonetto, Chiara; Ira, Elisa; Nordentoft, Merete; Krebs, Marie-Odile; Barrantes-Vidal, Neus; Cristóbal, Paula; Kwapil, Thomas R; Brietzke, Elisa; Bressan, Rodrigo A; Gadelha, Ary; Maric, Nadja P; Andric, Sanja; Mihaljevic, Marina; Mirjanic, Tijana

    2014-07-01

    Recent years have seen considerable progress in epidemiological and molecular genetic research into environmental and genetic factors in schizophrenia, but methodological uncertainties remain with regard to validating environmental exposures, and the population risk conferred by individual molecular genetic variants is small. There are now also a limited number of studies that have investigated molecular genetic candidate gene-environment interactions (G × E), however, so far, thorough replication of findings is rare and G × E research still faces several conceptual and methodological challenges. In this article, we aim to review these recent developments and illustrate how integrated, large-scale investigations may overcome contemporary challenges in G × E research, drawing on the example of a large, international, multi-center study into the identification and translational application of G × E in schizophrenia. While such investigations are now well underway, new challenges emerge for G × E research from late-breaking evidence that genetic variation and environmental exposures are, to a significant degree, shared across a range of psychiatric disorders, with potential overlap in phenotype. © The Author 2014. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  15. Formation of a Community of Practice in the Watershed Scale, with Integrated Local Environmental Knowledge

    Directory of Open Access Journals (Sweden)

    Kenji Kitamura

    2018-02-01

    Full Text Available Rural communities around the world face formidable problems such as resource depletion, environmental degradation and economic decline. While the term 'community' is often used without clear definition or context, it can be viewed as a group of people emerging through social interaction. Through a series of collaborative actions toward a shared goal, a community of practice can be formed. This paper proposes a hypothetical framework of integrated local environmental knowledge (ILEK), and applies it to analyze the processes of collaborative action in the case of the Nishibetsu Watershed in Hokkaido, Japan. The case study identified several phases of action, all initiated by a group of local residents on a grassroots and voluntary basis. These resident-initiated collaborative actions had a particular confluence of elements that facilitated the gradual strengthening of formal and informal institutions at the watershed scale, beyond jurisdictional boundaries, making this a worthy case to study. The local residents used diverse types of knowledge, including livelihood-based technologies and the skills of working as a group and with local governments, to establish and strengthen various institutions for collaborative action, with such knowledge being used in the manner of tools in a box of bricolage for community formation.

  16. GMATA: An Integrated Software Package for Genome-Scale SSR Mining, Marker Development and Viewing.

    Science.gov (United States)

    Wang, Xuewen; Wang, Le

    2016-01-01

    Simple sequence repeats (SSRs), also referred to as microsatellites, are highly variable tandem DNA repeats that are widely used as genetic markers. The increasing availability of whole-genome and transcript sequences provides information resources for SSR marker development. However, efficient software is required to identify and display SSR information along with other gene features at a genome scale. We developed a novel software package, the Genome-wide Microsatellite Analyzing Tool Package (GMATA), integrating SSR mining, statistical analysis and plotting, marker design, polymorphism screening and marker transferability assessment, and enabling the simultaneous display of SSR markers with other genome features. GMATA applies novel strategies for SSR analysis and primer design in large genomes, which allows it to perform faster calculations and provide more accurate results than existing tools. Our package is also capable of processing DNA sequences of any size on a standard computer. GMATA is user friendly, requiring only mouse clicks or typed inputs on the command line, and is executable on multiple computing platforms. We demonstrated the application of GMATA to plant genomes and reveal a novel distribution pattern of SSRs in 15 grass genomes. The most abundant motifs are the dimer GA/TC, the monomer A/T and the trimer GCG/CGC, rather than reflecting the rich G/C content of the DNA sequence. We also revealed that SSR count is linear in chromosome length in fully assembled grass genomes. GMATA represents a powerful application tool that facilitates genomic sequence analyses. GMATA is freely available at http://sourceforge.net/projects/gmata/?source=navbar.
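
    The core mining step, finding tandem repeats of short motifs, can be illustrated with a few lines of Python. This is a simplified stand-in for GMATA's SSR mining, with arbitrary thresholds; GMATA's actual algorithms and defaults differ.

        # Minimal SSR (microsatellite) scan in the spirit of GMATA's mining step
        # (illustrative only; thresholds and motif handling are simplified).
        import re

        def find_ssrs(seq, min_unit=1, max_unit=6, min_repeats=5):
            """Yield (start, motif, copies) for tandem repeats of short motifs."""
            seq = seq.upper()
            for unit in range(min_unit, max_unit + 1):
                # a motif of `unit` bases followed by >= (min_repeats - 1) copies
                pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_repeats - 1))
                for m in pattern.finditer(seq):
                    motif = m.group(1)
                    if len(set(motif)) == 1 and unit > 1:
                        continue    # skip e.g. 'AA' runs already found as 'A'
                    yield m.start(), motif, len(m.group(0)) // unit

        demo = "TTGAGAGAGAGAGACCCGGGAAAAAAATTTCGCGCGCGCGCGTA"
        for start, motif, copies in find_ssrs(demo):
            print(f"pos {start}: ({motif}){copies}")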

  17. Integrated modelling of anthropogenic land-use and land-cover change on the global scale

    Science.gov (United States)

    Schaldach, R.; Koch, J.; Alcamo, J.

    2009-04-01

    In many cases land-use activities go hand in hand with substantial modifications of the physical and biological cover of the Earth's surface, resulting in direct effects on energy and matter fluxes between terrestrial ecosystems and the atmosphere. For instance, the conversion of forest to cropland changes climate-relevant surface parameters (e.g. albedo) as well as evapotranspiration processes and carbon flows. In turn, human land-use decisions are also influenced by environmental processes. Changing temperature and precipitation patterns, for example, are important determinants for the location and intensity of agriculture. Due to these close linkages, processes of land-use and related land-cover change should be considered as important components in the construction of Earth System models. A major challenge in modelling land-use change on the global scale is the integration of socio-economic aspects and human decision making with environmental processes. One of the few global approaches that integrates functional components to represent both anthropogenic and environmental aspects of land-use change is the LandSHIFT model. It simulates the spatial and temporal dynamics of the human land-use activities settlement, cultivation of food crops and grazing management, which compete for the available land resources. The rationale of the model is to regionalize the demands for area-intensive commodities (e.g. crop production) and services (e.g. space for housing) from the country level to a global grid with a spatial resolution of 5 arc-minutes. The modelled land-use decisions within the agricultural sector are influenced by changing climate and the resulting effects on biomass productivity. Currently, this causal chain is modelled by integrating results from the process-based vegetation model LPJmL for changing crop yields and net primary productivity of grazing land. Model output of LandSHIFT is a time series of grid maps with land-use/land-cover information.

  18. A novel configurable VLSI architecture design of window-based image processing method

    Science.gov (United States)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image-processing architectures can only realize a specific kind of algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause a loss of accuracy or consume more logic resources. To address these problems, this paper proposes a new VLSI architecture for window-based image-processing operations, which is configurable and takes the image boundary into consideration. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no new delay and reduces overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. By performing different scalar-function and reduction-function operations in a pipeline, the architecture can support a variety of window-based image-processing applications. Compared with other reported structures, the new structure performs similarly to some and is superior to others. In particular, compared with the systolic array processor CWP at the same frequency, this structure achieves a speed increase of approximately 12.9%. The proposed parallel VLSI architecture was implemented in SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively. Furthermore, the processing time is independent of the different window-based algorithms mapped to the structure.
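
    The configurability described above, one datapath parameterized by a per-window scalar function and a reduction function, can be sketched behaviourally as follows. Edge replication is used here as one plausible boundary policy; the paper's overlap/flush scheme is a hardware-level mechanism not modelled in this sketch.

        # Behavioural sketch of a configurable window operator: a sliding window
        # combined from a per-pixel scalar function and a reduction function, so
        # the same datapath can express e.g. convolution or max filtering.
        import numpy as np

        def window_op(img, k, scalar_fn, reduce_fn):
            pad = k // 2
            padded = np.pad(img, pad, mode="edge")   # replicate borders
            out = np.empty_like(img, dtype=float)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    win = padded[i:i + k, j:j + k]
                    out[i, j] = reduce_fn(scalar_fn(win))
            return out

        img = np.arange(25, dtype=float).reshape(5, 5)
        box_blur = window_op(img, 3, lambda w: w / 9.0, np.sum)   # 3x3 mean filter
        dilation = window_op(img, 3, lambda w: w, np.max)         # 3x3 max filter
        print(box_blur[2, 2], dilation[2, 2])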

  19. Techno-economic feasibility study of the integration of a commercial small-scale ORC in a real case study

    International Nuclear Information System (INIS)

    Cavazzini, G.; Dal Toso, P.

    2015-01-01

    Highlights: • The integration of a small-scale commercial ORC in a real case study was analyzed. • The possibility of recovering the waste heat produced by an ICE was considered. • A semi-empirical steady-state model of the commercial small-scale ORC was created. • Both direct and indirect costs were considered in the business model. • The ORC integration was not economically feasible due to increased indirect costs. - Abstract: The ORC certainly represents a promising solution for recovering low-grade waste heat in industry. However, the efficiency of commercial small-scale ORC solutions is still too low in comparison with the high initial costs of the machine, and the lack of simulation models specifically developed for commercial ORC systems prevents industries from defining an accurate business model to correctly evaluate ORC integration in real industrial processes. This paper presents a techno-economic feasibility analysis of the integration of a small-scale commercial ORC in a real case study, represented by a highly efficient industrial distillery. The integration is aimed at maximizing in-house electricity production by exploiting the heat produced by an internal combustion engine, already partially recovered in internal thermal processes. To analyze the influence of the ORC integration on the industrial processes, a semi-empirical steady-state model of the commercial small-scale ORC was created. The model made it possible to simulate the performance of the commercial ORC within a hypothetical scenario involving the use of the heat from the cooling water and from the exhaust gases of the internal combustion engine. A detailed thermodynamic analysis has been carried out to study the effects of the ORC integration on the plant's energy system, with particular focus on the two processes directly affected by ORC integration, namely vapor production and the drying process of the grape marc. The analysis highlighted the great importance in the
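
    A first-order version of the energy balance behind such a feasibility study is sketched below. Every number (stream heat rates, ORC efficiency, operating hours) is an invented placeholder, not the distillery's data; the point is only the structure of the estimate.

        # Rough first-pass estimate of ORC electrical output from two engine
        # heat streams; all values are assumed placeholders.
        exhaust_heat_kw = 400.0     # recoverable heat in exhaust gases (assumed)
        jacket_heat_kw = 300.0      # recoverable heat in cooling water (assumed)
        orc_efficiency = 0.10       # typical small-scale ORC efficiency (assumed)

        heat_input_kw = exhaust_heat_kw + jacket_heat_kw
        electric_output_kw = heat_input_kw * orc_efficiency

        hours_per_year = 8000.0     # assumed plant operating hours
        annual_mwh = electric_output_kw * hours_per_year / 1000.0
        print(f"ORC output ~ {electric_output_kw:.0f} kWe, ~ {annual_mwh:.0f} MWh/yr")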

  20. Effect of Integrating Hydrologic Scaling Concepts on Students Learning and Decision Making Experiences

    Science.gov (United States)

    Najm, Majdi R. Abou; Mohtar, Rabi H.; Cherkauer, Keith A.; French, Brian F.

    2010-01-01

    Proper understanding of scaling and large-scale hydrologic processes is often not explicitly incorporated into the teaching curriculum. This makes it difficult for students to connect the effect of small-scale processes and properties (like soil texture and structure, aggregation, shrinkage, and cracking) on large-scale hydrologic responses (like…

  1. Small-scale integrated demonstration of high-level radioactive waste processing and vitrification using actual SRP waste

    International Nuclear Information System (INIS)

    Woolsey, G.B.; Baumgarten, P.K.; Eibling, R.E.; Ferguson, R.B.

    1981-01-01

    A small-scale pilot plant for chemical processing and vitrification of actual high-level waste has been constructed at the Savannah River Laboratory (SRL). This fully integrated facility has been constructed in six shielded cells and has eight major unit operations. Equipment performance and processing characteristics of the unit operations are reported

  2. Integrating chemical fate and population-level effect models of pesticides: the importance of capturing the right scales

    NARCIS (Netherlands)

    Focks, A.; Horst, ter M.M.S.; Berg, van den E.; Baveco, H.; Brink, van den P.J.

    2014-01-01

    Any attempt to introduce more ecological realism into ecological risk assessment of chemicals faces the major challenge of integrating different aspects of the chemicals and species of concern, for example, spatial scales of emissions, chemical exposure patterns in space and time, and population

  3. A data-model integration approach toward improved understanding on wetland functions and hydrological benefits at the catchment scale

    Science.gov (United States)

    Yeo, I. Y.; Lang, M.; Lee, S.; Huang, C.; Jin, H.; McCarty, G.; Sadeghi, A.

    2017-12-01

    The wetland ecosystem plays crucial roles in improving hydrological function and ecological integrity for downstream waters and the surrounding landscape. However, the changing behaviours and functioning of wetland ecosystems are poorly understood and extremely difficult to characterize. Improved understanding of the hydrological behaviours of wetlands, considering their interaction with surrounding landscapes and impacts on downstream waters, is an essential first step toward closing the knowledge gap. We present an integrated wetland-catchment modelling study that capitalizes on recently developed inundation maps and other geospatial data. The aim of the data-model integration is to improve spatial prediction of wetland inundation and evaluate cumulative hydrological benefits at the catchment scale. In this paper, we highlight problems arising from data preparation, parameterization, and process representation in simulating wetlands within a distributed catchment model, and report the recent progress on mapping of wetland dynamics (i.e., inundation) using multiple remotely sensed data. We demonstrate the value of spatially explicit inundation information to develop site-specific wetland parameters and to evaluate model prediction at multiple spatial and temporal scales. This spatial data-model integrated framework is tested using the Soil and Water Assessment Tool (SWAT) with an improved wetland extension, and applied to an agricultural watershed in the Mid-Atlantic Coastal Plain, USA. This study illustrates the necessity of spatially distributed information and a data-integrated modelling approach to predict inundation of wetlands and hydrologic function at the local landscape scale, where monitoring and conservation decision making take place.

  4. Stochastic simulation of power systems with integrated renewable and utility-scale storage resources

    Science.gov (United States)

    Degeilh, Yannick

    The push for a more sustainable electric supply has led various countries to adopt policies advocating the integration of renewable yet variable energy resources, such as wind and solar, into the grid. The challenges of integrating such time-varying, intermittent resources have in turn sparked a growing interest in the implementation of utility-scale energy storage resources (ESRs) with MW-week storage capability. Indeed, storage devices provide flexibility to facilitate the management of power system operations in the presence of uncertain, highly time-varying and intermittent renewable resources. The ability to exploit the potential synergies between renewable and energy storage resources hinges on developing appropriate models, methodologies, tools and policy initiatives. We report on the development of a comprehensive simulation methodology that provides the capability to quantify the impacts of integrated renewable and energy storage resources on the economics, reliability and emission effects of power systems operating in a market environment. We model the uncertainty in the demands, the available capacity of conventional generation resources and the time-varying, intermittent renewable resources, with their temporal and spatial correlations, as discrete-time random processes. We deploy models of the ESRs to emulate their scheduling and operations in the transmission-constrained hourly day-ahead markets. To this end, we formulate a scheduling optimization problem (SOP) whose solutions determine the operational schedule of the controllable ESRs in coordination with the demands and the conventional/renewable resources. As such, the SOP serves the dual purpose of emulating the clearing of the transmission-constrained day-ahead markets (DAMs) and scheduling the energy storage resource operations. We also represent the need for system operators to impose stricter ramping requirements on the conventional generating units so as to maintain the system's capability to perform "load following", i
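
    A toy version of the scheduling optimization problem (SOP) can be written as a small linear program: dispatch a thermal unit and a storage device against a net-load profile at minimum cost. The sketch below uses scipy.optimize.linprog with invented numbers and omits the network constraints and ramping requirements of the actual SOP.

        # Toy day-ahead scheduling LP in the spirit of the SOP described above.
        import numpy as np
        from scipy.optimize import linprog

        T = 4
        net_load = np.array([80.0, 120.0, 60.0, 100.0])   # MW after renewables
        gen_cost = 40.0                                   # $/MWh thermal
        gen_cap, chg_cap, e_cap = 150.0, 50.0, 100.0      # MW, MW, MWh

        # Variables per hour: [g_t (gen), d_t (discharge), c_t (charge)]
        cost = np.tile([gen_cost, 0.0, 0.0], T)

        # Power balance each hour: g + d - c = net_load
        A_eq = np.zeros((T, 3 * T))
        for t in range(T):
            A_eq[t, 3 * t:3 * t + 3] = [1.0, 1.0, -1.0]
        b_eq = net_load

        # Storage energy limits: 0 <= E0 + sum(c - d) <= e_cap (start half full)
        E0 = e_cap / 2.0
        A_ub, b_ub = [], []
        for t in range(T):
            row = np.zeros(3 * T)
            for s in range(t + 1):
                row[3 * s + 1], row[3 * s + 2] = 1.0, -1.0   # +discharge, -charge
            A_ub.append(row)
            b_ub.append(E0)                  # cumulative discharge <= E0
            A_ub.append(-row)
            b_ub.append(e_cap - E0)          # cumulative charge <= headroom

        bounds = [(0, gen_cap), (0, chg_cap), (0, chg_cap)] * T
        res = linprog(cost, A_ub=np.array(A_ub), b_ub=b_ub,
                      A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        print(res.x.reshape(T, 3))   # per-hour [gen, discharge, charge]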

  5. Large scale continuous integration and delivery : Making great software better and faster

    NARCIS (Netherlands)

    Stahl, Daniel

    2017-01-01

    Since the inception of continuous integration, and later continuous delivery, the methods of producing software in the industry have changed dramatically over the last two decades. Automated, rapid and frequent compilation, integration, testing, analysis, packaging and delivery of new software

  6. ‘Slow’ Revitalization on Regional Scale, the Example of an Integrated Investment Project

    Science.gov (United States)

    Mazur-Belzyt, Katarzyna

    2017-10-01

    The study arose from a question about the future of towns, as well as the possibility of their development. The paper is an attempt to look at the direction in which many towns around the world aim: connecting to networks, and especially the Cittaslow network. The author asked a few questions: does the Cittaslow network actually help towns to use their inner potential, build their brand and improve the quality of residents' lives? The starting point for the case study method adopted in the paper is a discussion of examples of urban networks as a background for a wider characterisation of Cittaslow. For this purpose, literature and in situ research on the Cittaslow towns was conducted, documents related to the Polish Cittaslow were queried, original photographic documentation was collected, and a series of talks was carried out in different offices and municipalities. The database constructed in this way allowed the analysis and conclusions. An important part of the research was the synthesis of information on the integrated project undertaken in 14 Polish slow cities. "The Cross-Local Programme of Revitalization of the Cittaslow Town Network in the Warmian-Masurian Voivodeship" is a unique action on the scale of the entire international Cittaslow network. Each of the participating towns tried to exploit, through revitalization, its own unique potential for real growth and to improve the quality of life of its residents. Through joint action, even the smallest town could more easily obtain significant funding. The involvement of regional government and understanding of the idea were also crucial. The Cittaslow network, although not perfect, may in the long term strengthen linkages and the exchange of experience between the slow towns without leading to their unification. Furthermore, as shown by the example of the Polish "Cross-Local Programme of Revitalization of the Cittaslow Town Network in the Warmian-Masurian Voivodeship", belonging to the Cittaslow network

  7. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; and the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility to formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real
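
    The response-surface idea behind DYCORS and SOIM can be sketched compactly: fit a cheap surrogate to the expensive-model evaluations collected so far, screen many perturbations of the current best point on the surrogate, and spend the expensive run only on the most promising candidate. The sketch below is a generic illustration under invented assumptions; `expensive_model` stands in for a full IHM run such as GSFLOW.

```python
# Generic response-surface surrogate loop (DYCORS-flavoured sketch, not the paper's code).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
DIM, N_INIT, BUDGET, N_CAND = 4, 10, 40, 200       # assumed problem sizes

def expensive_model(x):                            # stand-in for one IHM simulation
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

X = rng.uniform(0, 1, (N_INIT, DIM))               # initial space-filling design
y = np.array([expensive_model(x) for x in X])

for _ in range(BUDGET - N_INIT):
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
    best = X[np.argmin(y)]
    # DYCORS-style candidates: perturb a random subset of coordinates of the best point
    cand = np.repeat(best[None, :], N_CAND, axis=0)
    mask = rng.random((N_CAND, DIM)) < 0.5
    mask[np.arange(N_CAND), rng.integers(0, DIM, N_CAND)] = True  # perturb >= 1 coord
    cand[mask] += rng.normal(0, 0.1, mask.sum())
    cand = np.clip(cand, 0, 1)
    x_new = cand[np.argmin(surrogate(cand))]       # cheap screening on the surrogate
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new))       # one expensive evaluation per step

print("best objective:", y.min(), "at", X[np.argmin(y)].round(3))
```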

  8. VLSI Research

    Science.gov (United States)

    1984-04-01

    [Extraction residue from the original report: a table of RISC instruction encodings covering the interpretation of IMMEDIATE fields (sign-extended imm19 and imm13, except for ldhi), the destination register of an LDHI instruction, 32-bit register data, and opcode assignments for calli, sll, getpsw, sra, getlpc, srl, putpsw, ldhi, and, or, xor, ldxw and stxw.]

  9. VLSI Implementation of a Fixed-Complexity Soft-Output MIMO Detector for High-Speed Wireless

    Directory of Open Access Journals (Sweden)

    Di Wu

    2010-01-01

    Full Text Available This paper presents a low-complexity MIMO symbol detector with close-to-maximum a posteriori (MAP) performance for the emerging multiantenna enhanced high-speed wireless communications. The VLSI implementation is based on a novel MIMO detection algorithm called Modified Fixed-Complexity Soft-Output (MFCSO) detection, which achieves a good trade-off between performance and implementation cost compared to the referenced prior art. By including a microcode-controlled channel preprocessing unit and a pipelined detection unit, it is flexible enough to cover several different standards and transmission schemes. The flexibility allows adaptive detection to minimize power consumption without degradation in throughput. The VLSI implementation of the detector is presented to show that real-time MIMO symbol detection of 20 MHz bandwidth 3GPP LTE and 10 MHz WiMAX downlink physical channels is achievable at reasonable silicon cost.

  10. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and an evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming at simultaneously considering the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.
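
    The two objectives of the interval model, the average (midpoint) and deviation (radius) of the LCC over the uncertain inputs, can be illustrated with a toy sizing problem. The sketch below uses invented costs and capacity-factor intervals, and a brute-force non-dominated filter stands in for MGSOACC and the ER step.

```python
# Interval objectives for unit sizing: toy illustration with assumed numbers throughout.
import itertools
import numpy as np

CAP_COST = {"wind": 1200.0, "pv": 900.0}             # $/kW, assumed
FUEL_COST = 0.12                                     # $/kWh residual demand, assumed
DEMAND = 8760 * 500.0                                # kWh/yr for a small district, assumed
CF_INT = {"wind": (0.18, 0.32), "pv": (0.12, 0.20)}  # uncertain capacity-factor intervals

def lcc(wind_kw, pv_kw, cf_w, cf_p):
    energy = 8760 * (wind_kw * cf_w + pv_kw * cf_p)
    residual = max(DEMAND - energy, 0.0)
    # capital cost plus 20-year residual supply cost (assumed horizon)
    return CAP_COST["wind"] * wind_kw + CAP_COST["pv"] * pv_kw + 20 * FUEL_COST * residual

designs = []
for w_kw, p_kw in itertools.product(range(0, 401, 50), repeat=2):
    corners = [lcc(w_kw, p_kw, cw, cp)               # LCC is monotone in the CFs,
               for cw in CF_INT["wind"]              # so interval corners suffice
               for cp in CF_INT["pv"]]
    mid = (max(corners) + min(corners)) / 2          # "average" objective
    rad = (max(corners) - min(corners)) / 2          # "deviation" (risk) objective
    designs.append((mid, rad, w_kw, p_kw))

pareto = [d for d in designs
          if not any(o[0] <= d[0] and o[1] <= d[1] and o != d for o in designs)]
for mid, rad, w_kw, p_kw in sorted(pareto):
    print(f"wind {w_kw:3d} kW, pv {p_kw:3d} kW: LCC {mid/1e6:.2f} ± {rad/1e6:.2f} M$")
```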

  11. A new VLSI complex integer multiplier which uses a quadratic-polynomial residue system with Fermat numbers

    Science.gov (United States)

    Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.

    1987-01-01

    A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.
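
    The arithmetic trick behind such a multiplier (sketched here without any chip-level detail) is that modulo the Fermat number F4 = 2^16 + 1, the element j = 2^8 satisfies j^2 ≡ -1, so mapping (a, b) to (a + jb, a - jb) diagonalizes complex multiplication into exactly two modular multiplications:

```python
# QFNS-style complex multiplication with two modular multiplications (arithmetic sketch).
F = 2**16 + 1               # Fermat number F4 (prime)
J = 2**8                    # square root of -1 mod F, since J*J = 2**16 ≡ -1 (mod F)
INV2 = (F + 1) // 2         # multiplicative inverse of 2 mod F
INV2J = INV2 * (F - J) % F  # inverse of 2J, using J**-1 ≡ -J (mod F)

def cmul_qfns(a, b, c, d):
    """(a + bi)(c + di) using only two multiplications mod F."""
    xp, xm = (a + J * b) % F, (a - J * b) % F   # forward map (products with J are
    yp, ym = (c + J * d) % F, (c - J * d) % F   # shifts in hardware, not multipliers)
    pp, pm = xp * yp % F, xm * ym % F           # the only two "real" multiplications
    re = (pp + pm) * INV2 % F                   # inverse map
    im = (pp - pm) * INV2J % F
    return re, im

# Check against ordinary complex arithmetic; results are residues mod F, so the
# operands must be small enough that the true results lie in (-F/2, F/2).
a, b, c, d = 123, -45, 67, 89
re, im = cmul_qfns(a, b, c, d)
signed = lambda x: x - F if x > F // 2 else x
assert (signed(re), signed(im)) == (a * c - b * d, a * d + b * c)
print(signed(re), signed(im))
```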

  12. TradeWind. Integrating wind. Developing Europe's power market for the large-scale integration of wind power. Final report

    Energy Technology Data Exchange (ETDEWEB)

    2009-02-15

    Based on a single European grid and power market system, the TradeWind project explores to what extent large-scale wind power integration challenges could be addressed by reinforcing interconnections between Member States in Europe. Additionally, the project looks at the conditions required for a sound power market design that ensures a cost-effective integration of wind power at EU level. In this way, the study addresses two issues of key importance for the future integration of renewable energy, namely the weak interconnectivity levels between control zones and the inflexibility and fragmented nature of the European power market. Work on critical transmission paths and interconnectors is slow for a variety of reasons including planning and administrative barriers, lack of public acceptance, insufficient economic incentives for TSOs, and the lack of a joint European approach by the key stakeholders. (au)

  13. Development and Preliminary Validation of the Scale for Evaluation of Psychiatric Integrative and Continuous Care—Patient’s Version

    Directory of Open Access Journals (Sweden)

    Yuriy Ignatyev

    2017-08-01

    Full Text Available This pilot study aimed to develop and examine an instrument that integrates relevant aspects of cross-sectoral (in- and outpatient) mental health care, is simple to use and shows satisfactory psychometric properties. The development of the scale comprised literature research, 14 focus groups and 12 interviews with patients and health care providers, item-pool generation, content validation by a scientific expert panel, and face validation by 90 patients. The preliminary scale was tested on 385 patients across seven German hospitals with cross-sectoral mental health care (CSMHC) as part of their treatment program. Psychometric properties of the scale were evaluated using genuine and transformed data scoring. To check the reliability and postdictive validity of the scale, Cronbach's α coefficient and multivariable linear regression were used. This process led to the development of an 18-item scale called the "Scale for Evaluation of Psychiatric Integrative and Continuous Care" (SEPICC) with two-point and five-point response options. The scale consists of two sections. The first section assesses the presence or absence of patients' experiences with various relevant components of CSMHC, such as home treatment, flexibility of treatments' switching, case management, continuity of care, cross-sectoral therapeutic groups, and multidisciplinary teams. The second section evaluates the patients' opinions about these relevant components. Using raw and transformed scoring produced comparable results. However, the data distribution using transformed scoring showed a smaller deviation from normality. For the overall scale, Cronbach's α coefficient was 0.82. Self-reported experiences with relevant components of the CSMHC were positively associated with the patients' approval of these components. In conclusion, the new scale provides a good starting point for further validation. It can be used as a tool to evaluate CSMHC

  14. Data assimilation in optimizing and integrating soil and water quality water model predictions at different scales

    Science.gov (United States)

    Relevant data about subsurface water flow and solute transport at the relatively large scales that are of interest to the public are inherently laborious and in most cases simply impossible to obtain. Upscaling, in which fine-scale models and data are used to predict changes at the coarser scales, is the...

  15. Implantable neurotechnologies: bidirectional neural interfaces--applications and VLSI circuit implementations.

    Science.gov (United States)

    Greenwald, Elliot; Masters, Matthew R; Thakor, Nitish V

    2016-01-01

    A bidirectional neural interface is a device that transfers information into and out of the nervous system. This class of devices has potential to improve treatment and therapy in several patient populations. Progress in very large-scale integration has advanced the design of complex integrated circuits. System-on-chip devices are capable of recording neural electrical activity and altering natural activity with electrical stimulation. Often, these devices include wireless powering and telemetry functions. This review presents the state of the art of bidirectional circuits as applied to neuroprosthetic, neurorepair, and neurotherapeutic systems.

  16. Aespoe Hard Rock Laboratory. Analysis of fracture networks based on the integration of structural and hydrogeological observations on different scales

    Energy Technology Data Exchange (ETDEWEB)

    Bossart, P. [Geotechnical Inst. Ltd., Bern (Switzerland); Hermanson, Jan [Golder Associates, Stockholm (Sweden); Mazurek, M. [Univ. of Bern (Switzerland)

    2001-05-01

    Fracture networks at Aespoe have been studied for several rock types exhibiting different degrees of ductile and brittle deformation, as well as on different scales. Mesoscopic fault systems have been characterised and classified in an earlier report; this report focuses mainly on fracture networks derived on smaller scales, but also includes mesoscopic and larger scales. The TRUE-1 block has been selected for detailed structural analysis on a small scale due to the high density of relevant information. In addition to the data obtained from core materials, structural maps, BIP data and the results of hydro tests were synthesised to derive a conceptual structural model. The approach used to derive this conceptual model is based on the integration of deterministic structural evidence, probabilistic information and both upscaling and downscaling of observations and concepts derived on different scales. Twelve fracture networks mapped at different sites and scales and exhibiting various styles of tectonic deformation were analysed for fractal properties and structural and hydraulic interconnectedness. It was shown that the analysed fracture networks are not self-similar. An important result is the structural and hydraulic interconnectedness of fracture networks on all scales in the Aespoe rocks, which is further corroborated by geochemical evidence. Due to the structural and hydraulic interconnectedness of fracture systems on all scales at Aespoe, contaminants from waste canisters placed in tectonically low-deformation environments would be transported - after having passed through the engineered barriers - from low-permeability fractures towards higher-permeability fractures and may thus eventually reach high-permeability features.

  17. Grounding the nexus: Examining the integration of small-scale irrigators into a national food security programme in Burkina Faso

    Directory of Open Access Journals (Sweden)

    Brian Dowd-Uribe

    2018-06-01

    Full Text Available The water-food nexus literature examines the synergies and trade-offs of resource use but is dominated by large-scale analyses that do not sufficiently engage the local dimensions of resource management. The research presented here addresses this gap with a local-scale analysis of integrated water and food management in Burkina Faso. Specifically, we analyse the implementation of a national food security campaign (Opération Bondofa) to boost maize production in a subbasin that exhibits two important trends in Africa: a large increase in small-scale irrigators and the decentralisation of water management. As surface water levels dropped in the region, entities at different scales asserted increased control over water allocation, exposing the contested nature of new decentralised institutions and powerful actors' preference for local control. These scalar power struggles intersected with a lack of knowledge of small-scale irrigators' cultural practices to produce an implementation and water allocation schedule that did not match small-scale irrigators' needs, resulting in low initial enthusiasm for the project. Increased attention from national governments to strengthening decentralised water management committees and spurring greater knowledge of, and engagement with, small-scale irrigators can result in improved programme design to better incorporate small-scale irrigators into national food security campaigns.

  18. Aespoe Hard Rock Laboratory. Analysis of fracture networks based on the integration of structural and hydrogeological observations on different scales

    International Nuclear Information System (INIS)

    Bossart, P.; Hermanson, Jan; Mazurek, M.

    2001-05-01

    Fracture networks at Aespoe have been studied for several rock types exhibiting different degrees of ductile and brittle deformation, as well as on different scales. Mesoscopic fault systems have been characterised and classified in an earlier report; this report focuses mainly on fracture networks derived on smaller scales, but also includes mesoscopic and larger scales. The TRUE-1 block has been selected for detailed structural analysis on a small scale due to the high density of relevant information. In addition to the data obtained from core materials, structural maps, BIP data and the results of hydro tests were synthesised to derive a conceptual structural model. The approach used to derive this conceptual model is based on the integration of deterministic structural evidence, probabilistic information and both upscaling and downscaling of observations and concepts derived on different scales. Twelve fracture networks mapped at different sites and scales and exhibiting various styles of tectonic deformation were analysed for fractal properties and structural and hydraulic interconnectedness. It was shown that the analysed fracture networks are not self-similar. An important result is the structural and hydraulic interconnectedness of fracture networks on all scales in the Aespoe rocks, which is further corroborated by geochemical evidence. Due to the structural and hydraulic interconnectedness of fracture systems on all scales at Aespoe, contaminants from waste canisters placed in tectonically low-deformation environments would be transported - after having passed through the engineered barriers - from low-permeability fractures towards higher-permeability fractures and may thus eventually reach high-permeability features

  19. Toward Open Science at the European Scale: Geospatial Semantic Array Programming for Integrated Environmental Modelling

    Science.gov (United States)

    de Rigo, Daniele; Corti, Paolo; Caudullo, Giovanni; McInerney, Daniel; Di Leo, Margherita; San-Miguel-Ayanz, Jesús

    2013-04-01

    Interfacing science and policy raises challenging issues when large spatial-scale (regional, continental, global) environmental problems need transdisciplinary integration within a context of modelling complexity and multiple sources of uncertainty [1]. This is characteristic of science-based support for environmental policy at the European scale [1], and key aspects have also long been investigated by European Commission transnational research [2-5]. Approaches (either of computational science or of policy-making) suitable at a given domain-specific scale may not be appropriate for wide-scale transdisciplinary modelling for environment (WSTMe) and corresponding policy-making [6-10]. In WSTMe, the characteristic heterogeneity of available spatial information and the complexity of the required data-transformation modelling (D-TM) call for a paradigm shift in how computational science supports such peculiarly extensive integration processes. In particular, emerging wide-scale integration requirements of typical currently available domain-specific modelling strategies may include increased robustness and scalability along with enhanced transparency and reproducibility [11-15]. This challenging shift toward open data [16] and reproducible research [11] (open science) is also strongly suggested by the potential - sometimes neglected - huge impact of cascading effects of errors [1,14,17-19] within the impressively growing interconnection among domain-specific computational models and frameworks. From a computational science perspective, transdisciplinary approaches to integrated natural resources modelling and management (INRMM) [20] can exploit advanced geospatial modelling techniques with an awesome battery of free scientific software [21,22] for generating new information and knowledge from the plethora of composite data [23-26]. From the perspective

  20. Heterogeneous integration of lithium niobate and silicon nitride waveguides for wafer-scale photonic integrated circuits on silicon.

    Science.gov (United States)

    Chang, Lin; Pfeiffer, Martin H P; Volet, Nicolas; Zervas, Michael; Peters, Jon D; Manganelli, Costanza L; Stanton, Eric J; Li, Yifei; Kippenberg, Tobias J; Bowers, John E

    2017-02-15

    An ideal photonic integrated circuit for nonlinear photonic applications requires high optical nonlinearities and low loss. This work demonstrates a heterogeneous platform by bonding lithium niobate (LN) thin films onto a silicon nitride (Si3N4) waveguide layer on silicon. It not only provides large second- and third-order nonlinear coefficients, but also shows low propagation loss in both the Si3N4 and the LN-Si3N4 waveguides. The tapers enable low-loss-mode transitions between these two waveguides. This platform is essential for various on-chip applications, e.g., modulators, frequency conversions, and quantum communications.

  1. Control Synthesis for the Flow-Based Microfluidic Large-Scale Integration Biochips

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan; Pop, Paul; Madsen, Jan

    2013-01-01

    In this paper we are interested in flow-based microfluidic biochips, which are able to integrate the necessary functions for biochemical analysis on-chip. In these chips, the flow of liquid is manipulated using integrated microvalves. By combining several microvalves, more complex units, such as mi...

  2. Electron beam effects on VLSI MOS conditions for testing and reconfiguration

    International Nuclear Information System (INIS)

    Girard, P.; Roche, F.M.; Pistoulet, B.

    1986-01-01

    Problems related to the testing and reconfiguration of wafer-scale integrated MOS circuits by electron beams are analyzed. First, the alterations in the characteristics of MOS circuits submitted to electron beam testing are considered. Then the capabilities of reconfiguration by electron beam bombardment are discussed. The various phenomena involved are reviewed. Experimental data are reported and discussed in the light of data from the literature. (Auth.)

  3. Sensitivity of the two-dimensional shearless mixing layer to the initial turbulent kinetic energy and integral length scale

    Science.gov (United States)

    Fathali, M.; Deshiri, M. Khoshnami

    2016-04-01

    The shearless mixing layer is generated from the interaction of two homogeneous isotropic turbulence (HIT) fields with different integral scales ℓ1 and ℓ2 and different turbulent kinetic energies E1 and E2. In this study, the sensitivity of the temporal evolution of two-dimensional, incompressible shearless mixing layers to parametric variations of ℓ1/ℓ2 and E1/E2 is investigated. The sensitivity methodology is nonintrusive, using direct numerical simulation and generalized polynomial chaos expansion. The analysis is carried out at Reℓ1 = 90 for the high-energy HIT region and different integral length scale ratios 1/4 ≤ ℓ1/ℓ2 ≤ 4 and turbulent kinetic energy ratios 1 ≤ E1/E2 ≤ 30. It is found that the most influential parameter on the variability of the mixing layer evolution is the turbulent kinetic energy, while variations of the integral length scale show a negligible influence on the flow field variability. A significant level of anisotropy and intermittency is observed in both large and small scales. In particular, it is found that large scales have higher levels of intermittency and sensitivity to the variations of ℓ1/ℓ2 and E1/E2 compared to the small scales. Reconstructed response surfaces of the flow field intermittency and the turbulent penetration depth show monotonic dependence on ℓ1/ℓ2 and E1/E2. The mixing layer growth rate and the mixing efficiency both show sensitive dependence on the initial condition parameters. However, the probability density function of these quantities shows relatively small solution variations in response to the variations of the initial condition parameters.
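
    A non-intrusive generalized polynomial chaos analysis of the kind described can be illustrated in one dimension: expand an output quantity in Legendre polynomials of a uniformly distributed input (a stand-in for, say, a normalized E1/E2 ratio) and read the mean and variance off the coefficients. The toy `model` below replaces the DNS.

```python
# Minimal non-intrusive gPC sketch in 1-D (toy stand-in for the DNS study).
import numpy as np
from numpy.polynomial import legendre

def model(xi):                       # stand-in for "run the DNS at this parameter"
    return np.exp(0.8 * xi) + 0.3 * xi**2

ORDER = 8
nodes, weights = legendre.leggauss(ORDER + 1)        # Gauss-Legendre quadrature
coeffs = []
for k in range(ORDER + 1):
    Pk = legendre.Legendre.basis(k)
    # c_k = E[q P_k] / E[P_k^2] with uniform density 1/2 on [-1, 1]
    num = 0.5 * np.sum(weights * model(nodes) * Pk(nodes))
    coeffs.append(num * (2 * k + 1))                 # since E[P_k^2] = 1/(2k+1)

coeffs = np.array(coeffs)
mean = coeffs[0]                                     # P_0 = 1, so c_0 is the mean
var = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, ORDER + 1) + 1))
print(f"gPC mean {mean:.4f}, variance {var:.4f}")
```

    Response surfaces like those in the abstract are then simply the truncated expansion evaluated over the parameter range.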

  4. Integral emission factors for methane determined using urban flux measurements and local-scale inverse models

    Science.gov (United States)

    Christen, Andreas; Johnson, Mark; Molodovskaya, Marina; Ketler, Rick; Nesic, Zoran; Crawford, Ben; Giometto, Marco; van der Laan, Mike

    2013-04-01

    The most important long-lived greenhouse gas (LLGHG) emitted during combustion of fuels is carbon dioxide (CO2); however, traces of the LLGHGs methane (CH4) and nitrous oxide (N2O) are also released, the quantities of which depend largely on the conditions of the combustion process. Emission factors determine the mass of LLGHGs emitted per unit of energy used (or kilometre driven for cars) and are key inputs for bottom-up emission modelling. Emission factors for CH4 are typically determined in the laboratory or on a test stand for a given combustion system using a small number of samples (vehicles, furnaces), yet they are associated with larger uncertainties when scaled to entire fleets. We propose an alternative approach - can integrated emission factors be independently determined using direct micrometeorological flux measurements over an urban surface? If so, do emission factors determined from flux measurements (top-down) agree with up-scaled emission factors of relevant combustion systems (heating, vehicles) in the source area of the flux measurement? Direct flux measurements of CH4 were carried out between February and May 2012 over a relatively densely populated urban surface in Vancouver, Canada by means of eddy covariance (EC). The EC system consisted of an ultrasonic anemometer (CSAT-3, Campbell Scientific Inc.) and two open-path infrared gas analyzers (Li7500 and Li7700, Licor Inc.) on a tower 30 m above the surface. The source area of the EC system is characterised by a relatively homogeneous morphometry (5.3 m average building height), but spatially and temporally varying emission sources, including two major intersecting arterial roads (70,000 cars drive through the 50% source area per day) and seasonal heating in predominantly single-family houses (natural gas). An inverse dispersion model (turbulent source area model), validated against large eddy simulations (LES) of the urban roughness sublayer, allows the determination of the spatial area that

  5. Design, development and integration of a large scale multiple source X-ray computed tomography system

    International Nuclear Information System (INIS)

    Malcolm, Andrew A.; Liu, Tong; Ng, Ivan Kee Beng; Teng, Wei Yuen; Yap, Tsi Tung; Wan, Siew Ping; Kong, Chun Jeng

    2013-01-01

    X-ray Computed Tomography (CT) allows visualisation of the physical structures in the interior of an object without physically opening or cutting it. This technology supports a wide range of applications in the non-destructive testing, failure analysis or performance evaluation of industrial products and components. Of the numerous factors that influence the performance characteristics of an X-ray CT system, the energy level in the X-ray spectrum to be used is one of the most significant. The ability of the X-ray beam to penetrate a given thickness of a specific material is directly related to the maximum available energy level in the beam. Higher energy levels allow penetration of thicker components made of denser materials. In response to local industry demand, and in support of ongoing research activity in the area of 3D X-ray imaging for industrial inspection, the Singapore Institute of Manufacturing Technology (SIMTech) engaged in the design, development and integration of a large-scale multiple-source X-ray computed tomography system based on X-ray sources operating at higher energies than previously available in the Institute. The system consists of a large-area direct digital X-ray detector (410 x 410 mm), a multiple-axis manipulator system, a 225 kV open tube microfocus X-ray source and a 450 kV closed tube millifocus X-ray source. The 225 kV X-ray source can be operated in either transmission or reflection mode. The body of the 6-axis manipulator system is fabricated from heavy-duty steel onto which high precision linear and rotary motors have been mounted in order to achieve high accuracy, stability and repeatability. A source-detector distance of up to 2.5 m can be achieved. The system is controlled by a proprietary X-ray CT operating system developed by SIMTech. The system currently can accommodate samples up to 0.5 x 0.5 x 0.5 m in size with weight up to 50 kg. These specifications will be increased to 1.0 x 1.0 x 1.0 m and 100 kg in the future

  6. Experience of Integrated Safeguards Approach for Large-scale Hot Cell Laboratory

    International Nuclear Information System (INIS)

    Miyaji, N.; Kawakami, Y.; Koizumi, A.; Otsuji, A.; Sasaki, K.

    2010-01-01

    The Japan Atomic Energy Agency (JAEA) has been operating a large-scale hot cell laboratory, the Fuels Monitoring Facility (FMF), located near the experimental fast reactor Joyo at the Oarai Research and Development Center (JNC-2 site). The FMF conducts post irradiation examinations (PIE) of fuel assemblies irradiated in Joyo. The assemblies are disassembled and non-destructive examinations, such as X-ray computed tomography tests, are carried out. Some of the fuel pins are cut into specimens and destructive examinations, such as ceramography and X-ray micro analyses, are performed. Following PIE, the tested material, in the form of a pin or segments, is shipped back to a Joyo spent fuel pond. In some cases, after reassembly of the examined irradiated fuel pins is completed, the fuel assemblies are shipped back to Joyo for further irradiation. For the IAEA to apply the integrated safeguards approach (ISA) to the FMF, a new verification system on material shipping and receiving process between Joyo and the FMF has been established by the IAEA under technical collaboration among the Japan Safeguard Office (JSGO) of MEXT, the Nuclear Material Control Center (NMCC) and the JAEA. The main concept of receipt/shipment verification under the ISA for JNC-2 site is as follows: under the IS, the FMF is treated as a Joyo-associated facility in terms of its safeguards system because it deals with the same spent fuels. Verification of the material shipping and receiving process between Joyo and the FMF can only be applied to the declared transport routes and transport casks. The verification of the nuclear material contained in the cask is performed with the method of gross defect at the time of short notice random interim inspections (RIIs) by measuring the surface neutron dose rate of the cask, filled with water to reduce radiation. The JAEA performed a series of preliminary tests with the IAEA, the JSGO and the NMCC, and confirmed from the standpoint of the operator that this

  7. Critical Causes of Degradation in Integrated Laboratory Scale Cells during High Temperature Electrolysis

    Energy Technology Data Exchange (ETDEWEB)

    M.S. Sohal; J.E. O'Brien; C.M. Stoots; J.J. Hartvigsen; D. Larsen; S. Elangovan; J.S. Herring; J.D. Carter; V.I. Sharma; B. Yildiz

    2009-05-01

    An ongoing project at Idaho National Laboratory involves generating hydrogen from steam using solid oxide electrolysis cells (SOEC). This report describes background information about SOECs, the Integrated Laboratory Scale (ILS) testing of solid-oxide electrolysis stacks, ILS performance degradation, and post-test examination of SOECs by various researchers. The ILS test was a 720-cell, three-module test comprised of 12 stacks of 60 cells each. A peak H2 production rate of 5.7 Nm3/hr was achieved. Initially, the module area-specific resistance ranged from 1.25 Ωcm2 to just over 2 Ωcm2. The total H2 production rate decreased from 5.7 Nm3/hr to a steady-state value of 0.7 Nm3/hr. The decrease was primarily due to cell degradation. Post-test examination by Ceramatec showed that the hydrogen electrode appeared to be in good condition. The oxygen evolution electrode does show delamination in operation and an apparent foreign layer deposited at the electrolyte interface. Post-test examination by Argonne National Laboratory showed that the O2-electrode delaminated from the electrolyte near the edge. One possible reason for this delamination is excessive pressure buildup with high O2 flow in the over-sintered region. According to post-test examination at the Massachusetts Institute of Technology, electrochemical reactions have been recognized as one of the prevalent causes of degradation. Specifically, two important degradation mechanisms were examined: (1) transport of Cr-containing species from steel interconnects into the oxygen electrode and LSC bond layers in SOECs, and (2) cation segregation and phase separation in the bond layer. INL conducted a workshop on October 27, 2008 to discuss possible causes of degradation in a SOEC stack. Generally, it was agreed that the following are major degradation issues relating to SOECs: • Delamination of the O2-electrode and bond layer on the steam/O2-electrode side • Contaminants (Ni, Cr, Si, etc.) on reaction sites

  8. PIV measurements of the turbulence integral length scale on cold combustion flow field of tangential firing boiler

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wen-fei; Xie, Jing-xing; Gong, Zhi-jun; Li, Bao-wei [Inner Mongolia Univ. of Science and Technology, Baotou (China). Inner Mongolia Key Lab. for Utilization of Bayan Obo Multi-Metallic Resources: Elected State Key Lab.

    2013-07-01

    The combustion of pulverized coal in tangential firing boilers has prominent significance for improving boiler operation efficiency and reducing NO{sub X} emissions. This paper aims at researching the complex turbulent vortex coherent structure formed by the four corner jets in the burner zone; to this end, a cold experimental model of a tangential firing boiler has been built. By employing the spatial correlation analysis method and the PIV (Particle Image Velocimetry) technique, the vortex scale distribution on three typical horizontal layers of the model, based on the turbulent Integral Length Scale (ILS), has been researched. According to the correlation analysis of the ILS and the temporal average velocity, it can be seen that the turbulent vortex scale distribution in the burner zone of the model is affected by both the jet velocity and the position of the wind layers, and is not linear with the variation of jet velocity. The vortex scale distribution of the upper primary air is significantly different from the others. Therefore, studying the turbulent vortex integral scale provides theoretical guidance for high-efficiency clean combustion of pulverized coal.
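
    The ILS itself is obtained from PIV vector fields by integrating the spatial autocorrelation of the fluctuating velocity up to its first zero crossing. A minimal sketch, with a synthetic velocity row standing in for real PIV data and an assumed grid spacing:

```python
# Integral length scale (ILS) from a PIV-like velocity row (synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(1)
N, DX = 512, 0.002                  # vectors per row, grid spacing in metres (assumed)

# Synthetic fluctuating velocity with ~10-cell correlation (smoothed white noise)
kernel = np.exp(-np.arange(-30, 31) ** 2 / (2 * 10.0 ** 2))
u = np.convolve(rng.standard_normal(N + 60), kernel / kernel.sum(), mode="valid")
u -= u.mean()

# Autocorrelation estimate R(r) for separations r = 0 .. N/2 cells, normalized to R(0)=1
R = np.array([np.mean(u[:N - r] * u[r:]) for r in range(N // 2)])
R /= R[0]

zero = int(np.argmax(R <= 0)) if np.any(R <= 0) else len(R)  # first zero crossing
ils = DX * (R[:zero].sum() - 0.5 * (R[0] + R[zero - 1]))     # trapezoidal integral of R
print(f"integral length scale ≈ {ils * 1000:.1f} mm")
```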

  9. Temporal integration of loudness measured using categorical loudness scaling and matching procedures

    DEFF Research Database (Denmark)

    Valente, Daniel L.; Joshi, Suyash Narendra; Jesteadt, Walt

    2011-01-01

    integration of loudness and previously reported nonmonotonic behavior observed at mid sound pressure levels is replicated with this procedure. Stimuli that are assigned to the same category are effectively matched in loudness, allowing the measurement of temporal integration with CLS without curve...

  10. Wafer Scale Integration of CMOS Chips for Biomedical Applications via Self-Aligned Masking.

    Science.gov (United States)

    Uddin, Ashfaque; Milaninia, Kaveh; Chen, Chin-Hsuan; Theogarajan, Luke

    2011-12-01

    This paper presents a novel technique for the integration of small CMOS chips into a large-area substrate. A key component of the technique is CMOS-chip-based self-aligned masking, which allows for the fabrication of sockets in wafers that are at most 5 µm larger than the chip on each side. The chip and the large-area substrate are bonded onto a carrier such that the top surfaces of the two components are flush. The unique features of this technique enable the integration of macroscale components, such as leads and microfluidics. Furthermore, the integration process allows for MEMS micromachining after CMOS die-wafer integration. To demonstrate the capabilities of the proposed technology, a low-power integrated potentiostat chip for biosensing, implemented in the AMI 0.5 µm CMOS technology, is integrated in a silicon substrate. The horizontal gap and the vertical displacement between the chip and the large-area substrate measured after the integration were 4 µm and 0.5 µm, respectively. A total of 104 interconnects was patterned with high-precision alignment. Electrical measurements have shown that the functionality of the chip is not affected by the integration process.

  11. Scaling Up Renewable Energy Generation: Aligning Targets and Incentives with Grid Integration Considerations, Greening The Grid

    Energy Technology Data Exchange (ETDEWEB)

    Katz, Jessica; Cochran, Jaquelin

    2015-05-27

    Greening the Grid provides technical assistance to energy system planners, regulators, and grid operators to overcome challenges associated with integrating variable renewable energy into the grid. This document, part of a Greening the Grid toolkit, provides power system planners with tips to help secure and sustain investment in new renewable energy generation by aligning renewable energy policy targets and incentives with grid integration considerations.

  12. Stepwise integral scaling method for severe accident analysis and its application to corium dispersion in direct containment heating

    International Nuclear Information System (INIS)

    Ishii, M.; Zhang, G.; No, H. C.; Eltwila, F.

    1994-01-01

    Accident sequences which lead to severe core damage and to the possible release of radioactive fission products into the environment have a very low probability. However, interest in this area increased significantly due to the occurrence of the small break loss-of-coolant accident at TMI-2, which led to partial core damage, and of the Chernobyl accident in the former USSR, which led to extensive core disassembly and significant release of fission products over several countries. In particular, the latter accident raised international concern over the potential consequences of severe accidents in nuclear reactor systems. One of the significant shortcomings in the analyses of severe accidents is the lack of well-established and reliable scaling criteria for various multiphase flow phenomena. However, scaling criteria are essential to severe accident analysis, because full-scale tests are basically impossible to perform. They are required for (1) designing scaled-down or simulation experiments, (2) evaluating data and extrapolating the data to prototypic conditions, and (3) developing correctly scaled physical models and correlations. In view of this, a new scaling method is developed for the analysis of severe accidents. Its approach is quite different from that of conventional methods. In order to demonstrate its applicability, this new stepwise integral scaling method has been applied to the analysis of the corium dispersion problem in direct containment heating. ((orig.))

  13. Integrating regional and continental scale comparisons of tree composition in Amazonian terra firme forests

    Science.gov (United States)

    Honorio Coronado, E. N.; Baker, T. R.; Phillips, O. L.; Pitman, N. C. A.; Pennington, R. T.; Vásquez Martínez, R.; Monteagudo, A.; Mogollón, H.; Dávila Cardozo, N.; Ríos, M.; García-Villacorta, R.; Valderrama, E.; Ahuite, M.; Huamantupa, I.; Neill, D. A.; Laurance, W. F.; Nascimento, H. E. M.; Soares de Almeida, S.; Killeen, T. J.; Arroyo, L.; Núñez, P.; Freitas Alvarado, L.

    2009-01-01

    We contrast regional and continental-scale comparisons of the floristic composition of terra firme forest in South Amazonia, using 55 plots across Amazonia and a subset of 30 plots from northern Peru and Ecuador. Firstly, we examine the floristic patterns using both genus- or species-level data and find that the species-level analysis more clearly distinguishes different plot clusters. Secondly, we compare the patterns and causes of floristic differences at regional and continental scales. At a continental scale, ordination analysis shows that species of Lecythidaceae and Sapotaceae are gradually replaced by species of Arecaceae and Myristicaceae from eastern to western Amazonia. These floristic gradients are correlated with gradients in soil fertility and dry season length, similar to previous studies. At a regional scale, similar patterns are found within north-western Amazonia, where differences in soil fertility distinguish plots where species of Lecythidaceae, characteristic of poor soils, are gradually replaced by species of Myristicaceae on richer soils. The main coordinate of this regional-scale ordination correlates mainly with concentrations of available calcium and magnesium. Thirdly, we ask, at a regional scale within north-western Amazonia, whether soil fertility or other distance-dependent processes are more important for determining variation in floristic composition. A Mantel test indicates that both soils and geographical distance have a similar and significant role in determining floristic similarity across this region. Overall, these results suggest that regional-scale variation in floristic composition can rival continental-scale differences within Amazonian terra firme forests, and that variation in floristic composition at both scales is dependent on a range of processes that include both habitat specialisation related to edaphic conditions and other distance-dependent processes. To fully account for regional scale variation in continental
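
    The Mantel test used in the third step admits a compact sketch: correlate the condensed floristic and geographic distance matrices, then build a null distribution by permuting the plot labels. The coordinates and compositions below are synthetic stand-ins for the plot data:

```python
# Simple Mantel test between geographic and floristic distance matrices (synthetic data).
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, (30, 2))                # 30 hypothetical plots (km)
# species composition drifts with location, so similarity decays with distance
comp = np.hstack([coords / 100.0, rng.normal(0, 0.2, (30, 8))])
geo, flor = pdist(coords), pdist(comp)

def mantel(a, b, n_perm=999):
    """Correlation of two condensed distance matrices with a permutation p-value."""
    r_obs = np.corrcoef(a, b)[0, 1]
    B = squareform(b)
    n = B.shape[0]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                       # relabel plots in one matrix
        if np.corrcoef(a, squareform(B[np.ix_(p, p)]))[0, 1] >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

r, pval = mantel(geo, flor)
print(f"Mantel r = {r:.3f}, p = {pval:.4f}")
```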

  14. Instanton scale cutoff due to the introduction of link constraints in the functional integral

    International Nuclear Information System (INIS)

    Bachas, C.P.

    1981-01-01

    We compute the contribution of instantons of fixed scale rho to the Wilson loop of a square (plaquette) of size a, and compare the result to its asymptotic forms in the large- and small-a/rho limits. We deduce that the scale cutoff of instantons renormalizing the coupling of an effective lattice theory lies between 2a/3 and a

  15. The Conscientious Responders Scale Helps Researchers Verify the Integrity of Personality Questionnaire Data.

    Science.gov (United States)

    Marjanovic, Zdravko; Bajkov, Lisa; MacDonald, Jennifer

    2018-01-01

    The Conscientious Responders Scale is a five-item embeddable validity scale that differentiates between conscientious and indiscriminate responding (CR and IR) in personality-questionnaire data. This investigation presents further evidence of its validity and generalizability across two experiments. Study 1 tests its sensitivity to questionnaire length, a known cause of IR, and tries to provoke IR by manipulating psychological reactance. As expected, short questionnaires produced higher Conscientious Responders Scale scores than long questionnaires, and Conscientious Responders Scale scores were unaffected by the reactance manipulations. Study 2 tests concerns that the Conscientious Responders Scale's unusual item content could potentially irritate and baffle responders, ironically increasing rates of IR. We administered two nearly identical questionnaires: one with an embedded Conscientious Responders Scale and one without it. Psychometric comparisons revealed no differences across the questionnaires' means, variances, interitem response consistencies, and Cronbach's alphas. In sum, the Conscientious Responders Scale is highly sensitive to questionnaire length, a known correlate of IR, and can be embedded harmlessly in questionnaires without provoking IR or changing the psychometrics of other measures.

  16. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J.E. [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi-regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi-regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs.

  17. A promising future for integrative biodiversity research: an increased role of scale-dependency and functional biology

    Science.gov (United States)

    Schmitz, L.

    2016-01-01

    Studies into the complex interaction between an organism and changes to its biotic and abiotic environment are fundamental to understanding what regulates biodiversity. These investigations occur at many phylogenetic, temporal and spatial scales and within a variety of biological and geological disciplines but often in relative isolation. This issue focuses on what can be achieved when ecological mechanisms are integrated into analyses of deep-time biodiversity patterns through the union of fossil and extant data and methods. We expand upon this perspective to argue that, given its direct relevance to the current biodiversity crisis, greater integration is needed across biodiversity research. We focus on the need to understand scaling effects, how lower-level ecological and evolutionary processes scale up and vice versa, and the importance of incorporating functional biology. Placing function at the core of biodiversity research is fundamental, as it establishes how an organism interacts with its abiotic and biotic environment and it is functional diversity that ultimately determines important ecosystem processes. To achieve full integration, concerted and ongoing efforts are needed to build a united and interactive community of biodiversity researchers, with education and interdisciplinary training at its heart. PMID:26977068

  18. A promising future for integrative biodiversity research: an increased role of scale-dependency and functional biology.

    Science.gov (United States)

    Price, S A; Schmitz, L

    2016-04-05

    Studies into the complex interaction between an organism and changes to its biotic and abiotic environment are fundamental to understanding what regulates biodiversity. These investigations occur at many phylogenetic, temporal and spatial scales and within a variety of biological and geological disciplines but often in relative isolation. This issue focuses on what can be achieved when ecological mechanisms are integrated into analyses of deep-time biodiversity patterns through the union of fossil and extant data and methods. We expand upon this perspective to argue that, given its direct relevance to the current biodiversity crisis, greater integration is needed across biodiversity research. We focus on the need to understand scaling effects, how lower-level ecological and evolutionary processes scale up and vice versa, and the importance of incorporating functional biology. Placing function at the core of biodiversity research is fundamental, as it establishes how an organism interacts with its abiotic and biotic environment and it is functional diversity that ultimately determines important ecosystem processes. To achieve full integration, concerted and ongoing efforts are needed to build a united and interactive community of biodiversity researchers, with education and interdisciplinary training at its heart. © 2016 The Author(s).

  19. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi-regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi-regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs

  20. IMGMD: A platform for the integration and standardisation of In silico Microbial Genome-scale Metabolic Models.

    Science.gov (United States)

    Ye, Chao; Xu, Nan; Dong, Chuan; Ye, Yuannong; Zou, Xuan; Chen, Xiulai; Guo, Fengbiao; Liu, Liming

    2017-04-07

    Genome-scale metabolic models (GSMMs) constitute a platform that combines genome sequences and detailed biochemical information to quantify microbial physiology at the system level. To improve the unity, integrity, correctness, and format of data in published GSMMs, a consensus IMGMD database was built in the LAMP (Linux + Apache + MySQL + PHP) system by integrating and standardizing 328 GSMMs constructed for 139 microorganisms. The IMGMD database can help microbial researchers download manually curated GSMMs, rapidly reconstruct standard GSMMs, design pathways, and identify metabolic targets for strain improvement strategies. Moreover, the IMGMD database facilitates the integration of wet-lab and in silico data to gain additional insight into microbial physiology. The IMGMD database is freely available, without any registration requirements, at http://imgmd.jiangnan.edu.cn/database.

  1. Data Portal for the Library of Integrated Network-based Cellular Signatures (LINCS) program: integrated access to diverse large-scale cellular perturbation response data

    Science.gov (United States)

    Koleti, Amar; Terryn, Raymond; Stathias, Vasileios; Chung, Caty; Cooper, Daniel J; Turner, John P; Vidović, Dušica; Forlin, Michele; Kelley, Tanya T; D’Urso, Alessandro; Allen, Bryce K; Torre, Denis; Jagodnik, Kathleen M; Wang, Lily; Jenkins, Sherry L; Mader, Christopher; Niu, Wen; Fazel, Mehdi; Mahi, Naim; Pilarczyk, Marcin; Clark, Nicholas; Shamsaei, Behrouz; Meller, Jarek; Vasiliauskas, Juozas; Reichard, John; Medvedovic, Mario; Ma’ayan, Avi; Pillai, Ajay

    2018-01-01

    Abstract The Library of Integrated Network-based Cellular Signatures (LINCS) program is a national consortium funded by the NIH to generate a diverse and extensive reference library of cell-based perturbation-response signatures, along with novel data analytics tools to improve our understanding of human diseases at the systems level. In contrast to other large-scale data generation efforts, LINCS Data and Signature Generation Centers (DSGCs) employ a wide range of assay technologies cataloging diverse cellular responses. Integration of, and unified access to, LINCS data has therefore been particularly challenging. The Big Data to Knowledge (BD2K) LINCS Data Coordination and Integration Center (DCIC) has developed data standards specifications, data processing pipelines, and a suite of end-user software tools to integrate and annotate LINCS-generated data and to make LINCS signatures searchable and usable for different types of users. Here, we describe the LINCS Data Portal (LDP) (http://lincsportal.ccs.miami.edu/), a unified web interface to access datasets generated by the LINCS DSGCs, and its underlying database, the LINCS Data Registry (LDR). LINCS data served on the LDP contain extensive metadata and curated annotations. We highlight the features of the LDP user interface that are designed to enable search, browsing, exploration, download and analysis of LINCS data and related curated content. PMID:29140462

  2. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Al-Harthi, Noha A.; Chen, Rui; Yokota, Rio; Bagci, Hakan; Keyes, David E.

    2018-01-01

    scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory
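
    The solver structure described, a Krylov iteration whose only access to the system matrix is a fast matrix-vector product, can be sketched as follows. A dense Helmholtz-kernel matrix stands in for the FMM here, and the geometry, wavenumber and scaling are invented toy values:

```python
# GMRES with a black-box matvec: the slot where an FMM-accelerated operator plugs in.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# toy collocation points on a grid (a stand-in for a scatterer's surface mesh)
g = np.linspace(0.05, 0.95, 7)
pts = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
N, WAVENUM = len(pts), 2 * np.pi                     # 343 unknowns, assumed wavenumber

r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(r, 1.0)                             # placeholder; diagonal zeroed below
kernel = np.exp(1j * WAVENUM * r) / (4 * np.pi * r)  # Helmholtz Green's function
np.fill_diagonal(kernel, 0.0)                        # self-terms need singular quadrature
A_dense = np.eye(N) + kernel / N                     # scaled so the sketch is well posed

def matvec(x):
    # In the real solver this product is the FMM tree traversal, O(N log N);
    # the dense matrix is never formed.
    return A_dense @ x

A = LinearOperator((N, N), matvec=matvec, dtype=complex)
b = np.ones(N, dtype=complex)                        # stand-in excitation vector
x, info = gmres(A, b, restart=50, maxiter=200)
res = np.linalg.norm(A_dense @ x - b) / np.linalg.norm(b)
print(f"gmres info={info}, relative residual {res:.2e}")
```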

  3. The application of J integral to measure cohesive laws in materials undergoing large scale yielding

    DEFF Research Database (Denmark)

    Sørensen, Bent F.; Goutianos, Stergios

    2015-01-01

    We explore the possibility of determining cohesive laws by the J-integral approach for materials having non-linear stress-strain behaviour (e.g. polymers and composites) by the use of a DCB sandwich specimen, consisting of stiff elastic beams bonded to the non-linear test material, loaded with pure bending moments. For a wide range of parameters of the non-linear material, the plastic unloading during crack extension is small, resulting in J integral values (fracture resistance) that deviate at most 15% from the work of the cohesive traction. Thus the method can be used to extract the cohesive laws directly from experiments without any presumption about their shape. Finally, the DCB sandwich specimen was also analysed using the I integral to quantify the overestimation of the steady-state fracture resistance obtained using the J integral based method.
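
    On the data side, the J-integral route to a cohesive law reduces to differentiating the recorded fracture resistance with respect to the measured end-opening, sigma(delta) = dJ/d(delta). A minimal sketch with a synthetic J(delta) record (real measurements would need smoothing before differentiation):

```python
# Recovering a cohesive law from a J(delta) record by differentiation (synthetic data).
import numpy as np

delta = np.linspace(0.0, 0.2e-3, 101)                 # end-opening (m), assumed range
sigma_true = 30e6 * np.exp(-delta / 0.05e-3)          # "hidden" cohesive law (Pa)

# J is the work of the cohesive traction: the integral of sigma over the end-opening.
J = np.concatenate(([0.0], np.cumsum(
        0.5 * (sigma_true[1:] + sigma_true[:-1]) * np.diff(delta))))

sigma_est = np.gradient(J, delta)                     # cohesive law as dJ/d(delta)
err = np.max(np.abs(sigma_est - sigma_true)) / sigma_true[0]
print(f"max relative error of the recovered traction: {err:.2%}")
```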

  4. Two new characterizations of universal integrals on the scale [ 0, 1

    Czech Academy of Sciences Publication Activity Database

    Greco, S.; Mesiar, Radko; Rindone, F.

    2014-01-01

    Roč. 267, č. 1 (2014), s. 217-224 ISSN 0020-0255 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : universal integral * non-additive integral * fuzzy measure Subject RIV: BA - General Mathematics Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0432225.pdf
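
    The record above does not reproduce any definitions; two standard examples (textbook material, not taken from the cited paper) show what a universal integral on the scale [0, 1] looks like, for a capacity (monotone measure) mu and a measurable f with values in [0, 1]:

```latex
% For a capacity $\mu$ on $(X,\mathcal{A})$ with $\mu(\emptyset)=0$, $\mu(X)=1$,
% and measurable $f\colon X\to[0,1]$:
\[
  \mathrm{Ch}_\mu(f) = \int_0^1 \mu\bigl(\{x \in X : f(x) \ge t\}\bigr)\,dt
  \qquad \text{(Choquet integral)}
\]
\[
  \mathrm{Su}_\mu(f) = \sup_{t \in [0,1]} \min\bigl(t,\; \mu(\{x \in X : f(x) \ge t\})\bigr)
  \qquad \text{(Sugeno integral)}
\]
```

    Both depend on f only through the decumulative function t ↦ μ({f ≥ t}), which is the defining feature of universal integrals.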

  5. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    Science.gov (United States)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter, and so the next generation of ELTs requires orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low-power x86 CPU cores and high-bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT-scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT-scale single conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached, the KNL can process the real-time control loop at up to 966 Hz, the maximum frame rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real-time control.
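
    The latency-critical step of such an RTC pipeline is the reconstruction matrix-vector multiply mapping wavefront-sensor slopes to actuator commands. The sketch below times that step at assumed ELT-like sizes; the sizes and the use of NumPy are illustrative, not the paper's implementation, but the printout makes the memory-bandwidth pressure visible:

```python
# Timing the RTC reconstruction step, commands = R @ slopes (assumed ELT-like sizes).
import time
import numpy as np

N_SLOPES, N_ACTS, FRAMES = 12800, 5000, 200      # assumed SCAO-scale dimensions
rng = np.random.default_rng(5)
R = rng.standard_normal((N_ACTS, N_SLOPES)).astype(np.float32)   # control matrix
slopes = rng.standard_normal((FRAMES, N_SLOPES)).astype(np.float32)

lat = np.empty(FRAMES)
for i in range(FRAMES):
    t0 = time.perf_counter()
    cmds = R @ slopes[i]                         # the latency-critical step, every frame
    lat[i] = time.perf_counter() - t0

lat_us = lat * 1e6
print(f"MVM latency: mean {lat_us.mean():.0f} us, jitter {lat_us.std():.1f} us RMS")
print(f"matrix traffic per frame: {R.nbytes / 1e6:.0f} MB -> bandwidth bound")
```

    The whole control matrix must stream through memory every frame, which is why the KNL's high-bandwidth on-package memory, rather than raw core count, is the decisive feature.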

  6. Australian national networked tele-test facility for integrated systems

    Science.gov (United States)

    Eshraghian, Kamran; Lachowicz, Stefan W.; Eshraghian, Sholeh

    2001-11-01

    The Australian Commonwealth government recently announced a grant of 4.75 million, as part of a 13.5 million program, to establish a world-class networked IC tele-test facility in Australia. The facility will be based on a state-of-the-art semiconductor tester located at Edith Cowan University in Perth that will operate as a virtual centre spanning Australia. Satellite nodes will be located at the University of Western Australia, Griffith University, Macquarie University, Victoria University and the University of Adelaide. The facility will provide vital equipment to take Australia to the frontier of critically important and expanding fields in microelectronics research and development. The tele-test network will provide a state-of-the-art environment for the electronics and microelectronics research and industry communities around Australia to test and prototype Very Large Scale Integrated (VLSI) circuits and other System On a Chip (SOC) devices prior to moving to the manufacturing stage. Such testing is absolutely essential to ensure that a device performs to specification. This paper presents the current context in which the testing facility is being established, the methodologies behind the integration of design and test strategies, and the target shape of the tele-testing facility.

  7. Factorial and construct validity of the revised short form integrative psychotherapy alliance scales for family, couple, and individual therapy.

    Science.gov (United States)

    Pinsof, William M; Zinbarg, Richard; Knobloch-Fedders, Lynne M

    2008-09-01

    The Integrative Psychotherapy Alliance model brought an interpersonal and systemic perspective to bear on theory, research, and practice on the psychotherapeutic alliance. Questions have been raised about the independence of the theoretical factors in the model and their operationalization in the Individual, Couple, and Family Therapy Alliance Scales. This paper presents results of a confirmatory factor analysis of the scales that delineated at least three distinct interpersonal factors as well as shorter versions of the three scales to facilitate their use in research and practice. The paper also presents the results of a study testing each factor's association with client retention and progress over the first eight sessions in individual and couple therapy. At least two of the interpersonal factors were uniquely associated with progress in individual and couple functioning. Implications of the results for theory, research, practice, and training in individual, couple, and family therapy are elaborated.

  8. Effects of large scale integration of wind and solar energy in Japan

    Science.gov (United States)

    Esteban, Miguel; Zhang, Qi; Utama, Agya; Tezuka, Tetsuo; Ishihara, Keiichi

    2010-05-01

    results for the country as a whole are considered it is still substantial. The results are greatly dependent on the mix between the proposed renewables (solar and wind), and by comparing different distributions and mixes, the optimum composition for the target country can be established. The methodology proposed is able to obtain the optimum mix of solar and wind power for a given system, provided that adequate storage capacity exists to allow excess capacity to be used at times of low electricity production (at the comparatively rare times when there is neither enough sun nor wind throughout the country). This highlights the challenges of large-scale integration of renewable technologies into the electricity grid, and the necessity of combining such a system with other renewables, such as hydro or ocean energy, to further even out the peaks and lows in the demand.

  9. Scaling Down to Scale Up: A Health Economic Analysis of Integrating Point-of-Care Syphilis Testing into Antenatal Care in Zambia during Pilot and National Rollout Implementation.

    Directory of Open Access Journals (Sweden)

    Katharine D Shelley

    Full Text Available Maternal syphilis results in an estimated 500,000 stillbirths and neonatal deaths annually in Sub-Saharan Africa. Despite the existence of national guidelines for antenatal syphilis screening, syphilis testing is often limited by inadequate laboratory and staff services. Recent availability of inexpensive rapid point-of-care syphilis tests (RST can improve access to antenatal syphilis screening. A 2010 pilot in Zambia explored the feasibility of integrating RST within prevention of mother-to-child-transmission of HIV services. Following successful demonstration, the Zambian Ministry of Health adopted RSTs into national policy in 2011. Cost data from the pilot and 2012 preliminary national rollout were extracted from project records, antenatal registers, clinic staff interviews, and facility observations, with the aim of assessing the cost and quality implications of scaling up a successful pilot into a national rollout. Start-up, capital, and recurrent cost inputs were collected, including costs of extensive supervision and quality monitoring during the pilot. Costs were analysed from a provider's perspective, incremental to existing antenatal services. Total and unit costs were calculated and a multivariate sensitivity analysis was performed. Our accompanying qualitative study by Ansbro et al. (2015 elucidated quality assurance and supervisory system challenges experienced during rollout, which helped explain key cost drivers. The average unit cost per woman screened during rollout ($11.16 was more than triple the pilot unit cost ($3.19. While quality assurance costs were much lower during rollout, the increased unit costs can be attributed to several factors, including higher RST prices and lower RST coverage during rollout, which reduced economies of scale. Pilot and rollout cost drivers differed due to implementation decisions related to training, supervision, and quality assurance. This study explored the cost of integrating RST into

  10. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    Science.gov (United States)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, a speed-up of approximately 45 times is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the best conventional result, this still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.
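
    As a point of reference for what motion-sensor fusion involves at its simplest, the sketch below fuses gyroscope and accelerometer readings with a complementary filter to estimate a tilt angle. This is a generic building block, not the authors' recognition algorithm; all constants are illustrative.

    ```python
    # Complementary filter: integrate the gyro for short-term accuracy and
    # lean on the accelerometer's gravity estimate for long-term stability.
    import math

    def complementary_filter(gyro_rates, accel_xyz, dt=0.01, alpha=0.98):
        """gyro_rates: rad/s per sample; accel_xyz: (ax, ay, az) tuples."""
        angle = 0.0
        out = []
        for w, (ax, ay, az) in zip(gyro_rates, accel_xyz):
            accel_angle = math.atan2(ay, az)      # tilt from gravity direction
            angle = alpha * (angle + w * dt) + (1 - alpha) * accel_angle
            out.append(angle)
        return out

    # Static device: small gyro bias, gravity along z.
    angles = complementary_filter([0.001] * 100, [(0.0, 0.0, 9.81)] * 100)
    print(f"final tilt estimate: {angles[-1]:.4f} rad")
    ```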

  11. Prototype architecture for a VLSI level zero processing system. [Space Station Freedom

    Science.gov (United States)

    Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.

    1989-01-01

    The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Owing to the new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, easy maintainability and increased reliability. Although extensive control functions are implemented in hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with a combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream will be steered into different processing channels in accordance with their virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rates exceeding 150 Mbps will be available from commercial vendors due to advances in disk drive technology.

  12. VLSI design of an RSA encryption/decryption chip using systolic array based architecture

    Science.gov (United States)

    Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi

    2016-09-01

    This article presents the VLSI design of a configurable RSA public-key cryptosystem supporting 512-bit, 1024-bit and 2048-bit keys based on the Montgomery algorithm, achieving clock-cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify computational complexity, which, together with the systolic array concept for electric circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely input/output modules, a registers module, an arithmetic module and a control module. We applied the concept of systolic arrays to design the RSA encryption/decryption chip using the VHDL hardware language and verified it using the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm² (4.58 × 4.58 mm² with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
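
    The two arithmetic ingredients named in this abstract, Montgomery modular multiplication and binary (square-and-multiply) exponentiation, are easy to state in software, even though the chip realizes them as a bit-level systolic array. The following word-level Python sketch shows the arithmetic only; the modulus and operand sizes are toy values.

    ```python
    # Word-level sketch: Montgomery reduction (REDC) plus the binary
    # exponentiation method. The chip implements the same arithmetic as a
    # bit-level systolic array; this is only the math.
    def montgomery_setup(n, k):
        R = 1 << k                         # R = 2^k > n, gcd(R, n) = 1 for odd n
        n_prime = -pow(n, -1, R) % R       # n' = -n^(-1) mod R
        return R, n_prime

    def mont_mul(a, b, n, k, R, n_prime):
        """REDC(a*b): returns a*b*R^(-1) mod n without dividing by n."""
        t = a * b
        m = ((t & (R - 1)) * n_prime) & (R - 1)   # mod R via bit masking
        u = (t + m * n) >> k                       # exact division by R
        return u - n if u >= n else u

    def mont_pow(base, exp, n, k):
        R, n_prime = montgomery_setup(n, k)
        x = (base * R) % n                 # into the Montgomery domain
        acc = R % n                        # 1 in the Montgomery domain
        for bit in bin(exp)[2:]:           # binary method, MSB first
            acc = mont_mul(acc, acc, n, k, R, n_prime)
            if bit == '1':
                acc = mont_mul(acc, x, n, k, R, n_prime)
        return mont_mul(acc, 1, n, k, R, n_prime)  # out of Montgomery domain

    n = 0xD5BBB96D30086EC484EBA3D7F9CAEB07         # odd toy modulus (128-bit)
    assert mont_pow(0x12345, 0x10001, n, k=128) == pow(0x12345, 0x10001, n)
    ```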

  13. Emergent auditory feature tuning in a real-time neuromorphic VLSI system

    Directory of Open Access Journals (Sweden)

    Sadique Sheik

    2012-02-01

    Full Text Available Many sounds of ecological importance, such as communication calls, are characterised by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamocortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectrotemporal patterns. The availability of hardware on which the model can be implemented makes this a significant step towards the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.
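
    The STDP rule referenced here is, in its common pair-based form, a weight update that decays exponentially with the pre/post spike-time difference. A minimal sketch follows; the time constants and learning rates are illustrative, not the values realized on the chip.

    ```python
    # Pair-based exponential STDP: potentiate when pre precedes post,
    # depress otherwise. Constants are illustrative placeholders.
    import math

    def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Weight update for one spike pair; times in ms."""
        dt = t_post - t_pre
        if dt > 0:    # pre before post: potentiation
            return a_plus * math.exp(-dt / tau)
        elif dt < 0:  # post before pre: depression
            return -a_minus * math.exp(dt / tau)
        return 0.0

    w = 0.5
    for t_pre, t_post in [(10, 15), (40, 38), (60, 80)]:
        w += stdp_dw(t_pre, t_post)
        w = min(max(w, 0.0), 1.0)          # clip to the allowed weight range
        print(f"pair ({t_pre},{t_post}) ms -> w = {w:.4f}")
    ```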

  14. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Science.gov (United States)

    McEwan, Alistair; van Schaik, André

    2003-12-01

    The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.
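
    The key technique named at the end of this abstract, mapping a model expressed as first-order differential equations onto a cascade of log-domain filters, rests on each equation having the form tau*dy/dt = x - y. The sketch below simulates the discrete-time behaviour of one such first-order stage (forward Euler, illustrative constants); the silicon implements the same dynamics with translinear circuits.

    ```python
    # One first-order stage, tau*dy/dt = x - y: the prototype equation that
    # each log-domain filter realizes in current-mode analog circuitry.
    def first_order_stage(x_samples, tau=0.005, dt=0.0001):
        """Forward-Euler simulation of tau*dy/dt = x - y."""
        y, out = 0.0, []
        for x in x_samples:
            y += (dt / tau) * (x - y)
            out.append(y)
        return out

    step = [0.0] * 10 + [1.0] * 90                 # step input
    y = first_order_stage(step)
    print(f"after ~1 tau: {y[59]:.3f}, at end: {y[-1]:.3f}")  # approaches 1
    ```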

  15. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Directory of Open Access Journals (Sweden)

    Alistair McEwan

    2003-06-01

    Full Text Available The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  16. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    Science.gov (United States)

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single-neuron dynamics, single-synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3 mm × 3 mm microchip consumes 1.29 mW of power, making it promising for applications including neuromorphic modeling and neural prostheses.
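
    The rate-based first-order kinetic model of the synapses can be written as ds/dt = alpha*T*(1 - s) - beta*s, where s is the fraction of activated receptors and T the transmitter concentration. The sketch below integrates this equation for a single transmitter pulse; the rate constants are illustrative placeholders, not the chip's configured parameters.

    ```python
    # First-order synaptic kinetics: receptor activation s driven by a
    # transmitter concentration pulse T(t). Rate constants are illustrative.
    def synapse_kinetics(T_pulse, alpha=1.1, beta=0.19, dt=0.01, steps=2000):
        """T_pulse(t) -> transmitter concentration; returns the s(t) trace."""
        s, trace = 0.0, []
        for i in range(steps):
            T = T_pulse(i * dt)
            s += dt * (alpha * T * (1.0 - s) - beta * s)   # forward Euler
            trace.append(s)
        return trace

    # Unit transmitter pulse between t = 1 and t = 2 (arbitrary ms units).
    trace = synapse_kinetics(lambda t: 1.0 if 1.0 <= t < 2.0 else 0.0)
    print(f"peak s = {max(trace):.3f}, s at end = {trace[-1]:.4f}")
    ```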

  17. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.

    Science.gov (United States)

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta

    2012-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented, makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  18. CASTOR a VLSI CMOS mixed analog-digital circuit for low noise multichannel counting applications

    International Nuclear Information System (INIS)

    Comes, G.; Loddo, F.; Hu, Y.; Kaplon, J.; Ly, F.; Turchetta, R.; Bonvicini, V.; Vacchi, A.

    1996-01-01

    In this paper we present the design and first experimental results of a VLSI mixed analog-digital 1.2 μm CMOS circuit (CASTOR) for multichannel radiation detector applications demanding low-noise amplification and counting of radiation pulses. This circuit is meant to be connected to pixel-like detectors. Imaging can be obtained by counting the number of hits in each pixel during a user-controlled exposure time. Each channel of the circuit features an analog and a digital part. In the former, a charge preamplifier is followed by a CR-RC shaper with an output buffer and a threshold discriminator. In the digital part, a 16-bit counter is present together with some control logic. The readout of the counters is done serially on a common tri-state output. Daisy-chaining is possible. A 4-channel prototype has been built. This prototype has been optimised for use in the digital radiography Syrmep experiment at the Elettra synchrotron machine in Trieste (Italy); its main design parameters are: shaping time of about 850 ns, gain of 190 mV/fC and ENC (e⁻ rms) = 60 + 17·C(pF). The counting rate per channel, limited by the analog part, can be as high as about 200 kHz. Characterisation of the circuit and first tests with silicon microstrip detectors are presented. They show the circuit works according to design specification and can be used for imaging applications. (orig.)

  19. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Full Text Available Data compression plays a vital role in multimedia devices, presenting information in a succinct form. The DCT structure has long been used for image compression, since it has low complexity and is area-efficient. Similarly, the 2D DCT also provides reasonable data compression, but from an implementation standpoint it calls for more multipliers and adders, and thus leads to larger area and higher power consumption. Taking account of all this, this paper deals with a VLSI architecture for image compression using a ROM-free distributed-arithmetic (DA) based DCT (Discrete Cosine Transform) structure. This technique provides high throughput and is most suitable for real-time implementation. To achieve this, the image matrix is subdivided into odd and even terms and the multiplication functions are replaced by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-wise image quality, which determines new trade-off levels compared to previous techniques. Overall, the proposed architecture yields reduced memory, low power consumption and high throughput. MATLAB is used as a supporting tool for reading input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II to synthesize and obtain details about power and area.
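
    The heart of the adder-minimization idea is that multiplication by a fixed DCT coefficient can be replaced by shifts and adds of the coefficient's set bits once it is quantized to fixed point. The sketch below shows this multiplier-less constant multiplication for one coefficient; the coefficient value and bit width are illustrative.

    ```python
    # Multiplier-less constant multiplication: quantize the coefficient to
    # fixed point, then accumulate shifted copies of x for each set bit.
    def shift_add_mult(x, coeff, frac_bits=8):
        """Compute approximately x * coeff using only shifts and adds."""
        c_fixed = round(coeff * (1 << frac_bits))   # quantized coefficient
        acc, bit = 0, 0
        while c_fixed:
            if c_fixed & 1:
                acc += x << bit                     # add shifted partial product
            c_fixed >>= 1
            bit += 1
        return acc >> frac_bits                     # drop fractional bits

    c = 0.7071   # cos(pi/4), a typical DCT coefficient
    x = 200
    print(shift_add_mult(x, c), round(x * c))       # both print 141
    ```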

  20. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of a code-based cryptography (cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An error control algorithm is employed both for the generation of round keys and for the diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. The probability of retrieving the original key from any other round key is reduced by diffusing non-linear selective bit inversions on the round keys. Randomized selective bit inversions are done on equal lengths of key bits by a Round Constant Feedback Shift Register within the error-correction limits of the chosen code. The complexity of retrieving the original key from any other round key is increased by optimal hardware usage. The proposed design is simulated and synthesized using VHDL coding for a Spartan3E FPGA and results are shown. A comparative analysis is made between 128-bit Advanced Encryption Standard round keys and the proposed round keys to show the security strength of the proposed algorithm. The paper concludes that the chip-based multi-layer key distribution of the proposed algorithm is an enhanced solution to existing threats on cryptography algorithms.
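
    A toy version of the mechanism described, a feedback shift register choosing which key bits to invert in each round with the number of inversions capped (mimicking an error-correction limit), can be sketched as follows. The LFSR polynomial, key width, round count, and flip cap are all hypothetical choices for illustration, not the paper's parameters.

    ```python
    # Toy round-key derivation: an LFSR selects bit positions to invert in
    # each round key, with inversions capped. All parameters hypothetical.
    def lfsr_stream(state, taps=(16, 14, 13, 11), width=16):
        """Fibonacci LFSR over `width` bits; yields the state each step."""
        while True:
            bit = 0
            for t in taps:
                bit ^= (state >> (t - 1)) & 1
            state = ((state << 1) | bit) & ((1 << width) - 1)
            yield state

    def round_keys(master, n_rounds=10, key_bits=128, max_flips=8, seed=0xACE1):
        keys, lfsr = [], lfsr_stream(seed)
        key = master
        for _ in range(n_rounds):
            positions = {next(lfsr) % key_bits for _ in range(max_flips)}
            for p in positions:                 # selective bit inversions
                key ^= 1 << p
            keys.append(key)
        return keys

    rks = round_keys(0x0123456789ABCDEF0123456789ABCDEF)
    print(f"{len(rks)} round keys, first = {rks[0]:032x}")
    ```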

  1. The Palliative Outcome Scale (POS) applied to clinical practice and research: an integrative review.

    Science.gov (United States)

    Rugno, Fernanda Capella; Carlo, Marysia Mara Rodrigues do Prado De

    2016-08-15

    To identify and evaluate the evidence found in the international scientific literature on the application of the Palliative Outcome Scale (POS) in clinical practice and research in Palliative Care (PC). Integrative literature review, through a search of publications in journals indexed in the PubMed/MEDLINE, LILACS, SciELO and CINAHL databases, between the years 1999 and 2014. The final sample consisted of 11 articles. In the data analysis, the articles were classified into 2 units of analysis (studies using the POS as a resource in research and studies using the POS in clinical practice), in which the information was presented in the form of sub-themes related to the selected publications, highlighting the synthesis of the results. The POS emerged as an important tool for measuring outcomes to assess the quality of life of patients and families, the quality of care provided and the PC service organization. The international scientific literature on the application of the POS proved to be relevant to the advancement and consolidation of the field of knowledge related to PC.

  2. LSI microprocessor circuit families based on integrated injection logic. Mikroprotsessornyye komplekty bis na osnove integral'noy inzhektsionnoy logiki

    Energy Technology Data Exchange (ETDEWEB)

    Borisov, V.S.; Vlasov, F.S.; Kaloshkin, E.P.; Serzhanovich, D.S.; Sukhoparov, A.I.

    1984-01-01

    Progress in developing microprocessor computer hardware rests on progress and improvement in systems engineering, circuit engineering, and manufacturing-process methods for the design and development of large-scale integrated circuits (BIS). The development of these methods, with widespread use of computer-aided design (CAD) systems, has enabled 4- and 8-bit microprocessor families (MPK) of LSI circuits based on integrated injection logic (I/sup 2/L), characterized by relatively high speed and low dissipated power. The emergence of LSI and VLSI microprocessor circuits required computer system developers to change the theory and practice of computer system design. Progress in technology altered the established relation between hardware and software component development costs in the systems being designed. A characteristic feature of using LSI circuits is also the necessity of building devices from standard modules of large functional complexity. The existing approaches to composing LSI microprocessor families allow the system developer to choose a particular design methodology, proceeding from the efficiency function and field of application of the system being designed. The efficiency of using microprocessor families is largely governed by the user's in-depth understanding of the structure of LSI microprocessor family circuits and of the features of using them to implement a broad class of computer devices and modules. This book is devoted to this problem.

  3. Integration, Provenance, and Temporal Queries for Large-Scale Knowledge Bases

    OpenAIRE

    Gao, Shi

    2016-01-01

    Knowledge bases that summarize web information in RDF triples deliver many benefits, including support for natural language question answering and powerful structured queries that extract encyclopedic knowledge via SPARQL. Large scale knowledge bases grow rapidly in terms of scale and significance, and undergo frequent changes in both schema and content. Two critical problems have thus emerged: (i) how to support temporal queries that explore the history of knowledge bases or flash-back to th...

  4. Integrating cross-scale analysis in the spatial and temporal domains for classification of behavioral movement

    Directory of Open Access Journals (Sweden)

    Ali Soleymani

    2014-06-01

    Full Text Available Since various behavioral movement patterns are likely to be valid within different, unique ranges of spatial and temporal scales (e.g., instantaneous, diurnal, or seasonal, with the corresponding spatial extents), a cross-scale approach is needed for accurate classification of behaviors expressed in movement. Here, we introduce a methodology for the characterization and classification of behavioral movement data that relies on computing and analyzing movement features jointly in both the spatial and temporal domains. The proposed methodology consists of three stages. In the first stage, focusing on the spatial domain, the underlying movement space is partitioned into several zonings that correspond to different spatial scales, and features related to movement are computed for each partitioning level. In the second stage, concentrating on the temporal domain, several movement parameters are computed from trajectories across a series of temporal windows of increasing sizes, yielding another set of input features for the classification. For both the spatial and the temporal domains, the "reliable scale" is determined by an automated procedure. This is the scale at which the best classification accuracy is achieved, using only spatial or temporal input features, respectively. The third stage takes the measures from the spatial and temporal domains of movement, computed at the corresponding reliable scales, as input features for behavioral classification. With a feature selection procedure, the most relevant features contributing to known behavioral states are extracted and used to learn a classification model. The potential of the proposed approach is demonstrated on a dataset of adult zebrafish (Danio rerio) swimming movements in testing tanks, following exposure to different drug treatments. Our results show that behavioral classification accuracy greatly increases when firstly cross-scale analysis is used to determine the best analysis scale, and
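
    The temporal-domain stage described above amounts to computing movement parameters over windows of several sizes and keeping one feature set per scale. A minimal sketch, with a random-walk trajectory and illustrative window sizes standing in for real tracking data:

    ```python
    # One movement parameter (mean step speed) computed across temporal
    # windows of increasing size: one feature column per temporal scale.
    import numpy as np

    def multiscale_speed_features(xy, dt=1.0, window_sizes=(5, 25, 125)):
        """xy: (T, 2) trajectory; returns dict window_size -> window means."""
        steps = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt  # step speeds
        features = {}
        for w in window_sizes:
            n = len(steps) // w
            features[w] = steps[: n * w].reshape(n, w).mean(axis=1)
        return features

    rng = np.random.default_rng(1)
    trajectory = np.cumsum(rng.standard_normal((1000, 2)), axis=0)  # random walk
    feats = multiscale_speed_features(trajectory)
    for w, v in feats.items():
        print(f"window {w}: {len(v)} features, mean speed {v.mean():.2f}")
    ```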

  5. Bounds of Double Integral Dynamic Inequalities in Two Independent Variables on Time Scales

    Directory of Open Access Journals (Sweden)

    S. H. Saker

    2011-01-01

    Full Text Available Our aim in this paper is to establish some explicit bounds of the unknown function in a certain class of nonlinear dynamic inequalities in two independent variables on time scales which are unbounded above. These on the one hand generalize and on the other hand furnish a handy tool for the study of qualitative as well as quantitative properties of solutions of partial dynamic equations on time scales. Some examples are considered to demonstrate the applications of the results.

  6. Spatial data analysis and integration for regional-scale geothermal potential mapping, West Java, Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Carranza, Emmanuel John M.; Barritt, Sally D. [Department of Earth Systems Analysis, International Institute for Geo-information Science and Earth Observation (ITC), Enschede (Netherlands); Wibowo, Hendro; Sumintadireja, Prihadi [Laboratory of Volcanology and Geothermal, Geology Department, Institute of Technology Bandung (ITB), Bandung (Indonesia)

    2008-06-15

    Conceptual modeling and predictive mapping of potential for geothermal resources at the regional-scale in West Java are supported by analysis of the spatial distribution of geothermal prospects and thermal springs, and their spatial associations with geologic features derived from publicly available regional-scale spatial data sets. Fry analysis shows that geothermal occurrences have regional-scale spatial distributions that are related to Quaternary volcanic centers and shallow earthquake epicenters. Spatial frequency distribution analysis shows that geothermal occurrences have strong positive spatial associations with Quaternary volcanic centers, Quaternary volcanic rocks, quasi-gravity lows, and NE-, NNW-, WNW-trending faults. These geological features, with their strong positive spatial associations with geothermal occurrences, constitute spatial recognition criteria of regional-scale geothermal potential in a study area. Application of data-driven evidential belief functions in GIS-based predictive mapping of regional-scale geothermal potential resulted in delineation of high potential zones occupying 25% of West Java, which is a substantial reduction of the search area for further exploration of geothermal resources. The predicted high potential zones delineate about 53-58% of the training geothermal areas and 94% of the validated geothermal occurrences. The results of this study demonstrate the value of regional-scale geothermal potential mapping in: (a) data-poor situations, such as West Java, and (b) regions with geotectonic environments similar to the study area. (author)

  7. Scaling options for integral experiments for molten salt fluid mechanics and heat transfer

    International Nuclear Information System (INIS)

    Philippe Bardet; Per F Peterson

    2005-01-01

    Full text of publication follows: Molten fluoride salts have potentially large benefits for high-temperature heat transport in fission and fusion energy systems, due to their very low vapor pressures at high temperatures. Molten salts have high volumetric heat capacity compared to high-pressure helium and liquid metals, and have desirable safety characteristics due to their chemical inertness and low pressure. Molten salts have therefore been studied extensively for use in fusion blankets, as an intermediate heat transfer fluid for thermochemical hydrogen production in the Next Generation Nuclear Plant, as a primary coolant for the Advanced High Temperature Reactor, and as a solvent for fuel in the Molten Salt Reactor. This paper presents recent progress in the design and analysis of scaled thermal hydraulics experiments for molten salt systems. We have identified a category of light mineral oils that can be used for scaled experiments. By adjusting the length, velocity, average temperature, and temperature difference scales of the experiment, we show that it is possible to simultaneously match the Reynolds (Re), Froude (Fr), Prandtl (Pr) and Rayleigh (Ra) numbers in the scaled experiments. For example, the light mineral oil Penreco Drakesol 260 AT can be used to simulate the molten salt flibe (Li2BeF4). At 110 deg. C the oil Pr matches 600 deg. C flibe, and at 165 deg. C the oil Pr matches 900 deg. C flibe. Re, Fr, and Ra can then be matched at a length scale of Ls/Lp = 0.40, a velocity scale of Us/Up = 0.63, and a temperature difference scale of ΔTs/ΔTp = 0.29. The Weber number is then matched within a factor of two, Wes/Wep = 0.7. Mechanical pumping power scales as Qp,s/Qp,p = 0.016, while heat inputs scale as Qh,s/Qh,p = 0.010, showing that power inputs to the scaled experiments are very small compared to the prototype system. The scaled system has accelerated time, ts/tp = 0.64. When Re, Fr, Pr and Ra are matched, geometrically scaled
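
    The quoted scale factors follow from elementary similitude arithmetic: once Pr is matched by choosing the oil temperature, matching the Froude number forces Us/Up = (Ls/Lp)^(1/2), and matching the Reynolds number then pins the length scale to Ls/Lp = (nu_s/nu_p)^(2/3). The sketch below reproduces this arithmetic; the kinematic viscosity ratio is a placeholder chosen to be consistent with the quoted factors, not measured property data.

    ```python
    # Similitude arithmetic for simultaneous Re and Fr matching (Pr already
    # matched by temperature choice). nu_ratio is an assumed placeholder.
    nu_ratio = 0.25       # nu_oil / nu_flibe at Pr-matched temperatures (assumed)

    L_ratio = nu_ratio ** (2.0 / 3.0)   # from matching Re and Fr together
    U_ratio = L_ratio ** 0.5            # from matching Fr alone
    t_ratio = L_ratio / U_ratio         # convective time scale (Ls/Us)/(Lp/Up)

    print(f"Ls/Lp = {L_ratio:.2f}, Us/Up = {U_ratio:.2f}, ts/tp = {t_ratio:.2f}")
    # With nu_ratio ~ 0.25 these land near the 0.40 / 0.63 / 0.64 quoted above.
    ```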

  8. Design of 10Gbps optical encoder/decoder structure for FE-OCDMA system using SOA and opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Hwang, Seow; Alameh, Kamal

    2008-01-21

    In this paper we propose and experimentally demonstrate a reconfigurable 10 Gbps frequency-encoded (1D) encoder/decoder structure for optical code division multiple access (OCDMA). The encoder is constructed using a single semiconductor optical amplifier (SOA) and a 1D reflective Opto-VLSI processor. The SOA generates broadband amplified spontaneous emission that is dynamically sliced using digital phase holograms loaded onto the Opto-VLSI processor to generate 1D codewords. The selected wavelengths are injected back into the same SOA for amplification. The decoder is constructed using a single Opto-VLSI processor only. The encoded signal can successfully be retrieved at the decoder side only when the digital phase holograms of the encoder and the decoder are matched. The system performance is measured in terms of the auto-correlation and cross-correlation functions as well as the eye diagram.
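
    The auto-/cross-correlation figure of merit used here has a simple digital analogue: a received spectral codeword correlated against a matched decoder mask yields a high peak, while an unmatched mask yields a low floor. A sketch with illustrative binary wavelength-bin codewords:

    ```python
    # Correlation receiver over wavelength bins: matched mask -> high
    # autocorrelation peak; mismatched mask -> low cross-correlation floor.
    import numpy as np

    code_a = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # codeword A (illustrative)
    code_b = np.array([0, 1, 0, 1, 1, 0, 0, 1])   # codeword B, another user

    def decode(received_spectrum, decoder_mask):
        """Sum the power passed by the decoder's spectral mask."""
        return int(np.dot(received_spectrum, decoder_mask))

    auto = decode(code_a, code_a)     # matched holograms -> peak
    cross = decode(code_a, code_b)    # mismatched holograms -> floor
    print(f"auto-correlation = {auto}, cross-correlation = {cross}")
    ```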

  9. Atypical language laterality is associated with large-scale disruption of network integration in children with intractable focal epilepsy.

    Science.gov (United States)

    Ibrahim, George M; Morgan, Benjamin R; Doesburg, Sam M; Taylor, Margot J; Pang, Elizabeth W; Donner, Elizabeth; Go, Cristina Y; Rutka, James T; Snead, O Carter

    2015-04-01

    Epilepsy is associated with disruption of integration in distributed networks, together with altered localization for functions such as expressive language. The relation between atypical network connectivity and altered localization is unknown. In the current study we tested whether atypical expressive language laterality was associated with the alteration of large-scale network integration in children with medically-intractable localization-related epilepsy (LRE). Twenty-three right-handed children (age range 8-17) with medically-intractable LRE performed a verb generation task in fMRI. Language network activation was identified and the Laterality index (LI) was calculated within the pars triangularis and pars opercularis. Resting-state data from the same cohort were subjected to independent component analysis. Dual regression was used to identify associations between resting-state integration and LI values. Higher positive values of the LI, indicating typical language localization were associated with stronger functional integration of various networks including the default mode network (DMN). The normally symmetric resting-state networks showed a pattern of lateralized connectivity mirroring that of language function. The association between atypical language localization and network integration implies a widespread disruption of neural network development. These findings may inform the interpretation of localization studies by providing novel insights into reorganization of neural networks in epilepsy. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. pySecDec: A toolbox for the numerical evaluation of multi-scale integrals

    Science.gov (United States)

    Borowka, S.; Heinrich, G.; Jahn, S.; Jones, S. P.; Kerner, M.; Schlenk, J.; Zirke, T.

    2018-01-01

    We present pySecDec, a new version of the program SecDec, which performs the factorization of dimensionally regulated poles in parametric integrals and the subsequent numerical evaluation of the finite coefficients. The algebraic part of the program is now written in the form of python modules, which allow very flexible usage. The optimization of the C++ code, generated using FORM, is improved, leading to faster numerical convergence. The new version also creates a library of the integrand functions, such that it can be linked to user-specific code for the evaluation of matrix elements, in a way similar to analytic integral libraries.
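
    The core operation this abstract describes, factorizing a dimensionally regulated pole out of a parametric integral and numerically evaluating the finite coefficient, can be illustrated on a one-dimensional example. The sketch below is not the pySecDec API; it just shows the subtraction I(eps) = f(0)/eps + finite + O(eps) for a test integrand.

    ```python
    # Pole factorization for I(eps) = int_0^1 x^(eps-1) f(x) dx: subtract
    # f(0) to isolate the 1/eps pole, evaluate the finite part numerically.
    import math
    from scipy.integrate import quad
    from scipy.special import gamma, gammainc

    f = lambda x: math.exp(-x)                      # smooth test integrand

    pole_coefficient = f(0.0)                       # coefficient of 1/eps
    finite_part, _ = quad(lambda x: (f(x) - f(0.0)) / x, 0.0, 1.0)

    # Cross-check against the exact value (lower incomplete gamma function).
    eps = 1e-3
    exact = gammainc(eps, 1.0) * gamma(eps)         # int_0^1 x^(eps-1) e^(-x) dx
    print(f"f(0)/eps + finite = {pole_coefficient / eps + finite_part:.4f}")
    print(f"exact at eps=1e-3 = {exact:.4f}")
    ```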

  11. Integration

    DEFF Research Database (Denmark)

    Emerek, Ruth

    2004-01-01

    The contribution discusses the different conceptions of integration in Denmark - and what can be understood by successful integration.

  12. Bioprocess scale-up/down as integrative enabling technology: from fluid mechanics to systems biology and beyond.

    Science.gov (United States)

    Delvigne, Frank; Takors, Ralf; Mudde, Rob; van Gulik, Walter; Noorman, Henk

    2017-09-01

    Efficient optimization of microbial processes is a critical issue for achieving a number of sustainable development goals, considering the impact of microbial biotechnology in the agrofood, environmental, biopharmaceutical and chemical industries. Many of these applications require scale-up after proof of concept. However, the behaviour of microbial systems remains (at least partially) unpredictable when shifting from laboratory scale to industrial conditions. Robust microbial systems are thus greatly needed in this context, as is a better understanding of the interactions between fluid mechanics and cell physiology. For that purpose, a full scale-up/down computational framework is already available. This framework links computational fluid dynamics (CFD), metabolic flux analysis and agent-based modelling (ABM) for a better understanding of cell lifelines in a heterogeneous environment. Ultimately, this framework can be used for the design of scale-down simulators and/or metabolically engineered cells able to cope with the environmental fluctuations typically found in large-scale bioreactors. However, this framework still needs some refinements, such as a better integration of gas-liquid flows in CFD, and taking into account intrinsic biological noise in ABM. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  13. Concepts: Integrating population survey data from different spatial scales, sampling methods, and species

    Science.gov (United States)

    Dorazio, Robert; Delampady, Mohan; Dey, Soumen; Gopalaswamy, Arjun M.; Karanth, K. Ullas; Nichols, James D.

    2017-01-01

    Conservationists and managers are continually under pressure from the public, the media, and political policy makers to provide “tiger numbers,” not just for protected reserves, but also for large spatial scales, including landscapes, regions, states, nations, and even globally. Estimating the abundance of tigers within relatively small areas (e.g., protected reserves) is becoming increasingly tractable (see Chaps. 9 and 10), but doing so for larger spatial scales still presents a formidable challenge. Those who seek “tiger numbers” are often not satisfied by estimates of tiger occupancy alone, regardless of the reliability of the estimates (see Chaps. 4 and 5). As a result, wherever tiger conservation efforts are underway, either substantially or nominally, scientists and managers are frequently asked to provide putative large-scale tiger numbers based either on a total count or on an extrapolation of some sort (see Chaps. 1 and 2).

  14. Building capacity for in-situ phenological observation data to support integrated biodiversity information at local to national scales

    Science.gov (United States)

    Weltzin, J. F.

    2016-12-01

    Earth observations from a variety of platforms and across a range of scales are required to support research, natural resource management, and policy- and decision-making in a changing world. Integrated earth observation data provides multi-faceted information critical to decision support, vulnerability and change detection, risk assessments, early warning and modeling, simulation and forecasting in the natural resource societal benefit area. The USA National Phenology Network (USA-NPN; www.usanpn.org) is a national-scale science and monitoring initiative focused on phenology - the study of seasonal life-cycle events such as leafing, flowering, reproduction, and migration - as a tool to understand the response of biodiversity to environmental variation and change. USA-NPN provides a hierarchical, national monitoring framework that enables other organizations to leverage the capacity of the Network for their own applications - minimizing investment and duplication of effort - while promoting interoperability and sustainability. Over the last decade, the network has focused on the development of a centralized database for in-situ (ground based) observations of plants and animals, now with 8 M records for the period 1954-present. More recently, we have developed a workflow for the production and validation of spatially gridded phenology products based on models that couple the organismal data with climatological and meteorological data at daily time-steps and relatively fine spatial resolutions (about 2.5 km to 4 km). These gridded data are now ripe for integration with other modeled or earth observation gridded data, e.g., indices of drought impact or land surface reflectance. This greatly broadens capacity to scale organismal observational data to landscapes and regions, and enables novel investigations of biophysical interactions at unprecedented scales, e.g., continental-scale migrations. Sustainability emerges from identification of stakeholder needs, segmentation of

  15. Bringing ISFM to scale through an integrated farm planning approach: a case study from Burundi

    NARCIS (Netherlands)

    Kessler, A.; Duivenbooden, van N.; Nsabimana, F.; Beek, van C.L.

    2016-01-01

    Integrated soil fertility management (ISFM) is generally accepted as the most relevant paradigm for soil fertility improvement in the tropics. Successes however are mainly reported at plot level, while real impact at farm level and beyond remains scattered. As a consequence, many Sub-Saharan African

  16. Assessing Pre-Service Teacher Attitudes and Skills with the Technology Integration Confidence Scale

    Science.gov (United States)

    Browne, Jeremy

    2009-01-01

    As technology integration continues to gain importance, preservice teachers must develop higher levels of confidence and proficiency in using technology in their classrooms (Kay, 2006). The acceptance of the National Educational Technology Standards for Teachers (NETS-T) by National Council for Accreditation of Teacher Education (NCATE) has…

  17. Integration of wide scale renewable resources into the power delivery system

    International Nuclear Information System (INIS)

    2009-01-01

    The CD includes the 60 papers presented and discussed, which cover the following: - National experiences with wind power; - Impact of wind generation on planning; - Rules for connection of wind generation; grid codes; - Impact on operation: Forecasting wind generation; Stability, control; - Research, fields and labs; Modelling and simulation; Micro-grids; - Economics on integrating renewables and other general issues

  18. Integrating scientific knowledge into large-scale restoration programs: the CALFED Bay-Delta Program experience

    Science.gov (United States)

    Taylor, K.A.; Short, A.

    2009-01-01

    Integrating science into resource management activities is a goal of the CALFED Bay-Delta Program, a multi-agency effort to address water supply reliability, ecological condition, drinking water quality, and levees in the Sacramento-San Joaquin Delta of northern California. Under CALFED, many different strategies were used to integrate science, including interaction between the research and management communities, public dialogues about scientific work, and peer review. This paper explores ways science was (and was not) integrated into CALFED's management actions and decision systems through three narratives describing different patterns of scientific integration and application in CALFED. Though a collaborative process and certain organizational conditions may be necessary for developing new understandings of the system of interest, we find that those factors are not sufficient for translating that knowledge into management actions and decision systems. We suggest that the application of knowledge may be facilitated or hindered by (1) differences in the objectives, approaches, and cultures of scientists operating in the research community and those operating in the management community and (2) other factors external to the collaborative process and organization.

  19. Integrated analysis of the effects of agricultural management on nitrogen fluxes at landscape scale

    NARCIS (Netherlands)

    Kros, J.; Frumeau, K.F.A.; Hensen, A.; Vries, de W.

    2011-01-01

    The integrated modelling system INITIATOR was applied to a landscape in the northern part of the Netherlands to assess current nitrogen fluxes to air and water and the impact of various agricultural measures on these fluxes, using spatially explicit input data on animal numbers, land use,

  20. Data-driven integration of genome-scale regulatory and metabolic network models

    Science.gov (United States)

    Imam, Saheed; Schäuble, Sascha; Brooks, Aaron N.; Baliga, Nitin S.; Price, Nathan D.

    2015-01-01

    Microbes are diverse and extremely versatile organisms that play vital roles in all ecological niches. Understanding and harnessing microbial systems will be key to the sustainability of our planet. One approach to improving our knowledge of microbial processes is through data-driven and mechanism-informed computational modeling. Individual models of biological networks (such as metabolism, transcription, and signaling) have played pivotal roles in driving microbial research through the years. These networks, however, are highly interconnected and function in concert—a fact that has led to the development of a variety of approaches aimed at simulating the integrated functions of two or more network types. Though the task of integrating these different models is fraught with new challenges, the large amounts of high-throughput data sets being generated, and algorithms being developed, means that the time is at hand for concerted efforts to build integrated regulatory-metabolic networks in a data-driven fashion. In this perspective, we review current approaches for constructing integrated regulatory-metabolic models and outline new strategies for future development of these network models for any microbial system. PMID:25999934

  1. The application of J integral to measure cohesive laws under large-scale yielding

    DEFF Research Database (Denmark)

    Goutianos, Stergios; Sørensen, Bent F.

    2016-01-01

    A method is developed to obtain the mode I cohesive law of elastic-plastic materials using a Double Cantilever Beam sandwich specimen loaded with pure bending moments. The approach is based on the validity of the J integral for materials having a non-linear stress-strain relationship without...

  2. Developing a Scale for Teacher Integration of Information and Communication Technology in Grades 1-9

    Science.gov (United States)

    Hsu, S.

    2010-01-01

    There is no unified view about how teachers' integration of information and communication technology (ICT) should be measured. While many instruments have focused on the technological aspects, recent studies have suggested teachers' pedagogical considerations, professional development, and emerging ethical and safety issues should be included when…

  3. Data-driven integration of genome-scale regulatory and metabolic network models

    Directory of Open Access Journals (Sweden)

    Saheed eImam

    2015-05-01

    Full Text Available Microbes are diverse and extremely versatile organisms that play vital roles in all ecological niches. Understanding and harnessing microbial systems will be key to the sustainability of our planet. One approach to improving our knowledge of microbial processes is through data-driven and mechanism-informed computational modeling. Individual models of biological networks (such as metabolism, transcription and signaling) have played pivotal roles in driving microbial research through the years. These networks, however, are highly interconnected and function in concert – a fact that has led to the development of a variety of approaches aimed at simulating the integrated functions of two or more network types. Though the task of integrating these different models is fraught with new challenges, the large amounts of high-throughput data sets being generated, and algorithms being developed, means that the time is at hand for concerted efforts to build integrated regulatory-metabolic networks in a data-driven fashion. In this perspective, we review current approaches for constructing integrated regulatory-metabolic models and outline new strategies for future development of these network models for any microbial system.

  4. 75 FR 24742 - In the Matter of Certain Large Scale Integrated Circuit Semiconductor Chips and Products...

    Science.gov (United States)

    2010-05-05

    ... Semiconductor, Xiqing Integrated Semiconductor, Manufacturing Site, No. 15 Xinghua Road, Xiqing Economic... Malaysia Sdn. Bhd., NO. 2 Jalan SS 8/2, Free Industrial Zone, Sungai Way, 47300 Petaling Jaya, Selengor, Malaysia. Freescale Semiconductor Pte. Ltd., 7 Changi South Street 2, 03-00, Singapore 486415. Freescale...

  5. Integrating science into governance and management of coastal areas at urban scale

    CSIR Research Space (South Africa)

    Celliers, Louis

    2012-10-01

    Full Text Available and development planning (CSDP) is no longer an option but a necessity. Current legislation devolves many fine-scale planning and management functions within coastal urban centres to local authorities, including land-use and urban and economic development... (L. Celliers, S. Taljaard and R. van Ballegooyen, CSIR, PO Box 395, Pretoria, South Africa, 0001; email: lcelliers@csir.co.za; www.csir.co.za) BACKGROUND: With burgeoning demand for coastal space...

  6. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    Directory of Open Access Journals (Sweden)

    Steiakakis Chrysanthos

    2016-01-01

    Full Text Available The geotechnical challenges for safe slope design in large scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden to ore ratio and therefore dramatically improve the economics of the operation, while large scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to safely operate such pits. Appropriate data management, processing and storage are critical to ensure timely and informed decisions.

  7. Physico-topological methods of increasing stability of the VLSI circuit components to irradiation. Fiziko-topologhicheskie sposoby uluchsheniya radiatsionnoj stojkosti komponentov BIS

    Energy Technology Data Exchange (ETDEWEB)

    Pereshenkov, V S [MIFI, Moscow, (Russian Federation); Shishianu, F S; Rusanovskij, V I [S. Lazo KPI, Chisinau, (Moldova, Republic of)

    1992-01-01

    The paper presents the method used and the experimental results obtained for an 8-bit microprocessor irradiated with γ-rays and neutrons. The correlation of the electrical and technological parameters with the irradiation parameters is revealed. The influence of leakage current between devices incorporated in VLSI circuits was studied. The results obtained make it possible to determine the technological parameters necessary for designing circuits able to work at predetermined doses. The substrate doping concentration necessary for the isolation that eliminates leakage current between devices and prevents VLSI circuit breakdown was determined. (Author).

  8. Operation strategy for a lab-scale grid-connected photovoltaic generation system integrated with battery energy storage

    International Nuclear Information System (INIS)

    Jou, Hurng-Liahng; Chang, Yi-Hao; Wu, Jinn-Chang; Wu, Kuen-Der

    2015-01-01

    Highlights: • The operation strategy for a grid-connected PV generation system integrated with battery energy storage is proposed. • The PV system is composed of an inverter and two DC-DC converters. • The negative impact of grid-connected PV generation systems on the grid can be alleviated by integrating a battery. • The operation of the developed system can be divided into nine modes. - Abstract: The operation strategy for a lab-scale grid-connected photovoltaic generation system integrated with battery energy storage is proposed in this paper. The photovoltaic generation system is composed of a full-bridge inverter, a DC-DC boost converter, an isolated bidirectional DC-DC converter, a solar cell array and a battery set. Since the battery set acts as an energy buffer to adjust the power generation of the solar cell array, the negative impact on power quality caused by the intermittent and unstable output power of a solar cell array is alleviated, and the penetration rate of the grid-connected photovoltaic generation system is thereby increased. A lab-scale prototype is developed to verify the performance of the system. The experimental results show that it achieves the expected performance

  9. Ultra-Fine Scale Spatially-Integrated Mapping of Habitat and Occupancy Using Structure-From-Motion.

    Directory of Open Access Journals (Sweden)

    Philip McDowall

    Full Text Available Organisms respond to and often simultaneously modify their environment. While these interactions are apparent at the landscape extent, the driving mechanisms often occur at very fine spatial scales. Structure-from-Motion (SfM), a computer vision technique, allows the simultaneous mapping of organisms and fine scale habitat, and will greatly improve our understanding of habitat suitability, ecophysiology, and the bi-directional relationship between geomorphology and habitat use. SfM can be used to create high-resolution (centimeter-scale) three-dimensional (3D) habitat models at low cost. These models can capture the abiotic conditions formed by terrain and simultaneously record the position of individual organisms within that terrain. While coloniality is common in seabird species, we have a poor understanding of the extent to which dense breeding aggregations are driven by fine-scale active aggregation or limited suitable habitat. We demonstrate the use of SfM for fine-scale habitat suitability by reconstructing the locations of nests in a gentoo penguin colony and fitting models that explicitly account for conspecific attraction. The resulting digital elevation models (DEMs) are used as covariates in an inhomogeneous hybrid point process model. We find that gentoo penguin nest site selection is a function of the topography of the landscape, but that nests are far more aggregated than would be expected based on terrain alone, suggesting a strong role of behavioral aggregation in driving coloniality in this species. This integrated mapping of organisms and fine scale habitat will greatly improve our understanding of fine-scale habitat suitability, ecophysiology, and the complex bi-directional relationship between geomorphology and habitat use.

  10. Prospective and participatory integrated assessment of agricultural systems from farm to regional scales: Comparison of three modeling approaches.

    Science.gov (United States)

    Delmotte, Sylvestre; Lopez-Ridaura, Santiago; Barbier, Jean-Marc; Wery, Jacques

    2013-11-15

    Evaluating the impacts of the development of alternative agricultural systems, such as organic or low-input cropping systems, in the context of an agricultural region requires the use of specific tools and methodologies. These should allow a prospective (using scenarios), multi-scale (taking into account the field, farm and regional levels), integrated (notably multicriteria) and participatory assessment, abbreviated PIAAS (Participatory Integrated Assessment of Agricultural Systems). In this paper, we compare the possible contributions to PIAAS of three modeling approaches, i.e. Bio-Economic Modeling (BEM), Agent-Based Modeling (ABM) and statistical Land-Use/Land Cover Change (LUCC) models. After a presentation of each approach, we analyze their advantages and drawbacks, and identify their possible complementarities for PIAAS. Statistical LUCC modeling is a suitable approach for multi-scale analysis of past changes and can be used to start discussions about the future with stakeholders. The BEM and ABM approaches have complementary features for scenario assessment at different scales. While ABM has been widely used for participatory assessment, BEM has rarely been used satisfactorily in a participatory manner. On the basis of these results, we propose to combine these three approaches in a framework targeted at PIAAS. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. ARRA-Multi-Level Energy Storage and Controls for Large-Scale Wind Energy Integration

    Energy Technology Data Exchange (ETDEWEB)

    David Wenzhong Gao

    2012-09-30

    The project objective is to design an innovative energy storage architecture and associated controls for high wind penetration, to increase the reliability and market acceptance of wind power. The project goals are to facilitate wind energy integration at different levels by design and control of suitable energy storage systems. The three levels of the wind power system are: the balancing control center level, the wind power plant level, and the wind power generator level. Our scope is to smooth the wind power fluctuation while also ensuring adequate battery life. In the new hybrid energy storage system (HESS) design for wind power generation applications, the boundary levels of the state of charge of the battery and of the supercapacitor are used in the control strategy. In the controller, logic gates are also used to control the operating time durations of the battery. The sizing method is based on the average fluctuation of wind profiles at a specific wind station. The calculated battery size depends on the size of the supercapacitor, the state of charge of the supercapacitor, and battery wear. To accommodate the wind power fluctuation, a HESS consisting of a battery energy storage system (BESS) and a supercapacitor is adopted in this project. A probability-based power capacity specification approach for the BESS and supercapacitor is proposed. Through this method the capacities of the BESS and supercapacitor are properly designed to combine the high energy density of the BESS with the high power density of the supercapacitor. It turns out that the supercapacitor within the HESS deals with the high power fluctuations, which contributes to the extension of BESS lifetime, and it can handle the peaks in wind power fluctuations without the severe penalty of round-trip losses associated with a BESS. The proposed approach has been verified based on real wind data from an existing wind power plant in Iowa.
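
    A common way to realize this battery/supercapacitor division of labor is frequency-based power allocation. The sketch below is not the project's controller (which uses state-of-charge boundary levels and logic gates); it is a minimal moving-average illustration, and the window length and wind profile are hypothetical.

        # Hedged sketch: filter-based power split for a hybrid energy storage
        # system. A moving-average low-pass filter sends slow deviations to the
        # battery and the fast residual to the supercapacitor, echoing the idea
        # that the supercapacitor absorbs high-frequency fluctuations to extend
        # battery life.

        import numpy as np

        def hess_split(p_wind: np.ndarray, window: int = 60):
            """Return (p_grid, p_batt, p_sc): smoothed output and storage commands.

            Positive storage power = discharging, negative = charging.
            """
            kernel = np.ones(window) / window
            p_grid = np.convolve(p_wind, kernel, mode="same")     # smoothed dispatch
            deviation = p_grid - p_wind                           # storage covers this
            p_batt = np.convolve(deviation, kernel, mode="same")  # slow component
            p_sc = deviation - p_batt                             # fast residual
            return p_grid, p_batt, p_sc

        # Hypothetical noisy wind profile (MW), sampled once per second.
        t = np.arange(3600)
        p_wind = (10 + 2 * np.sin(2 * np.pi * t / 900)
                  + np.random.default_rng(1).normal(0, 0.5, t.size))
        p_grid, p_batt, p_sc = hess_split(p_wind)
        # The supercapacitor command is fast but small in energy; the battery's is slow.
        print(f"battery std {p_batt.std():.2f} MW, supercap std {p_sc.std():.2f} MW")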

  12. Integrated watershed- and farm-scale modeling framework for targeting critical source areas while maintaining farm economic viability.

    Science.gov (United States)

    Ghebremichael, Lula T; Veith, Tamie L; Hamlett, James M

    2013-01-15

    Quantitative risk assessments of pollution and data related to the effectiveness of mitigating best management practices (BMPs) are important aspects of nonpoint source pollution control efforts, particularly those driven by specific water quality objectives and measurable improvement goals, such as total maximum daily load (TMDL) requirements. Targeting critical source areas (CSAs) that generate disproportionately high pollutant loads within a watershed is a crucial step in successfully controlling nonpoint source pollution. The importance of watershed simulation models in assisting with quantitative assessments of CSAs of pollution (relative to their magnitudes and extents) and of the effectiveness of associated BMPs is well recognized. However, because of the distinct disconnect between the hydrological scale at which these models conduct their evaluation and the farm scale at which feasible BMPs are actually selected and implemented, and because of the difficulty and uncertainty involved in transferring watershed model data to farm fields, these tools see limited practical application by conservation specialists in current nonpoint source pollution control efforts for delineating CSAs and planning targeting measures. Few approaches have been developed that can assess the impacts of CSA-targeted BMPs on farm productivity and profitability together with the water quality improvements expected from applying these measures. This study developed a modeling framework that integrates farm economics and environmental aspects (such as identification and mitigation of CSAs) through joint use of watershed- and farm-scale models in a closed feedback loop. The integration of models in a closed feedback loop provides a way for environmental changes to be evaluated with regard to their impact on the practical aspects of farm management and economics, adjusted or reformulated as necessary, and re-evaluated with respect to effectiveness of
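
    A skeletal version of that closed feedback loop, with hypothetical stub functions standing in for the watershed simulator and the farm economics model (all names, loads and costs are invented for illustration):

        # Hedged sketch: couple a watershed model and a farm economics model in
        # a closed loop, targeting the critical source area (CSA) with the
        # highest load while keeping the farm economically viable.

        def watershed_model(bmp_plan):
            """Return per-field pollutant loads (kg/yr) under a BMP plan (stub)."""
            base = {"field_A": 120.0, "field_B": 40.0, "field_C": 95.0}
            return {f: load * bmp_plan.get(f, 1.0) for f, load in base.items()}

        def farm_model(bmp_plan):
            """Return farm net income (stub): each applied BMP costs money."""
            return 50_000 - sum(8_000 for factor in bmp_plan.values() if factor < 1.0)

        LOAD_TARGET = 180.0      # hypothetical TMDL-style watershed load cap
        INCOME_FLOOR = 30_000    # farm must stay economically viable

        bmp_plan = {}            # field -> load-reduction factor (1.0 = no BMP)
        for _ in range(10):      # iterate until both constraints are met
            loads = watershed_model(bmp_plan)
            if sum(loads.values()) <= LOAD_TARGET:
                break
            worst = max(loads, key=loads.get)            # highest-load CSA
            candidate = dict(bmp_plan, **{worst: 0.4})   # BMP cuts its load by 60%
            if farm_model(candidate) >= INCOME_FLOOR:    # feedback: check economics
                bmp_plan = candidate
            else:
                break                                    # reformulate the plan instead

        print(bmp_plan, farm_model(bmp_plan))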

  13. Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm

    Science.gov (United States)

    Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun

    2017-10-01

    A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on the analysis of the 3D braided microstructure, a multi-scale finite element model is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, as validated by comparing the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structural parameters, the nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer moulding (RTM) process.
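
    The braiding-angle effect can be previewed with the classical off-axis modulus formula for a unidirectional yarn. This is far simpler than the paper's multi-scale finite element model, and the material constants below are hypothetical carbon/epoxy values, but it shows the trend the simulations quantify: axial stiffness drops quickly as the braiding angle grows.

        # Hedged sketch: classical off-axis modulus of a yarn rotated by the
        # braiding angle (1/E_x = c^4/E1 + (1/G12 - 2*nu12/E1) s^2 c^2 + s^4/E2).
        # Illustration only; not the paper's model.

        import math

        E1, E2 = 130e9, 8e9     # Pa: longitudinal / transverse yarn moduli
        G12, NU12 = 4e9, 0.3    # Pa, dimensionless

        def off_axis_modulus(theta_deg: float) -> float:
            """Effective axial modulus of a yarn at the given braiding angle."""
            c = math.cos(math.radians(theta_deg))
            s = math.sin(math.radians(theta_deg))
            inv_E = (c**4 / E1
                     + (1 / G12 - 2 * NU12 / E1) * s**2 * c**2
                     + s**4 / E2)
            return 1 / inv_E

        for angle in (15, 25, 35, 45):
            print(f"{angle:2d} deg: {off_axis_modulus(angle) / 1e9:6.1f} GPa")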

  14. Integral large scale experiments on hydrogen combustion for severe accident code validation-HYCOM

    International Nuclear Information System (INIS)

    Breitung, W.; Dorofeev, S.; Kotchourko, A.; Redlinger, R.; Scholtyssek, W.; Bentaib, A.; L'Heriteau, J.-P.; Pailhories, P.; Eyink, J.; Movahed, M.; Petzold, K.-G.; Heitsch, M.; Alekseev, V.; Denkevits, A.; Kuznetsov, M.; Efimenko, A.; Okun, M.V.; Huld, T.; Baraldi, D.

    2005-01-01

    A joint research project was carried out in the EU Fifth Framework Programme concerning hydrogen risk in a nuclear power plant. The goals were: firstly, to create a new database of results on hydrogen combustion experiments in the slow to turbulent combustion regimes; secondly, to validate the partners' CFD and lumped-parameter codes on the experimental data, and to evaluate suitable parameter sets for application calculations; thirdly, to conduct a benchmark exercise by applying the codes to the full-scale analysis of a postulated hydrogen combustion scenario in a light water reactor containment after a core melt accident. The paper describes the work programme of the project and the partners' activities. Significant progress has been made in the experimental area, where test series in medium- and large-scale facilities have been carried out with the focus on specific effects of scale, multi-compartment geometry, heat losses and venting. The data were used for the validation of the partners' CFD and lumped-parameter codes, which included blind predictive calculations and pre- and post-test intercomparison exercises. Finally, a benchmark exercise was conducted by applying the codes to the full-scale analysis of a hydrogen combustion scenario. The comparison and assessment of the results of the validation phase and of the challenging containment calculation exercise allow deep insight into the quality, capabilities and limits of the CFD and lumped-parameter tools currently in use at various research laboratories.

  15. Initial Economic Analysis of Utility-Scale Wind Integration in Hawaii

    Energy Technology Data Exchange (ETDEWEB)

    2012-03-01

    This report summarizes an analysis, conducted by the National Renewable Energy Laboratory (NREL) in May 2010, of the economic characteristics of a particular utility-scale wind project configuration that has been referred to as the 'Big Wind' project.

  16. BPS ZN string tensions, sine law and Casimir scaling, and integrable field theories

    International Nuclear Information System (INIS)

    Kneipp, Marco A. C.

    2007-01-01

    We consider a Yang-Mills-Higgs theory with spontaneous symmetry breaking of the gauge group G → U(1)^r → C_G, with C_G being the center of G. We study two vacuum solutions of the theory which produce this symmetry breaking. We show that for one of these vacua, the theory in the Coulomb phase has a mass spectrum of particles and monopoles which is exactly the same as the mass spectrum of particles and solitons of two-dimensional affine Toda field theory, for suitable coupling constants. This result holds also for N = 4 super Yang-Mills theories. On the other hand, in the Higgs phase, we show that for each of the two vacua the ratios of the tensions of the BPS Z_N strings satisfy either Casimir scaling or the sine law scaling for G = SU(N). These results are extended to other gauge groups: for Casimir scaling, the ratios of the tensions are equal to the ratios of the quadratic Casimir constants of specific representations; for the sine law scaling, the tensions are proportional to the components of the left Perron-Frobenius eigenvector of the Cartan matrix K_ij, and the ratios of tensions are equal to the ratios of the soliton masses of affine Toda field theories.
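
    For G = SU(N) the two scaling laws have standard closed forms in the k-string literature, stated here for orientation rather than quoted from the paper: the tension of a charge-k string relative to the fundamental (k = 1) string is

        % Tension ratios for BPS Z_N strings in SU(N), k = 1, ..., N-1.
        % Casimir scaling follows the quadratic Casimir of the k-th
        % antisymmetric representation; the sine law follows sin(pi k / N).
        \[
          \left.\frac{T_k}{T_1}\right|_{\text{Casimir}} = \frac{k\,(N-k)}{N-1},
          \qquad
          \left.\frac{T_k}{T_1}\right|_{\text{sine}} = \frac{\sin(\pi k/N)}{\sin(\pi/N)},
          \qquad k = 1,\dots,N-1.
        \]

    Both ratios reduce to 1 at k = 1 and are symmetric under k → N − k, as required by charge conjugation of the Z_N flux.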

  17. Scaling up Sexuality Education in Senegal: Integrating Family Life Education into the National Curriculum

    Science.gov (United States)

    Chau, Katie; Traoré Seck, Aminata; Chandra-Mouli, Venkatraman; Svanemyr, Joar

    2016-01-01

    In Senegal, school-based sexuality education has evolved over 20 years from family life education (FLE) pilot projects into cross-curricular subjects located within the national curriculum of primary and secondary schools. We conducted a literature review and semi-structured interviews to gather information regarding the scale and nature of FLE…

  18. Review of broad-scale drought monitoring of forests: Toward an integrated data mining approach

    Science.gov (United States)

    Steve Norman; Frank H. Koch; William W. Hargrove

    2016-01-01

    Efforts to monitor the broad-scale impacts of drought on forests often come up short. Drought is a direct stressor of forests as well as a driver of secondary disturbance agents, making a full accounting of drought impacts challenging. General impacts can be inferred from moisture deficits quantified using precipitation and temperature measurements. However,...
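
    As a toy version of inferring moisture deficit from precipitation and temperature (operational monitors use indices such as SPEI or PDSI; the potential-evapotranspiration proxy below is deliberately crude and hypothetical):

        # Hedged sketch: a crude monthly moisture-deficit index of the kind the
        # passage alludes to, comparing precipitation supply against a
        # temperature-driven demand proxy. Illustration only.

        def moisture_deficit(precip_mm, temp_c):
            """Sum of monthly shortfalls of precipitation below a PET proxy."""
            deficit = 0.0
            for p, t in zip(precip_mm, temp_c):
                pet = max(0.0, 5.0 * t)          # toy proxy: warmer => more demand
                deficit += max(0.0, pet - p)     # only shortfalls count
            return deficit

        # Hypothetical growing-season data (Apr-Sep), mm and deg C.
        print(moisture_deficit([60, 40, 25, 10, 15, 30], [10, 15, 20, 24, 23, 18]))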

  19. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    Science.gov (United States)

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate, if strong cross-scale relationships predominate. Here, we propose an 'across-scale-approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.
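
    A minimal cellular automaton in the spirit of the grassland example, showing how purely local (bottom-up) rules yield community-level cover as an emergent property; the grid size, rules and probabilities are hypothetical, not those of the cited study.

        # Hedged sketch: grass cells die at a background rate and empty cells
        # are colonized in proportion to occupied neighbors, so landscape-level
        # cover emerges from local interactions. Illustration only.

        import random

        random.seed(42)
        SIZE, STEPS = 30, 50
        grid = [[random.random() < 0.3 for _ in range(SIZE)] for _ in range(SIZE)]

        def neighbors(grid, i, j):
            """Count occupied von Neumann neighbors (wrapping edges)."""
            return sum(grid[(i + di) % SIZE][(j + dj) % SIZE]
                       for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

        for _ in range(STEPS):
            nxt = [[False] * SIZE for _ in range(SIZE)]
            for i in range(SIZE):
                for j in range(SIZE):
                    n = neighbors(grid, i, j)
                    if grid[i][j]:
                        nxt[i][j] = random.random() > 0.05       # background mortality
                    else:
                        nxt[i][j] = random.random() < 0.2 * n    # local colonization
            grid = nxt

        cover = sum(map(sum, grid)) / SIZE**2
        print(f"final grass cover: {cover:.0%}")   # emergent, not prescribed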

  20. Integrating habitat restoration and fisheries management : A small-scale case-study to support EEL conservation at the global scale

    Directory of Open Access Journals (Sweden)

    Ciccotti E.

    2013-02-01

    Full Text Available The aim of this work was to develop a methodological framework for the management of local eel stocks that integrates habitat restoration with optimal fishery management. The Bolsena lake (Viterbo, Italy) and its outflow, the river Marta, were taken as a reference system. The river flows into the Mediterranean Sea, but its course is fragmented by a number of dams built in the past century, preventing eel migration from and to the sea. The eel fishery in the Bolsena lake is thus sustained by periodic stocking of glass eels caught at the Marta river estuary. A detailed demographic model was applied to simulate fishery yields and potential spawner escapement under different recruitment and management scenarios. It was estimated that the high exploitation rates occurring in the nineties reduced the potential spawner escapement from the Bolsena lake to less than 1 t; under current harvesting rates, the potential spawner escapement is estimated at about 12 t, while in pristine conditions (i.e. high recruitment and no fishing) the estimated spawner escapement is about 21 t. This analysis thus showed that current fishery management would comply with the 40% spawner escapement requirement of EU regulation 1100/2007 if the connections between the Bolsena lake outflow and the sea were fully re-established. This confirms the value of an integrated approach to management at the catchment scale for eel populations, which will hopefully contribute to the conservation of the global stock.
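
    A toy cohort model of the kind such demographic analyses build on, reduced to exponential survival under natural and fishing mortality; the 12-year maturation span and all rates are hypothetical stand-ins for the paper's detailed model.

        # Hedged sketch: silver-eel escapement from one stocked glass-eel
        # cohort, compared between a fished and an unfished (pristine) scenario.
        # All parameters are invented for illustration.

        import math

        def escapement(recruits: float, years: int = 12,
                       m_nat: float = 0.15, f_fish: float = 0.04) -> float:
            """Escapement from one cohort under constant instantaneous mortality.

            recruits: stocked glass eels; m_nat / f_fish: natural / fishing
            mortality per year.
            """
            return recruits * math.exp(-(m_nat + f_fish) * years)

        stocked = 1_000_000                       # hypothetical glass eels per year
        pristine = escapement(stocked, f_fish=0.0)
        managed = escapement(stocked)             # with fishing mortality
        print(f"escapement relative to pristine: {managed / pristine:.0%}")
        # EU regulation 1100/2007 uses 40% of pristine escapement as the target.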