WorldWideScience

Sample records for grid array process

  1. Column Grid Array Rework for High Reliability

    Science.gov (United States)

    Mehta, Atul C.; Bodie, Charles C.

    2008-01-01

    Due to requirements for reduced size and weight, the use of grid array packages in space applications has become commonplace. To meet the requirements of high reliability and a high number of I/Os, ceramic column grid array (CCGA) packages were selected for major electronic components used in the next Mars Rover mission (specifically, high-density field programmable gate arrays). The probability of removal and replacement of these devices on the actual flight printed wiring board assemblies is deemed to be very high because last-minute discoveries in final test will dictate changes in the firmware. The questions and challenges presented to the manufacturing organizations engaged in the production of high-reliability electronic assemblies are: Is the reliability of the PWBA adversely affected by rework (removal and replacement) of the CGA package? And how many times can the same board be reworked without destroying a pad or degrading the lifetime of the assembly? To answer these questions, the most complex printed wiring board assembly used by the project was chosen as the test vehicle; the PWB was modified to provide a daisy-chain pattern, and a number of bare PWBs were acquired to this modified design. Non-functional 624-pin CGA packages with internal daisy chains matching the pattern on the PWB were procured. The combination of the modified PWB and the daisy-chained packages enables continuity measurements of every soldered contact during subsequent testing and thermal cycling. Several test vehicle boards were assembled, reworked, and then thermally cycled to assess the reliability of the solder joints and board material, including pads and traces near the CGA. The details of the rework process and the results of thermal cycling are presented in this paper.

  2. Backshort-Under-Grid arrays for infrared astronomy

    Science.gov (United States)

    Allen, C. A.; Benford, D. J.; Chervenak, J. A.; Chuss, D. T.; Miller, T. M.; Moseley, S. H.; Staguhn, J. G.; Wollack, E. J.

    2006-04-01

    We are developing a kilopixel, filled bolometer array for space infrared astronomy. The array consists of three individual components, to be merged into a single working unit: (1) a transition edge sensor bolometer array, operating in the millikelvin regime, (2) a quarter-wave backshort grid, and (3) a superconducting quantum interference device multiplexer readout. The detector array is designed as a filled, square grid of suspended silicon bolometers with superconducting sensors. The backshort arrays are fabricated separately and will be positioned in the cavities created behind each detector during fabrication. The grids have a unique interlocking feature machined into the walls for positioning and mechanical stability. The spacing of the backshort beneath the detector grid can be set from ~30-300 μm by independently adjusting two process parameters during fabrication. The ultimate goal is to develop a large-format array architecture with background-limited sensitivity, suitable for a wide range of wavelengths and applications, to be directly bump bonded to a multiplexer circuit. We have produced prototype two-dimensional arrays having 8×8 detector elements. We present the detector design, a fabrication overview, and assembly technologies.

  3. 3D Printing of Ball Grid Arrays

    Science.gov (United States)

    Sinha, Shayandev; Hines, Daniel; Dasgupta, Abhijit; Das, Siddhartha

    Ball grid arrays (BGAs) are interconnects between an integrated circuit (IC) and a printed circuit board (PCB) that are used for surface mounting electronic components. Typically, lead-free alloys are used to make solder balls which, after a reflow process, establish a mechanical and electrical connection between the IC and the PCB. High-temperature processing is required for most of these alloys, leading to thermal shock that can damage ICs. For producing flexible circuits on a polymer substrate, there is a requirement for low-temperature processing capabilities (around 150 °C) and for reducing strain from mechanical stresses. Additive manufacturing techniques can provide an alternative methodology for fabricating BGAs as a direct replacement for standard solder-bumped BGAs. We have developed aerosol jet (AJ) printing methods to fabricate a polymer-bumped BGA. As a demonstration of the process developed, a daisy chain test chip was polymer bumped using an AJ-printed ultraviolet (UV) curable polymer ink that was then coated with an AJ-printed silver nanoparticle-laden ink as a conducting layer printed over the polymer bump. The structure of the balls was achieved by printing the polymer ink using a specific toolpath coupled with in-situ UV curing of the polymer, which provided good control over the shape, resulting in well-formed spherical bumps on the order of 200 μm wide by 200 μm tall for this initial demonstration. A detailed discussion of the AJ printing method and results from accelerated lifetime testing will be presented.

  4. Ceramic ball grid array package stress analysis

    Science.gov (United States)

    Badri, S. H. B. S.; Aziz, M. H. A.; Ong, N. R.; Sauli, Z.; Alcain, J. B.; Retnasamy, V.

    2017-09-01

    The ball grid array (BGA), a form of chip scale package (CSP), was developed as one of the most advanced surface mount devices; it may be assembled by an ordinary surface mount process, and solder ball bumps are used instead of plated nickel and gold (Ni/Au) bumps. Assembly and reliability of the BGA's printed circuit board (PCB), which is soldered by conventional surface mount technology, are considered in this study. The Ceramic Ball Grid Array (CBGA) is a rectangular or square-shaped ceramic package that uses solder balls for external electrical connections instead of leads or wires. The solder balls are arranged in an array or grid at the bottom of the ceramic package body. In this study, ANSYS software is used to investigate the stress on the package for 2-ball and 4-ball CBGA packages, with forces ranging from 1 N to 3 N applied to the top of the die, the top of the substrate and the side of the substrate. The highest maximum stress was analyzed, and the maximum equivalent stress was observed on the solder ball and the die. From the simulation results, the CBGA package with fewer solder balls experiences higher stress than the package with more solder balls. Therefore, a smaller number of solder balls on the CBGA package results in higher stress and critically affects the reliability of the solder balls themselves, the substrate and the die, which can lead to solder cracks and also die cracks.

  5. Far infrared through millimeter backshort-under-grid arrays

    Science.gov (United States)

    Allen, Christine A.; Abrahams, John; Benford, Dominic J.; Chervenak, James A.; Chuss, David T.; Staguhn, Johannes G.; Miller, Timothy M.; Moseley, S. Harvey; Wollack, Edward J.

    2006-06-01

    We are developing a large-format, versatile bolometer array for a wide range of infrared through millimeter astronomical applications. The array design consists of three key components: superconducting transition edge sensor bolometer arrays, quarter-wave reflective backshort grids, and Superconducting Quantum Interference Device (SQUID) multiplexer readouts. The detector array is a filled, square grid of bolometers with superconducting sensors. The backshort arrays are fabricated separately and are positioned in the etch cavities behind the detector grid. The grids have unique three-dimensional interlocking features micromachined into the walls for positioning and mechanical stability. The ultimate goal of the program is to produce large-format arrays with background-limited sensitivity, suitable for a wide range of wavelengths and applications. Large-format (kilopixel) arrays will be directly indium bump bonded to a SQUID multiplexer circuit. We have produced and tested 8×8 arrays of 1 mm detectors to demonstrate proof of concept. 8×16 arrays of 2 mm detectors are being produced for a new Goddard Space Flight Center instrument. We have also produced models of a kilopixel detector grid and a dummy multiplexer chip for bump bonding development. We present a detector design overview, several unique fabrication highlights, and assembly technologies.

  6. Sensor array signal processing

    CERN Document Server

    Naidu, Prabhakar S

    2009-01-01

    Chapter One: An Overview of Wavefields. 1.1 Types of Wavefields and the Governing Equations; 1.2 Wavefield in open space; 1.3 Wavefield in bounded space; 1.4 Stochastic wavefield; 1.5 Multipath propagation; 1.6 Propagation through random medium; 1.7 Exercises. Chapter Two: Sensor Array Systems. 2.1 Uniform linear array (ULA); 2.2 Planar array; 2.3 Distributed sensor array; 2.4 Broadband sensor array; 2.5 Source and sensor arrays; 2.6 Multi-component sensor array; 2.7 Exercises. Chapter Three: Frequency Wavenumber Processing. 3.1 Digital filters in the w-k domain; 3.2 Mapping of 1D into 2D filters; 3.3 Multichannel Wiener filters; 3.4 Wiener filters for ULA and UCA; 3.5 Predictive noise cancellation; 3.6 Exercises. Chapter Four: Source Localization: Frequency Wavenumber Spectrum. 4.1 Frequency wavenumber spectrum; 4.2 Beamformation; 4.3 Capon's w-k spectrum; 4.4 Maximum entropy w-k spectrum; 4.5 Doppler-Azimuth Processing; 4.6 Exercises. Chapter Five: Source Localization: Subspace Methods. 5.1 Subspace methods (Narrowband); 5.2 Subspace methods (B...

  7. Model Building of Photovoltaic Array with MPPT Function and Research on Single Phase Grid Connected

    Directory of Open Access Journals (Sweden)

    Li Zhengzhou

    2016-01-01

    Full Text Available With the continued development of solar photovoltaic technology, research on distributed grid-connected photovoltaic systems has become a focus in the field of photovoltaic grid power plants, and computer simulation is an effective technical means for such studies. On the basis of the photovoltaic array output characteristic equation, a maximum power control simulation model of the photovoltaic array, based on an M function, is established using MATLAB/Simulink, and a simulation model of a single-phase grid-connected photovoltaic array is proposed. This approach overcomes the shortcomings of building the PV array model from the Simulink component library and provides a basis for system simulation, guiding theoretical research and system design.
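
    The record above describes a MATLAB/Simulink model; as a rough illustration of the same two ingredients (a PV output characteristic equation and maximum power point tracking), the sketch below uses an idealized single-diode characteristic and a perturb-and-observe MPPT loop in Python. All module parameters are assumptions for illustration and are not taken from the paper.

```python
# Sketch: idealized PV characteristic + perturb-and-observe MPPT (illustrative parameters).
import numpy as np

I_PH = 8.0      # photo-generated current at the assumed irradiance [A]
I_0 = 1e-9      # diode saturation current [A]
N_S = 60        # series-connected cells
V_T = 0.0258    # thermal voltage per cell near 25 degrees C [V]
N_IDEAL = 1.3   # diode ideality factor

def pv_current(v):
    """Ideal single-diode characteristic (series/shunt resistance neglected)."""
    return I_PH - I_0 * (np.exp(v / (N_IDEAL * N_S * V_T)) - 1.0)

def perturb_and_observe(v0=20.0, dv=0.2, steps=200):
    """Track the maximum power point by perturbing the operating voltage."""
    v, p_prev, direction = v0, 0.0, +1
    for _ in range(steps):
        p = v * pv_current(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
        v += direction * dv
    return v, p_prev

if __name__ == "__main__":
    v_mpp, p_mpp = perturb_and_observe()
    print(f"approximate MPP: {v_mpp:.1f} V, {p_mpp:.1f} W")
```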

  8. Laser-induced superhydrophobic grid patterns on PDMS for droplet arrays formation

    Energy Technology Data Exchange (ETDEWEB)

    Farshchian, Bahador [Ingram School of Engineering, Texas State University, San Marcos, TX 78666 (United States); Gatabi, Javad R. [Materials Science, Engineering and Commercialization, Texas State University, San Marcos, TX 78666 (United States); Bernick, Steven M.; Park, Sooyeon [Ingram School of Engineering, Texas State University, San Marcos, TX 78666 (United States); Lee, Gwan-Hyoung [Department of Materials Science and Engineering, Yonsei University, Seoul 03722 (Korea, Republic of); Droopad, Ravindranath [Ingram School of Engineering, Texas State University, San Marcos, TX 78666 (United States); Materials Science, Engineering and Commercialization, Texas State University, San Marcos, TX 78666 (United States); Kim, Namwon, E-mail: n_k43@txstate.edu [Ingram School of Engineering, Texas State University, San Marcos, TX 78666 (United States)

    2017-02-28

    Highlights: • Superhydrophobic grid patterns were processed on the surface of PDMS using a pulsed nanosecond laser. • Droplet arrays form instantly on the laser-patterned PDMS with the superhydrophobic grid pattern when the PDMS sample is simply immersed in and withdrawn from water. • Droplet size can be controlled by controlling the pitch size of the superhydrophobic grid and the withdrawal speed. - Abstract: We demonstrate a facile, single-step laser treatment process to render a polydimethylsiloxane (PDMS) surface superhydrophobic. By synchronizing a pulsed nanosecond laser source with a motorized stage, superhydrophobic grid patterns were written on the surface of PDMS. Hierarchical micro- and nanostructures were formed in the irradiated areas, while non-irradiated areas were covered by nanostructures due to deposition of ablated particles. Arrays of droplets form spontaneously on the laser-patterned PDMS with the superhydrophobic grid pattern when the PDMS sample is simply immersed in and withdrawn from water, due to the different wetting properties of the irradiated and non-irradiated areas. The effects of withdrawal speed and pitch size of the superhydrophobic grid on the size of the formed droplets were investigated experimentally. The droplet size increases initially with increasing withdrawal speed and then does not change significantly beyond a certain point. Moreover, larger droplets are formed by increasing the pitch size of the superhydrophobic grid. The droplet arrays formed on the laser-patterned PDMS with wettability contrast can potentially be used for patterning of particles, chemicals, and biomolecules, and also for cell screening applications.

  9. Volumetric display using a roof mirror grid array

    Science.gov (United States)

    Miyazaki, Daisuke; Hirano, Noboru; Maeda, Yuuki; Ohno, Keisuke; Maekawa, Satoshi

    2010-02-01

    A volumetric display system using a roof mirror grid array (RMGA) is proposed. The RMGA consists of a two-dimensional array of dihedral corner reflectors and forms a real image at a plane-symmetric position. A two-dimensional image formed with an RMGA is moved at high speed by a mirror scanner. Cross-sectional images of a three-dimensional object are displayed in accordance with the position of the image plane. A volumetric image can be observed as a stack of the cross-sectional images by high-speed scanning. Image formation by an RMGA is free from aberrations. Moreover, a compact optical system can be constructed because an RMGA does not have a focal length. An experimental volumetric display system using a galvanometer mirror and a digital micromirror device was constructed. The formation of a three-dimensional image consisting of 1024 × 768 × 400 voxels was confirmed with the experimental system.

  10. Innovative Columnar Type of Grid Array SJ BIST HALT Method, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Ridgetop will develop a superior method for testing and qualifying columnar type of grid arrays such as field programmable gate arrays (FPGAs) packaged in column...

  11. Technical Note: Rapid prototyping of 3D grid arrays for image guided therapy quality assurance

    International Nuclear Information System (INIS)

    Kittle, David; Holshouser, Barbara; Slater, James M.; Guenther, Bob D.; Pitsianis, Nikos P.; Pearlstein, Robert D.

    2008-01-01

    Three-dimensional grid phantoms offer a number of advantages for measuring imaging-related spatial inaccuracies for image guided surgery and radiotherapy. We examined the use of rapid prototyping technology for directly fabricating 3D grid phantoms from CAD drawings. We tested three different fabrication process/material combinations: photopolymer jet with acrylic resin (PJ/AR), selective laser sintering with polyamide (SLS/P), and fused deposition modeling with acrylonitrile butadiene styrene (FDM/ABS). The test objects consisted of rectangular arrays of control points formed by the intersections of posts and struts (2 mm rectangular cross section) spaced 8 mm apart in the x, y, and z directions. The PJ/AR phantom expanded after immersion in water, which resulted in permanent warping of the structure. The surface of the FDM/ABS grid exhibited a regular pattern of depressions and ridges from the extrusion process. SLS/P showed the best combination of build accuracy, surface finish, and stability. Based on these findings, a grid phantom for assessing machine-dependent and frame-induced MR spatial distortions was fabricated, to be used for quality assurance in stereotactic neurosurgical and radiotherapy procedures. The spatial uniformity of the SLS/P grid control point array was determined by CT imaging (0.6×0.6×0.625 mm³ resolution) and found suitable for the application, with over 97.5% of the control points located within 0.3 mm of the position specified in the CAD drawing and none of the points off by more than 0.4 mm. Rapid prototyping is a flexible and cost-effective alternative for the development of customized grid phantoms for medical physics quality assurance.
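
    As a companion to the spatial-uniformity check described above (control points compared against their CAD positions), the following is a minimal sketch of how such deviations could be computed from CT-segmented control-point centroids. The 8 mm pitch matches the record; the synthetic data and the assumption that the point cloud is already registered to the CAD frame are illustrative.

```python
# Sketch: deviation of measured control points from the nominal 8 mm CAD grid.
import numpy as np

PITCH_MM = 8.0  # nominal control-point spacing from the record

def grid_deviations(measured_xyz):
    """Distance of each measured point from its nearest nominal grid node,
    assuming the point cloud is already registered to the CAD frame."""
    pts = np.asarray(measured_xyz, dtype=float)
    nominal = np.round(pts / PITCH_MM) * PITCH_MM
    return np.linalg.norm(pts - nominal, axis=1)

# Synthetic stand-in for CT-segmented control-point centroids.
rng = np.random.default_rng(0)
ideal = np.stack(np.meshgrid(*[np.arange(0.0, 80.0, PITCH_MM)] * 3), -1).reshape(-1, 3)
measured = ideal + rng.normal(scale=0.1, size=ideal.shape)

d = grid_deviations(measured)
print(f"{(d <= 0.3).mean():.1%} of points within 0.3 mm, max deviation {d.max():.2f} mm")
```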

  12. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and from that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Consideration is also given to the important question of what use parallelism might be in the field of data processing. (orig.)

  13. Real Time Photovoltaic Array Simulator for Testing Grid-Connected PV Inverters

    DEFF Research Database (Denmark)

    Sera, Dezso; Valentini, Massimo; Raducu, Alin

    2008-01-01

    In this paper a real-time, flexible PV array simulator is presented. It is a system that can simulate different PV panel arrays under specific environmental conditions. To evaluate the performance of the Maximum Power Point Tracking (MPPT) of grid-connected photovoltaic (PV) inverters, only measurements undertaken with an appropriate PV array simulator provide accurate and reproducible results. Thus the PV array simulator has been developed and implemented. MPPT efficiency tests on a commercial grid-connected PV inverter have been performed to validate the PV array simulator.

  14. Ball-grid array architecture for microfabricated ion traps

    Science.gov (United States)

    Guise, Nicholas D.; Fallek, Spencer D.; Stevens, Kelly E.; Brown, K. R.; Volin, Curtis; Harter, Alexa W.; Amini, Jason M.; Higashi, Robert E.; Lu, Son Thai; Chanhvongsak, Helen M.; Nguyen, Thi A.; Marcus, Matthew S.; Ohnstein, Thomas R.; Youngner, Daniel W.

    2015-05-01

    State-of-the-art microfabricated ion traps for quantum information research are approaching nearly one hundred control electrodes. We report here on the development and testing of a new architecture for microfabricated ion traps, built around ball-grid array (BGA) connections, that is suitable for increasingly complex trap designs. In the BGA trap, through-substrate vias bring electrical signals from the back side of the trap die to the surface trap structure on the top side. Gold-ball bump bonds connect the back side of the trap die to an interposer for signal routing from the carrier. Trench capacitors fabricated into the trap die replace area-intensive surface or edge capacitors. Wirebonds in the BGA architecture are moved to the interposer. These last two features allow the trap die to be reduced to only the area required to produce trapping fields. The smaller trap dimensions allow tight focusing of an addressing laser beam for fast single-qubit rotations. Performance of the BGA trap as characterized with 40Ca+ ions is comparable to previous surface-electrode traps in terms of ion heating rate, mode frequency stability, and storage lifetime. We demonstrate two-qubit entanglement operations with 171Yb+ ions in a second BGA trap.

  15. Ball-grid array architecture for microfabricated ion traps

    International Nuclear Information System (INIS)

    Guise, Nicholas D.; Fallek, Spencer D.; Stevens, Kelly E.; Brown, K. R.; Volin, Curtis; Harter, Alexa W.; Amini, Jason M.; Higashi, Robert E.; Lu, Son Thai; Chanhvongsak, Helen M.; Nguyen, Thi A.; Marcus, Matthew S.; Ohnstein, Thomas R.; Youngner, Daniel W.

    2015-01-01

    State-of-the-art microfabricated ion traps for quantum information research are approaching nearly one hundred control electrodes. We report here on the development and testing of a new architecture for microfabricated ion traps, built around ball-grid array (BGA) connections, that is suitable for increasingly complex trap designs. In the BGA trap, through-substrate vias bring electrical signals from the back side of the trap die to the surface trap structure on the top side. Gold-ball bump bonds connect the back side of the trap die to an interposer for signal routing from the carrier. Trench capacitors fabricated into the trap die replace area-intensive surface or edge capacitors. Wirebonds in the BGA architecture are moved to the interposer. These last two features allow the trap die to be reduced to only the area required to produce trapping fields. The smaller trap dimensions allow tight focusing of an addressing laser beam for fast single-qubit rotations. Performance of the BGA trap as characterized with 40Ca+ ions is comparable to previous surface-electrode traps in terms of ion heating rate, mode frequency stability, and storage lifetime. We demonstrate two-qubit entanglement operations with 171Yb+ ions in a second BGA trap.

  16. Rectangle Surface Coil Array in a Grid Arrangement for Resonance Imaging

    Science.gov (United States)

    2016-02-13

    Magnet wires with insulating coating are formed into four one-turn 145 mm × 32 mm rectangular surface coils. The coils are arranged in a grid to achieve a switchable array configuration for resonance imaging (NQR, MRI, NMR), with RF magnetic field tuning and decoupling; later investigations will add a circuit-controlled multiplexer for switching.

  17. Microcoil Spring Interconnects for Ceramic Grid Array Integrated Circuits

    Science.gov (United States)

    Strickland, S. M.; Hester, J. D.; Gowan, A. K.; Montgomery, R. K.; Geist, D. L.; Blanche, J. F.; McGuire, G. D.; Nash, T. S.

    2011-01-01

    As integrated circuit miniaturization trends continue, they drive the need for smaller, higher input/output (I/O) count packages. Hermetically sealed ceramic area array parts are the package of choice of the space community for high-reliability space flight electronic hardware. Unfortunately, the coefficient of thermal expansion mismatch between the ceramic area array package and the epoxy glass printed wiring board limits the life of the interconnecting solder joint. This work presents the results of an investigation by Marshall Space Flight Center into a method to increase the life of this second-level interconnection by the use of compliant microcoil springs. The design of the spring and its attachment process are presented, along with thermal cycling results for microcoil springs (MCS) compared with state-of-the-art ball and column interconnections. Vibration testing has been conducted on MCS and high-lead column parts. Radio frequency simulation and measurements have been made, and the MCS has been modeled and a stress analysis performed. Thermal cycling and vibration testing have shown MCS interconnects to be significantly more reliable than solder columns. Also, MCS interconnects are less prone to handling damage than solder columns. Future work that includes shock testing, incorporation into a digital signal processor board, and process evaluation of expansion from a 400 I/O device to a device with over 1,100 I/Os is identified.

  18. Model Building of Photovoltaic Array with MPPT Function and Research on Single Phase Grid Connected

    OpenAIRE

    Li Zhengzhou

    2016-01-01

    With the continued development of solar photovoltaic technology, research on distributed grid-connected photovoltaic systems has become a focus in the field of photovoltaic grid power plants, and computer simulation is an effective technical means for such studies. On the basis of the photovoltaic array output characteristic equation, a maximum power control simulation model of the photovoltaic array, based on an M function, is established using MATLAB/Simulink, and the simulation m...

  19. Smart signal processing for an evolving electric grid

    Science.gov (United States)

    Silva, Leandro Rodrigues Manso; Duque, Calos Augusto; Ribeiro, Paulo F.

    2015-12-01

    Electric grids are interconnected complex systems consisting of generation, transmission, distribution, and active loads, recently called prosumers as they both produce and consume electric energy. Additionally, these systems encompass a vast array of equipment such as machines, power transformers, capacitor banks, power electronic devices, motors, etc., that are continuously evolving in their demand characteristics. Given these conditions, signal processing is becoming an essential assessment tool to enable the engineer and researcher to understand, plan, design, and operate the complex and smart electric grid of the future. This paper focuses on recent developments associated with signal processing applied to power system analysis in terms of characterization and diagnostics. The following techniques are reviewed and their characteristics and applications discussed: active power system monitoring, sparse representation of power system signals, real-time resampling, and time-frequency (i.e., wavelet) methods applied to power fluctuations.

  20. Power systems signal processing for smart grids

    NARCIS (Netherlands)

    Ribeiro, P.F.; Duque, C.A.; Da Silveira, P.M.; Cerqueira, A.S.

    2013-01-01

    With special relation to smart grids, this book provides clear and comprehensive explanation of how Digital Signal Processing (DSP) and Computational Intelligence (CI) techniques can be applied to solve problems in the power system. Its unique coverage bridges the gap between DSP, electrical power

  1. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. PHP is a powerful and modern server-side scripting language producing HTML or XML output, which can easily be accessed by everyone via a web interface (with the browser of your choice), and it can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in the C++ language using various image processing libraries. (Author)
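
    A minimal server-side sketch of the pattern the record describes (a web back end shelling out to grid middleware for proxy creation and job submission) is given below in Python rather than PHP. The command names and options are assumptions for illustration, not a verified gLite/Globus recipe.

```python
# Sketch: web back end invoking grid middleware via shell commands (command names assumed).
import subprocess

def run(cmd):
    """Run a middleware command and return its stdout (raises on failure)."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def submit_processing_job(jdl_path):
    # 1. Obtain a proxy certificate (VOMS command name and options assumed).
    run(["voms-proxy-init", "--voms", "myvo"])
    # 2. Submit the image/video processing job described by a JDL file.
    out = run(["glite-wms-job-submit", "-a", jdl_path])
    return out  # in practice the job identifier would be parsed from this output

if __name__ == "__main__":
    print(submit_processing_job("image_processing.jdl"))
```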

  2. Enhancing transformer dynamic rating through grid application of photovoltaic arrays

    International Nuclear Information System (INIS)

    El-Gasseir, M.M.; Sayer, M.A.; Alteneder, K.P.; McCulla, G.A.; Bigger, J.

    1993-01-01

    This paper demonstrates that requiring exact matching between the substation's peak-day load profile and the profile of coincident net output generation of the PV array is unjustified and will unduly lead to overlooking many investment deferment opportunities that would otherwise be major components of high-value applications of PV arrays. Further, the paper shows how and to what extent the load matchability requirement could be relaxed. Because of the thermal inertia of transformers, the output of an adequately sized and located photovoltaic array can both delay and reduce transformer temperature rise, even in cases where the load peak occurs after sunset. The time lag due to thermal inertia and the decline in ambient temperature allow overloading of the transformer beyond its normal rating without significant loss of life. Simulations depicting the interplay between PV array capacity, ambient temperature, transformer size, oil and winding temperature rise, peak load magnitude, load profile and loss of life have been conducted. Tradeoffs between PV array capacity and transformer over-rating gains have been assessed. The impacts of PV generation on the over-rating potential of an actual 22.4-MVA bank transformer of a Salt River Project (SRP) distribution substation in Phoenix, Arizona were evaluated.
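
    To illustrate the thermal-inertia argument (daytime PV output trims the load, so the lagging oil temperature peaks lower even when the load peak falls after sunset), the sketch below integrates a highly simplified first-order top-oil temperature model. The rating, time constant and load/PV profiles are illustrative assumptions, not SRP transformer data or the paper's simulation model.

```python
# Sketch: first-order top-oil thermal model, with and without a PV offset on the load.
import numpy as np

TAU_H = 3.0          # assumed oil time constant [h]
RISE_RATED = 55.0    # assumed rated top-oil rise over ambient [deg C]
N_EXP = 0.8          # assumed oil exponent

def top_oil_rise(load_pu, dt_h=0.25):
    """Integrate d(rise)/dt = (ultimate_rise(load) - rise) / tau."""
    rise = np.empty_like(load_pu)
    rise[0] = RISE_RATED * load_pu[0] ** (2 * N_EXP)   # start at steady state
    for k in range(1, len(load_pu)):
        ultimate = RISE_RATED * load_pu[k] ** (2 * N_EXP)
        rise[k] = rise[k - 1] + dt_h / TAU_H * (ultimate - rise[k - 1])
    return rise

hours = np.arange(0.0, 24.0, 0.25)
load = 0.7 + 0.5 * np.exp(-((hours - 19.0) / 3.0) ** 2)             # evening peak [p.u.]
pv = 0.15 * np.clip(np.sin(np.pi * (hours - 6.0) / 12.0), 0, None)  # daytime PV [p.u.]

print(f"peak oil rise without PV: {top_oil_rise(load).max():.1f} deg C")
print(f"peak oil rise with PV:    {top_oil_rise(load - pv).max():.1f} deg C")
```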

  3. Power systems signal processing for smart grids

    CERN Document Server

    Ribeiro, Paulo Fernando; Ribeiro, Paulo Márcio; Cerqueira, Augusto Santiago

    2013-01-01

    With special relation to smart grids, this book provides clear and comprehensive explanation of how Digital Signal Processing (DSP) and Computational Intelligence (CI) techniques can be applied to solve problems in the power system. Its unique coverage bridges the gap between DSP, electrical power and energy engineering systems, showing many different techniques applied to typical and expected system conditions with practical power system examples. Surveying all recent advances on DSP for power systems, this book enables engineers and researchers to understand the current state of the art a

  4. Fundamentals of spherical array processing

    CERN Document Server

    Rafaely, Boaz

    2015-01-01

    This book provides a comprehensive introduction to the theory and practice of spherical microphone arrays. It is written for graduate students, researchers and engineers who work with spherical microphone arrays in a wide range of applications.   The first two chapters provide the reader with the necessary mathematical and physical background, including an introduction to the spherical Fourier transform and the formulation of plane-wave sound fields in the spherical harmonic domain. The third chapter covers the theory of spatial sampling, employed when selecting the positions of microphones to sample sound pressure functions in space. Subsequent chapters present various spherical array configurations, including the popular rigid-sphere-based configuration. Beamforming (spatial filtering) in the spherical harmonics domain, including axis-symmetric beamforming, and the performance measures of directivity index and white noise gain are introduced, and a range of optimal beamformers for spherical arrays, includi...

  5. Integrating Scientific Array Processing into Standard SQL

    Science.gov (United States)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and, in an extension, also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  6. ASME Evaluation on Grid Mobile E-Commerce Process

    OpenAIRE

    Dan Chang; Wei Liao

    2012-01-01

    With the development of E-commerce, more scholars have paid attention to research on Mobile E-commerce, mostly focusing on the optimization and evaluation of existing processes. This paper researches the evaluation of the Mobile E-commerce process with a method called ASME. Based on an analysis of the current mobile business process and utilizing grid management theory, a grid-based mobile business process is constructed. Firstly, the existing process, namely Non-grid Mobile E-commerce, an...

  7. ALMA Array Operations Group process overview

    Science.gov (United States)

    Barrios, Emilio; Alarcon, Hector

    2016-07-01

    ALMA science operations activities in Chile are the responsibility of the Department of Science Operations, which consists of three groups: the Array Operations Group (AOG), the Program Management Group (PMG) and the Data Management Group (DMG). The AOG includes the Array Operators and has the mission of providing support for science observations, operating the array safely and efficiently. The poster describes the AOG process, management and operational tools.

  8. A Probabilistic Approach to Predict Thermal Fatigue Life for Ball Grid Array Solder Joints

    Science.gov (United States)

    Wei, Helin; Wang, Kuisheng

    2011-11-01

    Numerous studies of the reliability of solder joints have been performed. Most life prediction models are limited to a deterministic approach. However, manufacturing induces uncertainty in the geometry parameters of solder joints, and the environmental temperature varies widely due to end-user diversity, creating uncertainties in the reliability of solder joints. In this study, a methodology for accounting for variation in the lifetime prediction for lead-free solder joints of ball grid array (PBGA) packages is demonstrated. The key solder joint parameters and the cyclic temperature range related to reliability are considered. Probabilistic solutions of the inelastic strain range and thermal fatigue life based on the Engelmaier model are developed to determine the probability of solder joint failure. The results indicate that the standard deviation increases significantly when more random variations are involved. Using the probabilistic method, the influence of each variable on the thermal fatigue life is quantified. This information can be used to optimize product design and process validation acceptance criteria. The probabilistic approach creates the opportunity to identify the root causes of failed samples from product fatigue tests and field returns. The method can be applied to better understand how variation affects parameters of interest in an electronic package design with area array interconnections.
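
    A hedged Monte Carlo sketch of the general approach (propagating geometric and thermal variation through an Engelmaier-type fatigue model) is shown below. The constants and distributions are generic textbook SnPb-style values chosen only for illustration; the paper's lead-free parameters may differ.

```python
# Sketch: Monte Carlo propagation of parameter variation through an
# Engelmaier-type fatigue model (generic illustrative constants).
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Random inputs: distance to neutral point, joint height, cyclic temperature range.
L_D = rng.normal(12.0e-3, 0.2e-3, N)       # [m]
h = rng.normal(0.5e-3, 0.03e-3, N)         # solder joint height [m]
dT = rng.normal(80.0, 5.0, N)              # cyclic temperature range [K]
d_alpha = 14e-6                            # CTE mismatch [1/K], assumed
T_mean, f_day = 50.0, 2.0                  # mean cycle temperature [deg C], cycles/day
eps_f = 0.325                              # fatigue ductility coefficient, assumed

# Cyclic shear strain range and the Engelmaier fatigue exponent.
d_gamma = (L_D / h) * d_alpha * dT
c = -0.442 - 6e-4 * T_mean + 1.74e-2 * np.log(1.0 + f_day)

# Median cycles to failure for each sampled joint.
N_f = 0.5 * (d_gamma / (2.0 * eps_f)) ** (1.0 / c)

print(f"mean life: {N_f.mean():.0f} cycles, std dev: {N_f.std():.0f} cycles")
print(f"1st-percentile life: {np.percentile(N_f, 1):.0f} cycles")
```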

  9. Principles of Adaptive Array Processing

    Science.gov (United States)

    2006-09-01

    ACE with and without tapering (homogeneous case). These analytical results are less suited to predict the detection performance of a real system.

  10. Adaptive motion compensation in sonar array processing

    NARCIS (Netherlands)

    Groen, J.

    2006-01-01

    In recent years, sonar performance has mainly improved via a significant increase in array aperture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine

  11. Reliability of Ceramic Column Grid Array Interconnect Packages Under Extreme Temperatures

    Science.gov (United States)

    Ramesham, Rajeshuni

    2011-01-01

    A paper describes advanced ceramic column grid array (CCGA) packaging interconnect technology test objects that were subjected to extreme-temperature thermal cycles. CCGA interconnect electronic package printed wiring boards (PWBs) of polyimide were assembled, inspected nondestructively, and subsequently subjected to extreme-temperature thermal cycling to assess reliability for future deep-space, short- and long-term, extreme-temperature missions. The test hardware consisted of two CCGA717 packages, with each package divided into four daisy-chained sections, for a total of eight daisy chains to be monitored. The package is 33 × 33 mm with a 27 × 27 array of 80%/20% Pb/Sn columns on a 1.27-mm pitch. The change in resistance of the daisy-chained CCGA interconnects was measured as a function of the increasing number of thermal cycles. Several catastrophic failures were observed after 137 extreme-temperature thermal cycles, as per electrical resistance measurements, and the tests were then continued through 1,058 thermal cycles to corroborate and understand the test results. X-ray and optical inspections were made after thermal cycling. Optical inspections were also conducted on the CCGAs as a function of thermal cycles. The optical inspections were conclusive; the x-ray images were not. Process and assembly qualification is required to optimize the CCGA assembly, which is very clear from the x-rays. Six out of seven daisy chains were open, as per the experimental test data reported. The daisy chains are open during the cold cycle, and then recover during the hot cycle, though some of them also opened during the hot thermal cycle.
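
    The pass/fail bookkeeping implied by this kind of daisy-chain monitoring can be sketched as follows: flag a chain as electrically open in any cycle where its resistance jumps well above its room-temperature baseline, and record the first failing cycle. The threshold and data layout are assumptions for illustration, not the paper's test criteria.

```python
# Sketch: flag daisy-chain opens from per-cycle resistance measurements (threshold assumed).
import numpy as np

OPEN_THRESHOLD_OHM = 1000.0   # resistance increase treated as an electrical open

def first_failures(resistance, baseline):
    """resistance: (n_chains, n_cycles) measurements; baseline: (n_chains,) values.
    Returns the first failing cycle index per chain, or -1 if the chain never fails."""
    is_open = (resistance - baseline[:, None]) > OPEN_THRESHOLD_OHM
    return np.where(is_open.any(axis=1), is_open.argmax(axis=1), -1)

# Synthetic example: 8 chains, 1058 cycles, chain 3 opens at cycle 137.
r = np.full((8, 1058), 2.5)
baseline = r[:, 0].copy()
r[3, 137:] = 1.0e6
print(first_failures(r, baseline))   # -> [ -1 -1 -1 137 -1 -1 -1 -1 ]
```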

  12. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    Science.gov (United States)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea
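
    The smoothing step described above (convolving the measured slowness grid with the array response evaluated for a 4 or 5 second period signal) can be sketched as follows; the station coordinates and the slowness grid are synthetic stand-ins, not IMS data.

```python
# Sketch: smooth a slowness grid by convolving it with the array response
# evaluated for a ~5 s period signal (synthetic array and measurements).
import numpy as np
from scipy.signal import convolve2d

def array_response(xy_km, sx, sy, period_s=5.0):
    """|sum_k exp(-i*(2*pi/T)*(sx*x_k + sy*y_k))|^2, normalized to 1 at (0, 0)."""
    w = 2.0 * np.pi / period_s
    phase = w * (np.outer(sx.ravel(), xy_km[:, 0]) + np.outer(sy.ravel(), xy_km[:, 1]))
    resp = np.abs(np.exp(-1j * phase).sum(axis=1)) ** 2
    return (resp / resp.max()).reshape(sx.shape)

rng = np.random.default_rng(2)
xy = rng.uniform(-10.0, 10.0, size=(10, 2))        # station offsets [km]
s = np.linspace(-0.4, 0.4, 81)                     # slowness axis [s/km]
SX, SY = np.meshgrid(s, s)
measured_grid = rng.random(SX.shape)               # stand-in for a measured slowness grid

kernel = array_response(xy, SX, SY)
smoothed = convolve2d(measured_grid, kernel / kernel.sum(), mode="same")
```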

  13. High-Speed Monitoring of Multiple Grid-Connected Photovoltaic Array Configurations and Supplementary Weather Station.

    Science.gov (United States)

    Boyd, Matthew T

    2017-06-01

    Three grid-connected monocrystalline silicon photovoltaic arrays have been instrumented with research-grade sensors on the Gaithersburg, MD campus of the National Institute of Standards and Technology (NIST). These arrays range from 73 kW to 271 kW and have different tilts, orientations, and configurations. Irradiance, temperature, wind, and electrical measurements at the arrays are recorded, and images are taken of the arrays to monitor shading and capture any anomalies. A weather station has also been constructed that includes research-grade instrumentation to measure all standard meteorological quantities plus additional solar irradiance spectral bands, full spectrum curves, and directional components using multiple irradiance sensor technologies. Reference photovoltaic (PV) modules are also monitored to provide comprehensive baseline measurements for the PV arrays. Images of the whole sky are captured, along with images of the instrumentation and reference modules, to document any obstructions or anomalies. Nearly all measurements at the arrays and weather station are sampled and saved every 1 s, with monitoring having started on Aug. 1, 2014. This report describes the instrumentation approach used to monitor the performance of these photovoltaic systems, measure the meteorological quantities, and acquire the images for use in PV performance and weather monitoring and computer model validation.

  14. Recombination of the steering vector of the triangle grid array in quaternions and the reduction of the MUSIC algorithm

    Science.gov (United States)

    Bai, Chen; Han, Dongjuan

    2018-04-01

    MUSIC is widely used for DOA estimation. The triangular grid is a common array arrangement, but calculating its steering vector is more complicated than for a rectangular array. In this paper, a quaternion algorithm is used to reduce the dimension of the steering vector and simplify the calculation.
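
    For context, a sketch of classical complex-valued MUSIC for a uniform linear array is given below; it is the baseline on which the paper's quaternion formulation for triangular-grid arrays builds, and the quaternion recombination itself is not reproduced here.

```python
# Sketch: classical (complex-valued) MUSIC pseudospectrum for a uniform linear array.
import numpy as np

def music_spectrum(X, n_sources, d_over_lambda=0.5,
                   angles_deg=np.arange(-90.0, 90.5, 0.5)):
    """X: (n_sensors, n_snapshots) complex data matrix."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    eigval, eigvec = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvec[:, : n_sensors - n_sources]         # noise subspace
    k = np.arange(n_sensors)[:, None]
    A = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(np.deg2rad(angles_deg)))
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return angles_deg, 1.0 / denom                  # pseudospectrum peaks at the DOAs

# Quick synthetic check: two plane waves from -20 and +30 degrees, 8 sensors.
rng = np.random.default_rng(3)
n, snaps = 8, 200
A_true = np.exp(-2j * np.pi * 0.5 * np.arange(n)[:, None] * np.sin(np.deg2rad([-20.0, 30.0])))
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
noise = 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
angles, P = music_spectrum(A_true @ S + noise, n_sources=2)
print(angles[np.argmax(P)])   # strongest peak lies near one of the true DOAs
```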

  15. Novel micromachined on-chip 10-elements wire-grid array operating at 60 GHz

    KAUST Repository

    Sallam, Mai O.

    2017-06-07

    This paper presents a new topology for a wire-grid antenna array which operates at 60 GHz. The array consists of ten λ/2 dipole radiators connected via non-radiating connectors. Both radiators and connectors are placed on top of narrow silicon walls. The antenna is fed with coplanar microstrip lines placed on the other side of the wafer and is connected to its feeding transmission lines using through-silicon vias. The antenna is optimized for two cases: using high- and low-resistivity silicon substrates. The former has better radiation characteristics while the latter is more compatible with the driving electronic circuits. The antenna has high directivity, reasonable bandwidth and high polarization purity.

  16. The research on multi-projection correction based on color coding grid array

    Science.gov (United States)

    Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu

    2017-10-01

    Multi-channel projection systems suffer from disadvantages such as low timeliness and heavy manual intervention; to solve these problems, this paper proposes a multi-projector correction technique based on a color-coded grid array. Firstly, a color structured-light stripe pattern is generated using De Bruijn sequences, and the feature information of the color structured-light stripe image is meshed. Taking each meshed color-grid intersection as a circle center, white solid circles are constructed as the feature sample set of the projected images. This gives the constructed feature sample set both good perceptual localization and good noise immunity. Secondly, the subpixel geometric mapping between the projection screen and the individual projectors is established using structured-light encoding and decoding based on the color array, and this geometric mapping is used to solve the homography matrix of each projector. Lastly, because the brightness inconsistency in the overlap regions of the multi-channel projection prevents the corrected image from fitting the observer's visual needs, a luminance fusion correction algorithm is used to obtain a visually consistent projected display image. The experimental results show that this method not only effectively solves the distortion of the multi-projection screen and the luminance interference in overlapping regions, but also improves the calibration efficiency of the multi-channel projection system and reduces the maintenance cost of intelligent multi-projection systems.
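
    The homography-solving step described above can be sketched with OpenCV as follows; the correspondence arrays stand in for the decoded color-coded grid intersections, whose detection and decoding are not shown, and the resolutions are placeholders.

```python
# Sketch: homography from grid-intersection correspondences, then pre-warping content.
import numpy as np
import cv2

# (u, v) grid intersections in projector image space and their decoded positions
# on the reference screen (same ordering); placeholder values for illustration.
proj_pts = np.array([[100, 100], [1180, 110], [1175, 700], [105, 690]], np.float32)
screen_pts = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], np.float32)

# Homography mapping projector pixels to screen coordinates.
H, _ = cv2.findHomography(proj_pts, screen_pts)

# Pre-warp the desired screen image so that, once projected, it appears undistorted.
screen_image = np.zeros((1080, 1920, 3), np.uint8)          # stand-in content
projector_frame = cv2.warpPerspective(screen_image, np.linalg.inv(H), (1280, 720))
```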

  17. GRID Prototype for imagery processing in scientific applications

    International Nuclear Information System (INIS)

    Stan, Ionel; Zgura, Ion Sorin; Haiduc, Maria; Valeanu, Vlad; Giurgiu, Liviu

    2004-01-01

    The paper presents the results of our study, which is part of the InGRID project. This project is supported by ROSA (the ROmanian Space Agency). In this paper we show the possibility of taking images from an optical microscope through a web camera. The images are then stored on a PC running the Linux operating system and distributed to other clusters through Grid technology (using HTTP, PHP, MySQL, Globus or AliEn systems). The images come from nuclear emulsions in the frame of the Becquerel Collaboration. The main goal of the InGRID project is to drive the development and deployment of Grid technology for imaging techniques applied to images taken from space, different application fields, and telemedicine. It will also create links with related international projects which use advanced Grid technology and scalable storage solutions. The main topics proposed to be solved in the frame of the InGRID project are: - Implementation of two Grid clusters, minimum level Tier 3; - Adapting and updating the common storage and processing computing facility; - Testing the middleware packages developed in the frame of this project; - Testbed production of the prototype; - Building up and advertising the InGRID prototype in the scientific community through ongoing dissemination. The InGRID prototype developed in the frame of this project will be used by partner institutes as a deployment environment for imaging applications whose dynamic features will be defined by the conditions of the contract. Subsequent applications will be deployed by the partners of this project with governmental, nongovernmental and private institutions. (authors)

  18. Array processing for seismic surface waves

    Energy Technology Data Exchange (ETDEWEB)

    Marano, S.

    2013-07-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.

  19. Array processing for seismic surface waves

    International Nuclear Information System (INIS)

    Marano, S.

    2013-01-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries

  20. Legal constraints on genetic data processing in European grids

    NARCIS (Netherlands)

    Mouw, Evert; van't Noordende, Guido; van Kampen, Antoine H. C.; Louter, Baas; Santcroos, Mark; Olabarriaga, Silvia D.

    2012-01-01

    European laws on privacy and data security are not explicit about the storage and processing of genetic data. Especially whole-genome data is identifying and contains a lot of personal information. Is processing of such data allowed in computing grids? To find out, we looked at legal precedents in

  1. Experimental study of mixing in a square array rod bundle with grid spacer

    International Nuclear Information System (INIS)

    Zong Guifang; Cai Zuti; Zhang Demei

    1989-01-01

    This paper describes an experimental study of mixing in a full-scale 15×15 square array rod bundle fuel assembly with 10 mm rod diameter and 13.3 mm pitch. The experiment was carried out in an open water loop, with K₂CrO₄ used as a tracer. Each subchannel was sampled at the open bundle outlet. Titration, spectrophotometry and fibre-optic methods were used to measure the concentration. The Reynolds numbers ranged from 2.12×10⁴ to 4.37×10⁴. For the turbulent mixing of the bare rod bundle, the results of this study agreed with the formulas recommended by other authors. Both flow visualisation studies and the quantitative analysis indicated that flow scattering caused by the grid has only a small effect on the mixing. The cause is examined in this paper. (orig.)

  2. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation at NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits performance and I/O scalability that are statistically indistinguishable from the native SciDB storage engine.
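
    The access pattern ArrayBridge targets can be illustrated with plain h5py rather than ArrayBridge/SciDB themselves: an imperative analysis reads a hyperslab of a large HDF5 array in place, with no bulk load into a DBMS. The file and dataset names below are hypothetical.

```python
# Sketch: in-place (in situ) access to a scientific HDF5 array with h5py.
import numpy as np
import h5py

with h5py.File("simulation_output.h5", "r") as f:      # hypothetical file name
    dset = f["/fields/temperature"]                     # hypothetical N-d dataset
    slab = dset[100:200, :, 50]                         # only this hyperslab is read
    print(slab.shape, float(np.mean(slab)))
```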

  3. Construction and Arena Simulation of Grid M-Commerce Process

    OpenAIRE

    Danqing Li; Dan Chang

    2012-01-01

    With the rapid development and the wide use of mobile technology, m-commerce research has gradually become the focus of scholars. Difficulties exist in m-commerce, such as information sharing, business collaboration, and process reengineering. Grid management, a new managerial concept, has the potential of being a powerful weapon that affects the study on m-commerce process. This study systematically analyzes the traditional m-commerce process and its problems. On that basis, this paper const...

  4. An Effective Framework for Distributed Geospatial Query Processing in Grids

    Directory of Open Access Journals (Sweden)

    CHEN, B.

    2010-08-01

    Full Text Available The emergence of the Internet has greatly revolutionized the way that geospatial information is collected, managed, processed and integrated. There are several important research issues to be addressed for distributed geospatial applications. First, the performance of geospatial applications needs to be considered in the Internet environment. In this regard, the Grid, as an effective distributed computing paradigm, is a good choice. The Grid uses a series of middleware to interconnect and merge various distributed resources into a super-computer with high-performance computation capability. Secondly, it is necessary to ensure the secure use of independent geospatial applications in the Internet environment. The Grid provides precisely this utility of secure access to distributed geospatial resources. Additionally, it makes good sense to overcome the heterogeneity between individual geospatial information systems on the Internet. The Open Geospatial Consortium (OGC) proposes a number of generalized geospatial standards, e.g. OGC Web Services (OWS), to achieve interoperable access to geospatial applications. The OWS solution is feasible and widely adopted by both the academic and industrial communities. Therefore, we propose an integrated framework that incorporates OWS standards into Grids. With this framework, distributed geospatial queries can be performed in an interoperable, high-performance and secure Grid environment.

  5. Process and installation for welding nuclear fuel assembly grids

    International Nuclear Information System (INIS)

    Vere, B.; Mathevon, P.

    1985-01-01

    The invention proposes a process to weld two sets of perpendicular plates whose end parts are made integral with a belt piece. The grid is held in a support frame with access openings to the points to be welded on the two faces and on the grid sides; the frame is moved on a mobile table, by means of an orientation system, along the direction perpendicular to the beam of an electron beam welding machine; each joint to be welded is presented in turn by rotating the frame through 90 deg about an axis and repeating the operation, then rotating the frame about a perpendicular axis and repeating the operation until all the joints on each side of the grid have been welded [fr]

  6. Array signal processing in the NASA Deep Space Network

    Science.gov (United States)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we describe the benefits of arraying and its past as well as expected future use. The signal processing aspects of the array system are described. Field measurements from actual spacecraft tracking are also presented.

  7. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA), and Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface in verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500 hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science ready data products. The pipeline is developed as part of the CASA software package by an

  8. Reliability Engineering for ATLAS Petascale Data Processing on the Grid

    CERN Document Server

    Golubkov, D V; The ATLAS collaboration; Vaniachine, A V

    2012-01-01

    The ATLAS detector is in its third year of continuous LHC running taking data for physics analysis. A starting point for ATLAS physics analysis is reconstruction of the raw data. First-pass processing takes place shortly after data taking, followed later by reprocessing of the raw data with updated software and calibrations to improve the quality of the reconstructed data for physics analysis. Data reprocessing involves a significant commitment of computing resources and is conducted on the Grid. The reconstruction of one petabyte of ATLAS data with 1B collision events from the LHC takes about three million core-hours. Petascale data processing on the Grid involves millions of data processing jobs. At such scales, the reprocessing must handle a continuous stream of failures. Automatic job resubmission recovers transient failures at the cost of CPU time used by the failed jobs. Orchestrating ATLAS data processing applications to ensure efficient usage of tens of thousands of CPU-cores, reliability engineering ...

  9. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  10. Processes and Materials for Flexible PV Arrays

    National Research Council Canada - National Science Library

    Gierow, Paul

    2002-01-01

    .... A parallel incentive for the development of flexible PV arrays is the possibility of synergistic advantages for certain types of spacecraft, in particular the Solar Thermal Propulsion (STP) Vehicle...

  11. Easy process to obtain suspended graphene flakes on TEM grids

    International Nuclear Information System (INIS)

    Gonçalves, Hugo; Fernandes, Joel; Moura, Cacilda; Schellenberg, Peter; Belsley, Michael; Alves, Luís

    2015-01-01

    Much of the ongoing research on graphene requires free-hanging (suspended) graphene to eliminate any influence from underlying substrates. Several methods have been developed for its preparation but they are either very complex or not completely reliable. Here, we describe a simple method for the transfer of graphene single layers from glass or silicon substrates onto TEM grids. The method uses a carrier film for the transfer process. By optimizing the process, yields greater than 60% were achieved. The integrity of the transferred films was confirmed using Raman spectroscopy; successful suspension of both mono- and double-layer graphene sheets was obtained. (paper)

  12. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  13. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    Science.gov (United States)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ~4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ~200:1.

  14. Design, testing, and economics of a 430 Wp photovoltaic concentrator array for non-grid-connected applications

    Science.gov (United States)

    Maish, A. B.; Rios, M., Jr.; Togami, H.

    A stand-alone 430 Wp photovoltaic (PV) concentrating system for low power, non-grid-connected applications has been designed, fabricated, and tested at Sandia National Laboratories. The array consists of four passively cooled Fresnel lens concentrating modules on a newly developed polar axis tracking structure. Two axis tracking is provided using a self powered clock drive unit mounted on a single post foundation. Test results of tracking accuracy, array output power, parasitic power, performance in winds and array reliability are discussed. Using a range of estimated production costs for small production volumes, the life-cycle energy costs have been calculated and compared to the equivalent energy costs of a 3 kW diesel electric generator set and of an equivalent flat panel PV system.

  15. Theory and applications of spherical microphone array processing

    CERN Document Server

    Jarrett, Daniel P; Naylor, Patrick A

    2017-01-01

    This book presents the signal processing algorithms that have been developed to process the signals acquired by a spherical microphone array. Spherical microphone arrays can be used to capture the sound field in three dimensions and have received significant interest from researchers and audio engineers. Algorithms for spherical array processing are different to corresponding algorithms already known in the literature of linear and planar arrays because the spherical geometry can be exploited to great beneficial effect. The authors aim to advance the field of spherical array processing by helping those new to the field to study it efficiently and from a single source, as well as by offering a way for more experienced researchers and engineers to consolidate their understanding, adding either or both of breadth and depth. The level of the presentation corresponds to graduate studies at MSc and PhD level. This book begins with a presentation of some of the essential mathematical and physical theory relevant to ...

  16. Generic nano-imprint process for fabrication of nanowire arrays

    NARCIS (Netherlands)

    Pierret, A.; Hocevar, M.; Diedenhofen, S.L.; Algra, R.E.; Vlieg, E.; Timmering, E.C.; Verschuuren, M.A.; Immink, W.G.G.; Verheijen, M.A.; Bakkers, E.P.A.M.

    2010-01-01

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2inch substrates. After lift-off organic residues remain on the surface, which induce the growth of

  17. Characterization of TES bolometers used in 2-dimensional Backshort-Under-Grid (BUG) arrays for far-infrared astronomy

    International Nuclear Information System (INIS)

    Staguhn, J.G.; Allen, C.A.; Benford, D.J.; Chervenak, J.A.; Chuss, D.T.; Miller, T.M.; Moseley, S.H.; Wollack, E.J.

    2006-01-01

    We have produced a laboratory demonstration of our new Backshort-Under-Grid (BUG) bolometer array architecture in a monolithic, 2-dimensional, 8x8 format. The detector array is designed as a square grid of suspended, 1 μm thick silicon bolometers with superconducting molybdenum/gold bilayer TESs. These detectors use an additional layer of gold bars deposited on top of the bilayer, oriented transverse to the direction of the current flow, for the suppression of excess noise. This detector design has earlier been shown to provide near fundamental noise limited device performance. We present results from performance measurements of witness devices. In particular we demonstrate that the inband excess noise level of the TES detectors is less than 20% above the thermodynamic phonon noise limit and not significantly higher out of band at frequencies that cannot be attenuated by the Nyquist filter. Our 8x8 BUG arrays will be used in the near future for astronomical observations in several (sub-)millimeter instruments

  18. 2-D Row-Column CMUT Arrays with an Open-Grid Support Structure

    DEFF Research Database (Denmark)

    Christiansen, Thomas Lehrmann; Dahl-Petersen, Christian; Jensen, Jørgen Arendt

    2013-01-01

    Fabrication and characterization of 64 + 64 2-D row-column addressed CMUT arrays with 250 μm element pitch and 4.4 MHz center frequency in air incorporating a new design approach is presented. The arrays are comprised of two wafer bonded, structured silicon-on-insulator wafers featuring an opengr...

  19. gProcess and ESIP Platforms for Satellite Imagery Processing over the Grid

    Science.gov (United States)

    Bacu, Victor; Gorgan, Dorian; Rodila, Denisa; Pop, Florin; Neagu, Gabriel; Petcu, Dana

    2010-05-01

    The Environment oriented Satellite Data Processing Platform (ESIP) is developed through the SEE-GRID-SCI project (SEE-GRID eInfrastructure for regional eScience), co-funded by the European Commission through FP7 [1]. The gProcess Platform [2] is a set of tools and services supporting the development and the execution over the Grid of the workflow based processing, and particularly satellite imagery processing. The ESIP [3], [4] is built on top of the gProcess platform by adding a set of satellite image processing software modules and meteorological algorithms. The satellite images can reveal and supply important information on earth surface parameters, climate data, pollution level, weather conditions that can be used in different research areas. Generally, the processing algorithms of the satellite images can be decomposed into a set of modules that forms a graph representation of the processing workflow. Two types of workflows can be defined in the gProcess platform: abstract workflow (PDG - Process Description Graph), in which the user defines conceptually the algorithm, and instantiated workflow (iPDG - instantiated PDG), which is the mapping of the PDG pattern on particular satellite image and meteorological data [5]. The gProcess platform allows the definition of complex workflows by combining data resources, operators, services and sub-graphs. The gProcess platform is developed for the gLite middleware that is available in EGEE and SEE-GRID infrastructures [6]. gProcess exposes the specific functionality through web services [7]. The Editor Web Service retrieves information on available resources that are used to develop complex workflows (available operators, sub-graphs, services, supported resources, etc.). The Manager Web Service deals with resources management (uploading new resources such as workflows, operators, services, data, etc.) and in addition retrieves information on workflows. The Executor Web Service manages the execution of the instantiated workflows

  20. Grid Computing Application for Brain Magnetic Resonance Image Processing

    International Nuclear Information System (INIS)

    Valdivia, F; Crépeault, B; Duchesne, S

    2012-01-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.
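
    For readers unfamiliar with the pipeline pattern described above, the following minimal Python sketch chains processes with named input/output data ports into a pipeline; the stage names and the run interface are illustrative placeholders and do not reproduce the project's actual software.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Process:
    """A single pipeline stage with named input/output data ports."""
    name: str
    run: Callable[[Dict[str, Any]], Dict[str, Any]]  # inputs -> outputs

@dataclass
class Pipeline:
    stages: List[Process] = field(default_factory=list)

    def execute(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # Each stage consumes the accumulated data dictionary and contributes
        # its outputs, mimicking chained input/output ports.
        for stage in self.stages:
            data.update(stage.run(data))
        return data

# Illustrative stages (names are hypothetical, not those of the actual system).
extract = Process("extract_attributes", lambda d: {"dims": (181, 217, 181)})
register = Process("linear_registration", lambda d: {"registered": True})
qc = Process("quality_control", lambda d: {"qc_image": f"qc_{d['subject']}.png"})

pipeline = Pipeline([extract, register, qc])
print(pipeline.execute({"subject": "ADNI_0001"}))
```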

  1. Design and study of a coplanar grid array CdZnTe detector for improved spatial resolution

    International Nuclear Information System (INIS)

    Ma, Yuedong; Xiao, Shali; Yang, Guoqiang; Zhang, Liuqiang

    2014-01-01

    Coplanar grid (CPG) CdZnTe detectors have been used as gamma-ray spectrometers for years. Compared with pixelated CdZnTe detectors, CPG CdZnTe detectors have either no or poor spatial resolution, which directly limits their use in imaging applications. To address the issue, a 2×2 CPG array CdZnTe detector with dimensions of 7×7×5 mm³ was fabricated. Each of the CPG pairs in the detector was moderately shrunk in size and precisely designed to improve the spatial resolution while maintaining good energy resolution, considering the charge loss at the surface between the strips of each CPG pair. Preliminary measurements demonstrated an energy resolution of 2.7–3.9% for the four CPG pairs using 662 keV gamma rays and a spatial resolution of 3.3 mm, which is the best spatial resolution ever achieved for CPG CdZnTe detectors. The results reveal that the CPG CdZnTe detector can also be applied to imaging applications at a substantially higher spatial resolution. - Highlights: • A novel structure of coplanar grid CdZnTe detector was designed to evaluate the possibility of applying the detector to gamma-ray imaging applications. • The best spatial resolution of coplanar grid CdZnTe detectors ever reported has been achieved, along with good spectroscopic performance. • Depth correction of the energy spectra using a new algorithm is presented

  2. Application of optical processing to adaptive phased array radar

    Science.gov (United States)

    Carroll, C. W.; Vijaya Kumar, B. V. K.

    1988-01-01

    The results of the investigation of the applicability of optical processing to Adaptive Phased Array Radar (APAR) data processing will be summarized. Subjects that are covered include: (1) new iterative Fourier transform based technique to determine the array antenna weight vector such that the resulting antenna pattern has nulls at desired locations; (2) obtaining the solution of the optimal Wiener weight vector by both iterative and direct methods on two laboratory Optical Linear Algebra Processing (OLAP) systems; and (3) an investigation of the effects of errors present in OLAP systems on the solution vectors.

  3. Experimental investigation of the ribbon-array ablation process

    International Nuclear Information System (INIS)

    Li Zhenghong; Xu Rongkun; Chu Yanyun; Yang Jianlun; Xu Zeping; Ye Fan; Chen Faxin; Xue Feibiao; Ning Jiamin; Qin Yi; Meng Shijian; Hu Qingyuan; Si Fenni; Feng Jinghua; Zhang Faqiang; Chen Jinchuan; Li Linbo; Chen Dingyang; Ding Ning; Zhou Xiuwen

    2013-01-01

    Ablation processes of ribbon-array loads, as well as wire-array loads for comparison, were investigated on Qiangguang-1 accelerator. The ultraviolet framing images indicate that the ribbon-array loads have stable passages of currents, which produce axially uniform ablated plasma. The end-on x-ray framing camera observed the azimuthally modulated distribution of the early ablated ribbon-array plasma and the shrink process of the x-ray radiation region. Magnetic probes measured the total and precursor currents of ribbon-array and wire-array loads, and there exists no evident difference between the precursor currents of the two types of loads. The proportion of the precursor current to the total current is 15% to 20%, and the start time of the precursor current is about 25 ns later than that of the total current. The melting time of the load material is about 16 ns, when the inward drift velocity of the ablated plasma is taken to be 1.5 × 10⁷ cm/s.

  4. Removing Background Noise with Phased Array Signal Processing

    Science.gov (United States)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beam forming software developed by Optinav combined with cross spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
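
    The cross spectral matrix (CSM) subtraction mentioned above can be illustrated with a minimal sketch: conventional beamforming on a simulated array, with and without subtracting a background-only CSM. The array size, steering vector and noise model below are arbitrary assumptions, not the processing software used in the test.

```python
import numpy as np

def csm(snapshots):
    """Cross spectral matrix from frequency-domain snapshots, shape (n_snap, n_mics)."""
    return snapshots.conj().T @ snapshots / snapshots.shape[0]

def beam_power(C, steering):
    """Conventional beamformer output power for one steering vector."""
    w = steering / np.linalg.norm(steering)
    return float(np.real(w.conj() @ C @ w))

rng = np.random.default_rng(0)
n_mics, n_snap = 24, 400
# Hypothetical steering vector toward the acoustic driver (random phases stand in
# for the true array geometry).
v = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_mics))

background = rng.normal(size=(n_snap, n_mics)) + 1j * rng.normal(size=(n_snap, n_mics))
source = 0.1 * rng.normal(size=(n_snap, 1)) * v   # weak coherent source, well below background
C_total = csm(background + source)
C_background = csm(background)                    # background-only measurement
                                                  # (same realization kept for simplicity)

print("power, raw CSM              :", beam_power(C_total, v))
print("power, background-subtracted:", beam_power(C_total - C_background, v))
```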

  5. Process and device for fabricating nuclear fuel assembly grids

    International Nuclear Information System (INIS)

    Thiebaut, B.; Duthoo, D.; Germanaz, J.J.; Angilbert, B.

    1991-01-01

    The method for fabricating PWR fuel assembly grids consists in placing the grid, whose constituent parts are held firmly in place within a frame, into a sealed chamber filled with inert gas. This chamber can rotate about an axis. The welding is carried out on one face at a time with a laser beam orthogonal to the axis of the device. The laser source is outside the chamber and the beam enters through a transparent view port

  6. Generic nano-imprint process for fabrication of nanowire arrays

    Energy Technology Data Exchange (ETDEWEB)

    Pierret, Aurelie; Hocevar, Moira; Algra, Rienk E; Timmering, Eugene C; Verschuuren, Marc A; Immink, George W G; Verheijen, Marcel A; Bakkers, Erik P A M [Philips Research Laboratories Eindhoven, High Tech Campus 11, 5656 AE Eindhoven (Netherlands); Diedenhofen, Silke L [FOM Institute for Atomic and Molecular Physics c/o Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Vlieg, E, E-mail: e.p.a.m.bakkers@tue.nl [IMM, Solid State Chemistry, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2010-02-10

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off organic residues remain on the surface, which induce the growth of additional undesired nanowires. We show that cleaning of the samples before growth with piranha solution in combination with a thermal anneal at 550 deg. C for InP and 700 deg. C for GaP results in uniform nanowire arrays with 1% variation in nanowire length, and without undesired extra nanowires. Our chemical cleaning procedure is applicable to other lithographic techniques such as e-beam lithography, and therefore represents a generic process.

  7. A Versatile Multichannel Digital Signal Processing Module for Microcalorimeter Arrays

    Science.gov (United States)

    Tan, H.; Collins, J. W.; Walby, M.; Hennig, W.; Warburton, W. K.; Grudberg, P.

    2012-06-01

    Different techniques have been developed for reading out microcalorimeter sensor arrays: individual outputs for small arrays, and time-division or frequency-division or code-division multiplexing for large arrays. Typically, raw waveform data are first read out from the arrays using one of these techniques and then stored on computer hard drives for offline optimum filtering, leading not only to requirements for large storage space but also limitations on achievable count rate. Thus, a read-out module that is capable of processing microcalorimeter signals in real time will be highly desirable. We have developed multichannel digital signal processing electronics that are capable of on-board, real time processing of microcalorimeter sensor signals from multiplexed or individual pixel arrays. It is a 3U PXI module consisting of a standardized core processor board and a set of daughter boards. Each daughter board is designed to interface a specific type of microcalorimeter array to the core processor. The combination of the standardized core plus this set of easily designed and modified daughter boards results in a versatile data acquisition module that not only can easily expand to future detector systems, but is also low cost. In this paper, we first present the core processor/daughter board architecture, and then report the performance of an 8-channel daughter board, which digitizes individual pixel outputs at 1 MSPS with 16-bit precision. We will also introduce a time-division multiplexing type daughter board, which takes in time-division multiplexing signals through fiber-optic cables and then processes the digital signals to generate energy spectra in real time.

  8. Renewable energy integration in smart grids-multicriteria assessment using the fuzzy analytical hierarchy process

    OpenAIRE

    JANJIC, ALEKSANDAR; SAVIC, SUZANA; VELIMIROVIC, LAZAR; NIKOLIC, VESNA

    2015-01-01

    Unlike the traditional way of efficiency assessment of renewable energy sources integration, the smart grid concept is introducing new goals and objectives regarding increased use of renewable electricity sources, grid security, energy conservation, energy efficiency, and deregulated energy market. Possible benefits brought by renewable sources integration are evaluated by the degree of the approach to the ideal smart grid. In this paper, fuzzy analytical hierarchy process methodology for the...

  9. A New Method of PV Array Faults Diagnosis in Smart Grid

    Directory of Open Access Journals (Sweden)

    Ze Cheng

    2014-01-01

    A new fault diagnosis method is proposed for PV arrays with SP connection in this study; its advantages are that it minimizes the number of sensors needed and that accuracy and anti-interference ability are improved through the introduction of fuzzy group decision-making theory. We considered five “decision makers” contributing to the diagnosis of PV array faults, including voltage, current, environmental temperature, panel temperature, and solar illumination. The accuracy and reliability of the proposed method were verified experimentally, and the possible factors contributing to diagnosis deviation were analyzed, based on which solutions were suggested to reduce or eliminate errors in aspects of hardware and software.
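
    A toy sketch of the weighted "group decision" idea is given below; the membership functions, weights and threshold are invented placeholders rather than the paper's fuzzy group decision-making formulation.

```python
import numpy as np

# The five "decision makers" from the abstract; membership functions and
# weights below are illustrative placeholders, not the paper's values.
INDICATORS = ["voltage", "current", "ambient_temp", "panel_temp", "irradiance"]
WEIGHTS = np.array([0.3, 0.3, 0.1, 0.15, 0.15])

def fault_membership(measured, expected, tolerance):
    """Fuzzy degree of 'abnormal' in [0, 1]: 0 inside the tolerance band,
    rising linearly to 1 at twice the tolerance."""
    dev = abs(measured - expected) / tolerance
    return float(np.clip(dev - 1.0, 0.0, 1.0))

def diagnose(measured, expected, tolerance):
    memberships = np.array([
        fault_membership(measured[k], expected[k], tolerance[k]) for k in INDICATORS
    ])
    score = float(WEIGHTS @ memberships)        # weighted group decision
    return score, "fault suspected" if score > 0.5 else "normal"

measured = {"voltage": 310.0, "current": 4.2, "ambient_temp": 25.0,
            "panel_temp": 41.0, "irradiance": 820.0}
expected = {"voltage": 400.0, "current": 7.5, "ambient_temp": 25.0,
            "panel_temp": 45.0, "irradiance": 800.0}
tolerance = {"voltage": 30.0, "current": 1.0, "ambient_temp": 5.0,
             "panel_temp": 5.0, "irradiance": 100.0}
print(diagnose(measured, expected, tolerance))
```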

  10. MAGNETOHYDRODYNAMIC MODELING OF SOLAR SYSTEM PROCESSES ON GEODESIC GRIDS

    International Nuclear Information System (INIS)

    Florinski, V.; Guo, X.; Balsara, D. S.; Meyer, C.

    2013-01-01

    This report describes a new magnetohydrodynamic numerical model based on a hexagonal spherical geodesic grid. The model is designed to simulate astrophysical flows of partially ionized plasmas around a central compact object, such as a star or a planet with a magnetic field. The geodesic grid, produced by a recursive subdivision of a base platonic solid (an icosahedron), is free from control volume singularities inherent in spherical polar grids. Multiple populations of plasma and neutral particles, coupled via charge-exchange interactions, can be simulated simultaneously with this model. Our numerical scheme uses piecewise linear reconstruction on a surface of a sphere in a local two-dimensional 'Cartesian' frame. The code employs Harten-Lax-van Leer-type approximate Riemann solvers and includes facilities to control the divergence of the magnetic field and maintain pressure positivity. Several test solutions are discussed, including a problem of an interaction between the solar wind and the local interstellar medium, and a simulation of Earth's magnetosphere.
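
    As a rough illustration of the grid construction (not of the MHD solver itself), the sketch below builds a spherical geodesic mesh by recursively subdividing an icosahedron and projecting new vertices onto the sphere; the hexagonal cells used in the paper would be the dual of this triangulation. The icosahedron connectivity used here is the commonly quoted one and should be treated as an assumption.

```python
import numpy as np

# Vertices of a unit icosahedron (golden-ratio construction).
t = (1.0 + 5.0 ** 0.5) / 2.0
VERTS = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
         (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
         (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
FACES = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
         (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
         (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
         (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

def geodesic_grid(levels):
    """Recursively split each triangle into four and project onto the unit sphere."""
    verts = [np.array(v, dtype=float) / np.linalg.norm(v) for v in VERTS]
    faces = list(FACES)
    midpoint_cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            m = verts[i] + verts[j]
            verts.append(m / np.linalg.norm(m))   # project the new vertex to the sphere
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]

    for _ in range(levels):
        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts), faces

verts, faces = geodesic_grid(levels=3)
print(len(verts), "vertices,", len(faces), "triangular control volumes")
```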

  11. Total focusing method with correlation processing of antenna array signals

    Science.gov (United States)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors, with and without correlation processing, are presented in the article. Software ‘IDealSystem3D’ by IDeal-Technologies was used for the experiments. Copper wires of different diameters located in a water bath were used as reflectors. The use of correlation processing makes it possible to obtain a more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program. This program allows varying the parameters of the antenna array and the sampling frequency.
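
    A bare-bones total focusing method (delay-and-sum over full-matrix-capture data) is sketched below on a synthetic point reflector; the correlation pre-processing step that is the subject of the article is not reproduced, and all geometry and sampling parameters are illustrative.

```python
import numpy as np

def tfm_image(fmc, coords, grid_x, grid_z, c, fs):
    """Delay-and-sum total focusing method on full-matrix-capture data.
    fmc[i, j, :] is the A-scan for transmitter i and receiver j; coords are
    the element x-positions (elements sit on z = 0)."""
    n_el, _, n_samp = fmc.shape
    image = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            dist = np.hypot(coords - x, z)            # element-to-pixel distances
            for i in range(n_el):
                for j in range(n_el):
                    s = int(round((dist[i] + dist[j]) / c * fs))
                    if s < n_samp:
                        image[iz, ix] += fmc[i, j, s]
    return np.abs(image)

# Synthetic full-matrix capture: 8-element array, one point reflector at (0, 20 mm).
c, fs, n_el, n_samp = 1480.0, 25e6, 8, 2048
coords = (np.arange(n_el) - (n_el - 1) / 2) * 1e-3    # 1 mm pitch
fmc = np.zeros((n_el, n_el, n_samp))
refl_x, refl_z = 0.0, 0.02
for i in range(n_el):
    for j in range(n_el):
        tof = (np.hypot(coords[i] - refl_x, refl_z) +
               np.hypot(coords[j] - refl_x, refl_z)) / c
        fmc[i, j, int(round(tof * fs))] = 1.0

img = tfm_image(fmc, coords,
                np.linspace(-5e-3, 5e-3, 21), np.linspace(15e-3, 25e-3, 21), c, fs)
print("peak at grid index (iz, ix):", np.unravel_index(np.argmax(img), img.shape))
```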

  12. Parallel processing and non-uniform grids in global air quality modeling

    NARCIS (Netherlands)

    Berkvens, P.J.F.; Bochev, Mikhail A.

    2002-01-01

    A large-scale global air quality model, running efficiently on a single vector processor, is enhanced to make more realistic and more long-term simulations feasible. Two strategies are combined: non-uniform grids and parallel processing. The communication through the hierarchy of non-uniform grids

  13. Model-based processing for underwater acoustic arrays

    CERN Document Server

    Sullivan, Edmund J

    2015-01-01

    This monograph presents a unified approach to model-based processing for underwater acoustic arrays. The use of physical models in passive array processing is not a new idea, but it has been used on a case-by-case basis, and as such, lacks any unifying structure. This work views all such processing methods as estimation procedures, which then can be unified by treating them all as a form of joint estimation based on a Kalman-type recursive processor, which can be recursive either in space or time, depending on the application. This is done for three reasons. First, the Kalman filter provides a natural framework for the inclusion of physical models in a processing scheme. Second, it allows poorly known model parameters to be jointly estimated along with the quantities of interest. This is important, since in certain areas of array processing already in use, such as those based on matched-field processing, the so-called mismatch problem either degrades performance or, indeed, prevents any solution at all. Third...

  14. Extending the Fermi-LAT Data Processing Pipeline to the Grid

    Science.gov (United States)

    Zimmer, S.; Arrabito, L.; Glanzman, T.; Johnson, T.; Lavalley, C.; Tsaregorodtsev, A.

    2012-12-01

    The Data Handling Pipeline (“Pipeline”) has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT) which launched in June 2008. Since then it has been in use to completely automate the production of data quality monitoring quantities, reconstruction and routine analysis of all data received from the satellite and to deliver science products to the collaboration and the Fermi Science Support Center. Aside from the reconstruction of raw data from the satellite (Level 1), data reprocessing and various event-level analyses are also reasonably heavy loads on the pipeline and computing resources. These other loads, unlike Level 1, can run continuously for weeks or months at a time. In addition it receives heavy use in performing production Monte Carlo tasks. In daily use it receives a new data download every 3 hours and launches about 2000 jobs to process each download, typically completing the processing of the data before the next download arrives. The need for manual intervention has been reduced to less than 0.01% of submitted jobs. The Pipeline software is written almost entirely in Java and comprises several modules. The software comprises web-services that allow online monitoring and provides charts summarizing work flow aspects and performance information. The server supports communication with several batch systems such as LSF and BQS and recently also Sun Grid Engine and Condor. This is accomplished through dedicated job control services that for Fermi are running at SLAC and the other computing site involved in this large scale framework, the Lyon computing center of IN2P3. While being different in the logic of a task, we evaluate a separate interface to the Dirac system in order to communicate with EGI sites to utilize Grid resources, using dedicated Grid optimized systems rather than developing our own. More recently the Pipeline and its associated data catalog have been generalized for use by other experiments, and are

  15. Hydrodynamics of triangular-grid arrays of floating point-absorber wave energy converters with inter-body and bottom slack-mooring connections

    Energy Technology Data Exchange (ETDEWEB)

    Vicente, Pedro C.; Falcao, Antonio F. de O.; Gato, Luiz M.C. [IDMEC, Instituto Superior Tecnico, Technical University of Lisbon, 1049-001 Lisboa (Portugal); Justino, Paulo A.P. [Laboratorio Nacional de Energia e Geologia, 1649-038 Lisboa (Portugal)

    2009-07-01

    It may be convenient that dense arrays of floating point absorbers are spread-moored to the sea bottom through only some of their elements (possibly located in the periphery), while the other array elements are prevented from drifting and colliding with each other by connections to adjacent elements. An array of identical floating point absorbers located at the grid points of an equilateral triangular grid is considered in the paper. A spread set of slack-mooring lines connect the peripheral floaters to the bottom. A weight is located at the centre of each triangle whose function is to pull the three floaters towards each other and keep the inter-body mooring lines under tension. The whole system - buoys, moorings and power take-off systems - is assumed linear, so that a frequency domain analysis may be employed. Hydrodynamic interference between the oscillating bodies is neglected. Equations are presented for a set of three identical point absorbers. This is then extended to more complex equilateral triangular grid arrays. Results from numerical simulations, with regular and irregular waves, are presented for the motions and power absorption of hemispherical converters in arrays of three and seven elements and different mooring and power take-off parameters, and wave incidence angles. Comparisons are given with the unmoored and independently-moored buoy situations.

  16. MAGNETOHYDRODYNAMIC MODELING OF SOLAR SYSTEM PROCESSES ON GEODESIC GRIDS

    Energy Technology Data Exchange (ETDEWEB)

    Florinski, V. [Department of Physics, University of Alabama, Huntsville, AL 35899 (United States); Guo, X. [Center for Space Plasma and Aeronomic Research, University of Alabama, Huntsville, AL 35899 (United States); Balsara, D. S.; Meyer, C. [Department of Physics, University of Notre Dame, Notre Dame, IN 46556 (United States)

    2013-04-01

    This report describes a new magnetohydrodynamic numerical model based on a hexagonal spherical geodesic grid. The model is designed to simulate astrophysical flows of partially ionized plasmas around a central compact object, such as a star or a planet with a magnetic field. The geodesic grid, produced by a recursive subdivision of a base platonic solid (an icosahedron), is free from control volume singularities inherent in spherical polar grids. Multiple populations of plasma and neutral particles, coupled via charge-exchange interactions, can be simulated simultaneously with this model. Our numerical scheme uses piecewise linear reconstruction on a surface of a sphere in a local two-dimensional 'Cartesian' frame. The code employs Harten-Lax-van Leer-type approximate Riemann solvers and includes facilities to control the divergence of the magnetic field and maintain pressure positivity. Several test solutions are discussed, including a problem of an interaction between the solar wind and the local interstellar medium, and a simulation of Earth's magnetosphere.

  17. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    Science.gov (United States)

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances of Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousands of electrodes are slowly seeing widespread use and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
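
    The kind of per-channel parallelism described above can be sketched with Python's multiprocessing module and a simple threshold spike detector; the detector, noise model and channel count are illustrative assumptions, not the authors' tool.

```python
import numpy as np
from multiprocessing import Pool

def detect_spikes(channel, threshold_sd=5.0):
    """Very simple spike detector: negative crossings of a robust threshold
    (median-absolute-deviation noise estimate), one channel at a time."""
    noise = np.median(np.abs(channel)) / 0.6745        # robust noise estimate
    crossings = np.flatnonzero(channel < -threshold_sd * noise)
    # keep only the first sample of each crossing run
    return crossings[np.insert(np.diff(crossings) > 1, 0, True)]

def process_recording(data, n_workers=4):
    """data: (n_channels, n_samples) array; channels are processed in parallel."""
    with Pool(n_workers) as pool:
        return pool.map(detect_spikes, list(data))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(0, 10e-6, size=(64, 200_000))    # 64 channels of noise (volts)
    data[3, 50_000] -= 200e-6                           # one artificial spike
    spikes = process_recording(data)
    print("channel 3 spike samples:", spikes[3])
```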

  18. High performance GPU processing for inversion using uniform grid searches

    Science.gov (United States)

    Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios

    2017-04-01

    Many geophysical problems are described by redundant, highly non-linear systems of ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical, optimization methods, based on Monte Carlo sampling or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid search-based technique in the Rⁿ space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common computers based on a CPU. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community, which provides a much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 10¹² gridpoints) and numbers of unknown variables were solved on
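
    A schematic, CPU-only version of a TOPINV-style search is sketched below on a toy two-variable location problem: grid points whose predictions satisfy the observation inequalities |f(x) - obs_i| <= k*sigma_i are kept, and their mean and covariance are reported as the feasible cluster shrinks with decreasing k. The forward model and grid sizes are illustrative, not those of the paper or its CUDA implementation.

```python
import numpy as np

def topinv_like(forward, obs, sigma, grids, k):
    """Keep every grid point whose predictions satisfy |f(x) - obs_i| <= k*sigma_i
    for all observations; return the cluster mean, covariance and size."""
    mesh = np.meshgrid(*grids, indexing="ij")
    points = np.stack([m.ravel() for m in mesh], axis=1)   # (n_points, n_vars)
    pred = forward(points)                                  # (n_points, n_obs)
    ok = np.all(np.abs(pred - obs) <= k * sigma, axis=1)
    cluster = points[ok]
    return cluster.mean(axis=0), np.cov(cluster.T), int(ok.sum())

# Toy problem: locate a 2-D source from noisy distances to three stations.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_src = np.array([3.0, 7.0])
rng = np.random.default_rng(2)
sigma = np.full(3, 0.2)
obs = np.linalg.norm(stations - true_src, axis=1) + rng.normal(0, sigma)

def forward(points):
    return np.linalg.norm(points[:, None, :] - stations[None, :, :], axis=2)

grids = (np.linspace(0, 10, 201), np.linspace(0, 10, 201))
for k in (5.0, 4.0, 3.0):                # decreasing k shrinks the feasible cluster
    mean, cov, n = topinv_like(forward, obs, sigma, grids, k)
    print(f"k={k}: {n} gridpoints, estimate {mean.round(2)}")
```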

  19. Signal Processing for a Lunar Array: Minimizing Power Consumption

    Science.gov (United States)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study is: (1) Lunar Radio Array for low frequency, high redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz) (2) High precision cosmological measurements of 21 cm H I line fluctuations (3) Probe universe before first star formation and provide information about the Intergalactic Medium and evolution of large scale structures (5) Does the current cosmological model accurately describe the Universe before reionization? Lunar Radio Array is for (1) Radio interferometer based on the far side of the moon (1a) Necessary for precision measurements, (1b) Shielding from earth-based and solar RFI (1c) No permanent ionosphere, (2) Minimum collecting area of approximately 1 square km and brightness sensitivity 10 mK (3) Several technologies must be developed before deployment The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) Dark Ages Radio Interferometer (DALI) (2) Lunar Array for Radio Cosmology (LARC)

  20. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2% accuracy of 549 explosions conducted by closely-spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely-spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.

  1. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
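
    The model/measurement comparison idea can be illustrated with a toy deflection model and a matched-filter-like statistic; the exponential model, noise level and threshold below are invented for illustration and are not the patented algorithms.

```python
import numpy as np

def model_deflection(t, amplitude, tau):
    """Toy deflection model: exponential approach to a steady-state bend as
    adsorbed mass/stress builds up (illustrative, not the patent's model)."""
    return amplitude * (1.0 - np.exp(-t / tau))

def detect(signal, t, amplitude, tau, noise_sd, threshold=3.0):
    """Compare the measured deflection with the model via a matched-filter-like
    correlation statistic and flag a detection above `threshold` sigma."""
    template = model_deflection(t, amplitude, tau)
    template -= template.mean()
    stat = float(signal @ template / (noise_sd * np.linalg.norm(template)))
    return stat, stat > threshold

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 1000)          # seconds
noise_sd = 2.0                            # nm of readout noise
clean = model_deflection(t, amplitude=5.0, tau=2.0)

for label, trace in [("blank", rng.normal(0, noise_sd, t.size)),
                     ("analyte", clean + rng.normal(0, noise_sd, t.size))]:
    stat, hit = detect(trace, t, 5.0, 2.0, noise_sd)
    print(f"{label}: statistic = {stat:.1f}, detection = {hit}")
```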

  2. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitalizing and recovering historical climate records applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving good speedup values.

  3. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
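
    The sketch below emulates the role of the systolic priority queue in software (Python's heapq) inside a toy stack decoder for a small rate-1/2 convolutional code; the code parameters and the Fano-style metric are standard textbook choices, assumed here for illustration rather than taken from the paper.

```python
import heapq
from math import log2

# Toy rate-1/2, constraint-length-3 convolutional code (generators 7, 5 octal).
def branch(state, bit):
    s1, s2 = state
    out = (bit ^ s1 ^ s2, bit ^ s2)
    return (bit, s1), out

def fano_metric(received_pair, out, p=0.05, rate=0.5):
    m = 0.0
    for r, c in zip(received_pair, out):
        m += log2(2 * (1 - p) if r == c else 2 * p) - rate
    return m

def stack_decode(received, n_info):
    """Stack (Zigangirov-Jelinek) sequential decoding. The heap plays the role of
    the systolic priority queue: always extend the best-metric partial path."""
    heap = [(-0.0, 0, (0, 0), [])]        # (-metric, depth, encoder state, decoded bits)
    while heap:
        neg_m, depth, state, bits = heapq.heappop(heap)
        if depth == len(received):
            return bits[:n_info]
        choices = (0, 1) if depth < n_info else (0,)   # zero tail bits
        for b in choices:
            new_state, out = branch(state, b)
            m = -neg_m + fano_metric(received[depth], out)
            heapq.heappush(heap, (-m, depth + 1, new_state, bits + [b]))

# Encode a message, flip one channel bit, and decode it back.
message = [1, 0, 1, 1, 0, 0, 1, 0]
state, coded = (0, 0), []
for b in message + [0, 0]:                 # two tail bits to flush the encoder
    state, out = branch(state, b)
    coded.append(list(out))
coded[3][0] ^= 1                           # one transmission error
print("decoded:", stack_decode(coded, len(message)), "original:", message)
```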

  4. Operation of Grid-tied 5 kWDC solar array to develop Laboratory Experiments for Solar PV Energy System courses

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Jaime [Univ. of Texas Pan American, Edinburg, TX (United States)

    2012-12-14

    To unlock the potential of micro grids we plan to build, commission and operate a 5 kWDC PV array and integrate it into the UTPA Engineering building low voltage network, as a micro grid; and promote community awareness. Assisted by a solar radiation tracker providing on-line information of its measurements and performing analysis for use by the scientific and engineering community, we will write, perform and operate a set of Laboratory experiments and computer simulations supporting Electrical Engineering (graduate and undergraduate) courses on Renewable Energy, as well as Senior Design projects.

  5. Impact study of the Argo array definition in the Mediterranean Sea based on satellite altimetry gridded data

    Science.gov (United States)

    Sanchez-Roman, Antonio; Ruiz, Simón; Pascual, Ananda; Guinehut, Stéphanie; Mourre, Baptiste

    2016-04-01

    The existing Argo network provides essential data in near real time to constrain monitoring and forecasting centers and strongly complements the observations of the ocean surface from space. The comparison of Sea Level Anomalies (SLA) provided by satellite altimeters with in-situ Dynamic Heights Anomalies (DHA) derived from the temperature and salinity profiles of Argo floats contribute to better characterize the error budget associated with the altimeter observations. In this work, performed in the frame of the E-AIMS FP7 European Project, we focus on the Argo observing system in the Mediterranean Sea and its impact on SLA fields provided by satellite altimetry measurements in the basin. Namely, we focus on the sensitivity of specific SLA gridded merged products provided by AVISO in the Mediterranean to the reference depth (400 or 900 dbar) selected in the computation of the Argo Dynamic Height (DH) as an integration of the Argo T/S profiles through the water column. This reference depth will have impact on the number of valid Argo profiles and therefore on their temporal sampling and the coverage by the network used to compare with altimeter data. To compare both datasets, altimeter grids and synthetic climatologies used to compute DHA were spatially and temporally interpolated at the position and time of each in-situ Argo profile by a mapping method based on an optimal interpolation scheme. The analysis was conducted in the entire Mediterranean Sea and different sub-regions of the basin. The second part of this work is devoted to investigate which configuration in terms of spatial sampling of the Argo array in the Mediterranean will properly reproduce the mesoscale dynamics in this basin, which is comprehensively captured by new standards of specific altimeter products for this region. To do that, several Observing System Simulation Experiments (OSSEs) were conducted assuming that altimetry data computed from AVISO specific reanalysis gridded merged product for

  6. SAR processing with stepped chirps and phased array antennas.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas. Wideband radar signals can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with their own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.

  7. Development of a low cost integrated 15 kW A.C. solar tracking sub-array for grid connected PV power system applications

    Science.gov (United States)

    Stern, M.; West, R.; Fourer, G.; Whalen, W.; Van Loo, M.; Duran, G.

    1997-02-01

    Utility Power Group has achieved a significant reduction in the installed cost of grid-connected PV systems. The two part technical approach focused on 1) The utilization of a large area factory assembled PV panel, and 2) The integration and packaging of all sub-array power conversion and control functions within a single factory produced enclosure. Eight engineering prototype 15kW ac single axis solar tracking sub-arrays were designed, fabricated, and installed at the Sacramento Municipal Utility District's Hedge Substation site in 1996 and are being evaluated for performance and reliability. A number of design enhancements will be implemented in 1997 and demonstrated by the field deployment and operation of over twenty advanced sub-array PV power systems.

  8. A measurement method for micro 3D shape based on grids-processing and stereovision technology

    International Nuclear Information System (INIS)

    Li, Chuanwei; Xie, Huimin; Liu, Zhanwei

    2013-01-01

    An integrated measurement method for micro 3D surface shape by a combination of stereovision technology in a scanning electron microscope (SEM) and grids-processing methodology is proposed. The principle of the proposed method is introduced in detail. By capturing two images of the tested specimen with grids on the surface at different tilt angles in an SEM, the 3D surface shape of the specimen can be obtained. Numerical simulation is applied to analyze the feasibility of the proposed method. A validation experiment is performed here. The surface shape of the metal-wire/polymer-membrane structures with thermal deformation is reconstructed. By processing the surface grids of the specimen, the out-of-plane displacement field of the specimen surface is also obtained. Compared with the measurement results obtained by a 3D digital microscope, the experimental error of the proposed method is discussed (paper)

  9. Processing moldable tasks on the grid: Late job binding with lightweight user-level overlay

    CERN Document Server

    Moscicki, J T; Sloot, P M A; Lamanna, M

    2011-01-01

    Independent observations and everyday user experience indicate that performance and reliability of large grid infrastructures may suffer from large and unpredictable variations. In this paper we study the impact of the job queuing time on processing of moldable tasks which are commonly found in large-scale production grids. We use the mean value and variance of makespan as the quality of service indicators. We develop a general task processing model to provide a quantitative comparison between two models: early and late job binding in a user-level overlay applied to the EGEE Grid infrastructure. We find that the late-binding model effectively defines a transformation of the distribution of makespan according to the Central Limit Theorem. As demonstrated by Monte Carlo simulations using real job traces, this transformation allows the mean value and variance of makespan to be substantially reduced. For certain classes of applications task granularity may be adjusted such that a speedup of an order of magnitude or m...
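
    A toy Monte Carlo comparison of the two binding models is sketched below: with early binding every task waits in its own grid queue, whereas with late binding pilot jobs queue once and pull tasks as they start. The queue-time distribution, task runtime and pilot count are illustrative assumptions, not the real job traces used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_tasks, n_runs = 200, 2000
task_runtime = 10.0                         # minutes per moldable task (fixed, toy value)

def queue_times(n):
    # Heavy-tailed queuing delays (lognormal), mimicking unpredictable grid latency.
    return rng.lognormal(mean=3.0, sigma=1.0, size=n)

def early_binding():
    # Every task is a separate grid job: makespan set by the slowest queue + run.
    return np.max(queue_times(n_tasks) + task_runtime)

def late_binding(n_pilots=50):
    # Pilot jobs queue once; tasks are pulled from a shared queue as pilots become free.
    free_at = queue_times(n_pilots)         # pilots become available after their own queuing
    completed = 0.0
    for _ in range(n_tasks):                # greedy: next task goes to the earliest-free pilot
        i = np.argmin(free_at)
        free_at[i] += task_runtime
        completed = max(completed, free_at[i])
    return completed

early = np.array([early_binding() for _ in range(n_runs)])
late = np.array([late_binding() for _ in range(n_runs)])
print(f"early binding: mean {early.mean():7.1f}  std {early.std():7.1f}")
print(f"late binding : mean {late.mean():7.1f}  std {late.std():7.1f}")
```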

  10. Mosaic Process for the Fabrication of an Acoustic Transducer Array

    National Research Council Canada - National Science Library

    2005-01-01

    .... Deriving a geometric shape for the array based on the established performance level. Selecting piezoceramic materials based on considerations related to the performance level and derived geometry...

  11. Analysis the Transient Process of Wind Power Resources when there are Voltage Sags in Distribution Grid

    Science.gov (United States)

    Nhu Y, Do

    2018-03-01

    Vietnam has abundant wind power resources, and both the capacity and the number of wind power projects continue to grow. As the amount of wind power fed into the national grid increases, research and analysis are needed to ensure the safety and reliability of wind power connections. In the national distribution grid, voltage sags occur regularly and can strongly influence the operation of wind power plants; the most serious consequence is disconnection. The paper presents an analysis of the distribution grid's transient process when a voltage sag occurs. Based on the analysis, solutions are recommended to improve the reliability and effective operation of wind power resources.

  12. Incorporating Semantic Knowledge into Dynamic Data Processing for Smart Power Grids

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qunzhi; Simmhan, Yogesh; Prasanna, Viktor

    2012-11-15

    Semantic Web allows us to model and query time-invariant or slowly evolving knowledge using ontologies. Emerging applications in Cyber Physical Systems such as Smart Power Grids that require continuous information monitoring and integration present novel opportunities and challenges for Semantic Web technologies. Semantic Web is promising to model diverse Smart Grid domain knowledge for enhanced situation awareness and response by multi-disciplinary participants. However, current technology does pose a performance overhead for dynamic analysis of sensor measurements. In this paper, we combine semantic web and complex event processing for stream based semantic querying. We illustrate its adoption in the USC Campus Micro-Grid for detecting and enacting dynamic response strategies to peak power situations by diverse user roles. We also describe the semantic ontology and event query model that supports this. Further, we introduce and evaluate caching techniques to improve the response time for semantic event queries to meet our application needs and enable sustainable energy management.

  13. Blind Time-Frequency Analysis for Source Discrimination in Multisensor Array Processing

    National Research Council Canada - National Science Library

    Amin, Moeness

    1999-01-01

    .... We have clearly demonstrated, through analysis and simulations, the offerings of time-frequency distributions in solving key problems in sensor array processing, including direction finding, source...

  14. McRunjob: A High Energy Physics Workflow Planner for Grid Production Processing

    CERN Document Server

    Graham, G E; Bertram, I; Graham, Gregory E.; Evans, Dave; Bertram, Iain

    2003-01-01

    McRunjob is a powerful grid workflow manager used to manage the generation of large numbers of production processing jobs in High Energy Physics. In use at both the DZero and CMS experiments, McRunjob has been used to manage large Monte Carlo production processing since 1999 and is being extended to uses in regular production processing for analysis and reconstruction. Described at CHEP 2001, McRunjob converts core metadata into jobs submittable in a variety of environments. The powerful core metadata description language includes methods for converting the metadata into persistent forms, job descriptions, multi-step workflows, and data provenance information. The language features allow for structure in the metadata by including full expressions, namespaces, functional dependencies, site specific parameters in a grid environment, and ontological definitions. It also has simple control structures for parallelization of large jobs. McRunjob features a modular design which allows for easy expansion to new job d...

  15. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...

  16. Sampling phased array a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

    Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to wide application of phased array systems. A new phased array technique, called "Sampling Phased Array", has been developed at the Fraunhofer Institute for non-destructive testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling...

  17. New data processing technologies at LHC: From Grid to Cloud Computing and beyond

    International Nuclear Information System (INIS)

    De Salvo, A.

    2011-01-01

    For several years the LHC experiments at CERN have been successfully using Grid computing technologies for their distributed data processing activities, on a global scale. Recently, the experience gained with the current systems has allowed the design of the future Computing Models, involving new technologies like Cloud Computing, virtualization and high-performance distributed database access. In this paper we describe the new computational technologies of the LHC experiments at CERN, comparing them with the current models in terms of features and performance.

  18. McRunjob: A High Energy Physics Workflow Planner for Grid Production Processing

    OpenAIRE

    Graham, G E; Evans, D; Bertram, I

    2003-01-01

    McRunjob is a powerful grid workflow manager used to manage the generation of large numbers of production processing jobs in High Energy Physics. In use at both the DZero and CMS experiments, McRunjob has been used to manage large Monte Carlo production processing since 1999 and is being extended to uses in regular production processing for analysis and reconstruction. Described at CHEP 2001, McRunjob converts core metadata into jobs submittable in a variety of environments. The powerful core...

  19. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    Science.gov (United States)

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle arrays (MN) has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant; in both cases the insertion depth was approximately 60% of the needle length, and the height reduction after insertion was approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    Science.gov (United States)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Understanding scaling is therefore a key issue for advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage of handling a wide range of boundary and initial conditions compared with field experimentation. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues
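    To make the point-to-grid averaging concrete, the following Python sketch evaluates the Green-Ampt infiltration capacity at many points whose saturated conductivity is drawn from a lognormal distribution, caps it by an assumed rainfall rate, and averages the resulting fluxes over the grid cell. All parameter values, and the Monte Carlo averaging itself, are illustrative assumptions rather than the record's actual inverse-simulation procedure.

    ```python
    # Hedged sketch: averaging point-scale Green-Ampt infiltration over a grid cell
    # whose saturated conductivity Ks is lognormally distributed (assumed values).
    import numpy as np

    def green_ampt_capacity(Ks, psi, dtheta, F):
        """Point-scale Green-Ampt infiltration capacity f(F) = Ks * (1 + psi*dtheta/F)."""
        return Ks * (1.0 + psi * dtheta / F)

    rng = np.random.default_rng(0)
    Ks = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=10_000)  # mm/h, assumed heterogeneity
    psi, dtheta = 110.0, 0.3   # wetting-front suction (mm) and moisture deficit, assumed
    F, rain = 20.0, 10.0       # cumulative infiltration (mm) and rainfall rate (mm/h), assumed

    # Actual point infiltration is limited by both capacity and rainfall (non-linear in Ks)
    point_flux = np.minimum(rain, green_ampt_capacity(Ks, psi, dtheta, F))
    grid_flux = point_flux.mean()                                           # upscale by averaging the flow
    naive_flux = min(rain, green_ampt_capacity(Ks.mean(), psi, dtheta, F))  # single "effective" parameter

    print(f"grid-cell flux from averaged points: {grid_flux:.2f} mm/h")
    print(f"flux from grid-mean Ks             : {naive_flux:.2f} mm/h")
    ```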

  1. High aspect ratio silver grid transparent electrodes using UV embossing process

    Directory of Open Access Journals (Sweden)

    Dong Jin Kim

    2017-10-01

    This study presents a UV embossing process to fabricate high-aspect-ratio silver grid transparent electrodes on a polymer film. Transparent electrodes with a high optical transmittance (93 %) and low sheet resistance (4.6 Ω/sq) were fabricated without any high-temperature or vacuum processes. The strong adhesion force between the UV resin and the silver ink enables the fabrication of silver microstructures with an aspect ratio higher than 3. The high aspect ratio results in a low sheet resistance while maintaining a high optical transmittance. Multi-layer transparent electrodes were fabricated by repeating the proposed UV process. Additionally, a large-area 8-inch touch panel was fabricated with the proposed UV process. The proposed UV process is relatively simple and low cost, making it suitable for large-area production as well as mass production.

  2. Grid infrastructure for automatic processing of SAR data for flood applications

    Science.gov (United States)

    Kussul, Natalia; Skakun, Serhiy; Shelestov, Andrii

    2010-05-01

    More and more geoscience applications are being put onto Grids. Because these applications involve complex workflows, computationally intensive environmental models, and the management and integration of heterogeneous data sets, Grids offer solutions to tackle these problems. Many geoscience applications, especially those related to disaster management and mitigation, require geospatial services to be delivered in a timely manner. For example, information on flooded areas should be provided to the corresponding organizations (local authorities, civil protection agencies, UN agencies, etc.) within 24 h so that the resources required to mitigate the disaster can be allocated effectively. Therefore, providing an infrastructure and services that enable automatic generation of products based on the integration of heterogeneous data is a task of great importance. In this paper we present a Grid infrastructure for automatic processing of synthetic-aperture radar (SAR) satellite images to derive flood products. In particular, we use SAR data acquired by ESA's ENVISAT satellite, and neural networks to derive flood extent. The data are provided in operational mode from the ESA rolling archive (within an ESA Category-1 grant). We developed a portal that is based on the OpenLayers framework and provides an access point to the developed services. Through the portal the user can define a geographical region and search for the required data. Upon selection of data sets, a workflow is automatically generated and executed on the resources of the Grid infrastructure. For workflow execution and management we use the Karajan language. The workflow of SAR data processing consists of the following steps: image calibration, image orthorectification, image processing with neural networks, topographic effects removal, geocoding and transformation to lat/long projection, and visualisation. These steps are executed by different software, and can be
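    As a minimal illustration of how such a chain of steps can be composed programmatically, the Python sketch below strings the listed processing stages together as placeholder functions. The real system composes the steps with the Karajan workflow language on Grid resources, so every function body and name here is an assumption made only for illustration.

    ```python
    # Hedged sketch of the SAR flood-mapping workflow as a simple sequential pipeline.
    # The step functions are placeholders, not the actual processing tools.
    def calibrate(product):            return {"step": "calibrated", "data": product}
    def orthorectify(product):         return {"step": "orthorectified", "data": product}
    def classify_flood(product):       return {"step": "flood_mask", "data": product}   # neural network in the real system
    def remove_topographic(product):   return {"step": "topo_corrected", "data": product}
    def geocode_latlon(product):       return {"step": "geocoded", "data": product}
    def visualize(product):            print("publishing flood map:", product["step"])

    PIPELINE = [calibrate, orthorectify, classify_flood, remove_topographic, geocode_latlon]

    def run_workflow(raw_scene):
        """Run each processing step in order; each step consumes the previous output."""
        product = raw_scene
        for step in PIPELINE:
            product = step(product)
        visualize(product)
        return product

    run_workflow("ENVISAT_ASAR_scene_placeholder")
    ```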

  3. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    Science.gov (United States)

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for a variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented with the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) a load-balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improving MapReduce computation with a 7-fold reduction of wall time compared with a naïve scheme when datasets are relative
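    A rough sense of such a backend interface can be given as an abstract class whose method names follow the operations listed in the abstract (Upload, Retrieve, Remove, load balancing, MapReduce templates). The signatures and types below are illustrative assumptions in Python, not the actual HadoopBase-MIP API.

    ```python
    # Hedged interface sketch; method names follow the abstract, signatures are assumed.
    from abc import ABC, abstractmethod

    class MedicalImageStore(ABC):
        @abstractmethod
        def upload(self, subject_id: str, image_bytes: bytes) -> str:
            """Store an image and return its row key in the HBase-style table."""

        @abstractmethod
        def retrieve(self, row_key: str) -> bytes:
            """Fetch an image by row key."""

        @abstractmethod
        def remove(self, row_key: str) -> None:
            """Delete an image."""

        @abstractmethod
        def choose_node(self, row_key: str, node_capacities: dict) -> str:
            """Load balancer: pick a node, weighting by capacity in a heterogeneous cluster."""

        @abstractmethod
        def map_reduce(self, map_fn, reduce_fn, row_key_prefix: str):
            """Run a MapReduce-style summary statistic over images sharing a key prefix."""
    ```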

  4. Application of Seismic Array Processing to Tsunami Early Warning

    Science.gov (United States)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas, since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provide faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment, and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the EarthScope USArray Transportable Array. The results yield reasonable estimates of the rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 minutes, which could facilitate a timely tsunami warning. The predicted arrival times and wave amplitudes fit observations reasonably well. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, the Pacific Northwest and Alaska, where dense seismic networks with real-time data telemetry and open data accessibility, such as the Japanese Hi-net (>800
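    As an illustration of how a rupture-area estimate can be turned into a simple slip model, the Python sketch below applies one commonly used empirical moment-area scaling (Mw ≈ 4.0 + log10 A, with A in km²) together with the standard definition M0 = μAD. The coefficients and material values are assumptions, since the record does not state which scaling relation the authors adopted.

    ```python
    # Hedged sketch of building a simple slip model from an imaged rupture area.
    import math

    def slip_from_area(area_km2, rigidity_pa=30e9):
        mw = 4.0 + math.log10(area_km2)             # assumed empirical area scaling
        m0 = 10 ** (1.5 * mw + 9.1)                 # seismic moment in N*m (standard definition)
        area_m2 = area_km2 * 1e6
        avg_slip = m0 / (rigidity_pa * area_m2)     # M0 = mu * A * D  =>  D = M0 / (mu * A)
        return mw, m0, avg_slip

    # Example: an ellipse of roughly 450 km x 180 km (illustrative, Tohoku-sized)
    area = math.pi * (450 / 2) * (180 / 2)
    mw, m0, slip = slip_from_area(area)
    print(f"area = {area:.0f} km^2, Mw = {mw:.1f}, average slip = {slip:.1f} m")
    ```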

  5. Oxide nano-rod array structure via a simple metallurgical process

    International Nuclear Information System (INIS)

    Nanko, M; Do, D T M

    2011-01-01

    A simple method for fabricating an oxide nano-rod array structure via a metallurgical process is reported. Some dilute alloys, such as Ni(Al) solid solution, show internal oxidation with rod-like oxide precipitates during high-temperature oxidation at low oxygen partial pressure. By removing the metal part of the internal oxidation zone, an oxide nano-rod array structure can be developed on the surface of metallic components. In this report, Al2O3 or NiAl2O4 nano-rod array structures were prepared by using Ni(Al) solid solution. Effects of Cr addition to the Ni(Al) solid solution on internal oxidation are also reported. A pack cementation process for aluminizing the Ni surface was applied to prepare nano-rod array components with a desired shape. Near-net-shape Ni components with an oxide nano-rod array structure on their surface can be prepared by using the pack cementation process and internal oxidation.

  6. Enhancement of Efficiency and Reduction of Grid Thickness Variation on Casting Process with Lean Six Sigma Method

    Science.gov (United States)

    Witantyo; Setyawan, David

    2018-03-01

    In the lead-acid battery industry, grid casting is a process with high defect and thickness-variation levels. The DMAIC (Define-Measure-Analyse-Improve-Control) method and its tools are used to improve the casting process. In the Define stage, a project charter and the SIPOC (Supplier Input Process Output Customer) method are used to map the existing problem. In the Measure stage, data are collected on the types and number of defects and on the grid thickness variation; the retrieved data are then processed and analyzed using the 5 Whys and FMEA methods. In the Analyze stage, grids exhibiting fragile and crack-type defects are examined under a microscope, revealing the amount of Pb oxide inclusions in the grid. The analysis of the grid casting process shows an excessively large temperature difference between the molten metal and the mold, as well as a corking process that lacks a standard. In the Improve stage, corrective actions reduce the grid thickness variation and lower the defect/unit level from 9.184% to 0.492%. In the Control stage, a new working standard is established and the improved process is brought under control.

  7. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  8. Decreasing Data Analytics Time: Hybrid Architecture MapReduce-Massive Parallel Processing for a Smart Grid

    Directory of Open Access Journals (Sweden)

    Abdeslam Mehenni

    2017-03-01

    As our populations grow in a world of limited resources, enterprises seek ways to lighten our load on the planet. The idea of modifying consumer behavior appears as a foundation for smart grids. Enterprises demonstrate the value available from deep analysis of electricity consumption histories, consumers' messages, outage alerts, etc., mining massive structured and unstructured data. In a nutshell, smart grids result in a flood of data that needs to be analyzed in order to better adjust to demand and give customers more ability to delve into their power consumption. Simply put, smart grids will increasingly have a flexible data warehouse attached to them. The key driver for the adoption of data management strategies is clearly the need to handle and analyze the large amounts of information utilities are now faced with. New approaches to data integration are emerging; Hadoop is in fact now being used by utilities to help manage the huge growth in data whilst maintaining the coherence of the data warehouse. In this paper we define a new Meter Data Management System (MDMS) architecture repository that differs from three leading MDMSs, using the MapReduce programming model for ETL and a parallel DBMS for query statements (Massively Parallel Processing, MPP).
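    To make the ETL side of the proposed hybrid concrete, the Python sketch below performs a map/shuffle/reduce aggregation of raw meter readings into per-customer daily consumption, written as plain in-memory Python rather than an actual Hadoop job; the record layout is an illustrative assumption.

    ```python
    # Hedged sketch of a MapReduce-style aggregation of smart-meter readings.
    from collections import defaultdict

    readings = [
        # (customer_id, timestamp, kWh) -- illustrative records
        ("C1", "2016-07-01T00:00", 0.4), ("C1", "2016-07-01T01:00", 0.3),
        ("C2", "2016-07-01T00:00", 1.1), ("C2", "2016-07-01T01:00", 0.9),
    ]

    def map_phase(record):
        customer, timestamp, kwh = record
        day = timestamp.split("T")[0]
        return ((customer, day), kwh)               # key -> partial value

    def reduce_phase(key, values):
        return (key, sum(values))                   # total kWh per customer per day

    # shuffle: group mapped pairs by key
    groups = defaultdict(list)
    for key, value in map(map_phase, readings):
        groups[key].append(value)

    daily_totals = [reduce_phase(k, v) for k, v in groups.items()]
    print(daily_totals)   # loaded afterwards into the MPP warehouse for SQL analytics
    ```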

  9. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    Science.gov (United States)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment the 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at a European longitude, each with 400 12-meter downlink antennas, plus a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: providing a means to evaluate the performance of the Breadboard Array's antenna subsystem; designing and building prototype hardware; demonstrating and evaluating proposed signal processing techniques; and gaining experience with various technologies that may be used in the Large Array. Results are summarized.

  10. Efficient processing of two-dimensional arrays with C or C++

    Science.gov (United States)

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
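    The study itself concerns C and C++ source-code techniques; as a language-agnostic illustration of one of the factors it examines (traversal order versus the row-major memory layout of a 2-D array), the following Python/NumPy sketch times row-wise and column-wise sweeps over the same array. The array size and the exact speed ratio are illustrative only.

    ```python
    # Hedged illustration (in Python) of how traversal order interacts with memory layout.
    import time
    import numpy as np

    a = np.random.rand(3000, 3000)   # C-ordered (row-major) by default

    def time_it(fn):
        t0 = time.perf_counter()
        fn()
        return time.perf_counter() - t0

    row_wise = time_it(lambda: sum(a[i, :].sum() for i in range(a.shape[0])))  # contiguous slices
    col_wise = time_it(lambda: sum(a[:, j].sum() for j in range(a.shape[1])))  # strided slices

    print(f"row-wise sweep   : {row_wise:.3f} s")
    print(f"column-wise sweep: {col_wise:.3f} s  (slower: each slice strides across rows)")
    ```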

  11. MCRUNJOB: A High energy physics workflow planner for grid production processing

    International Nuclear Information System (INIS)

    Graham, Gregory E.

    2004-01-01

    McRunjob is a powerful grid workflow manager used to manage the generation of large numbers of production processing jobs in High Energy Physics. In use at both the DZero and CMS experiments, McRunjob has been used to manage large Monte Carlo production processing since 1999 and is being extended to uses in regular production processing for analysis and reconstruction. Described at CHEP 2001, McRunjob converts core metadata into jobs submittable in a variety of environments. The powerful core metadata description language includes methods for converting the metadata into persistent forms, job descriptions, multi-step workflows, and data provenance information. The language features allow for structure in the metadata by including full expressions, namespaces, functional dependencies, site specific parameters in a grid environment, and ontological definitions. It also has simple control structures for parallelization of large jobs. McRunjob features a modular design which allows for easy expansion to new job description languages or new application level tasks

  12. Process development for the manufacturing of state-of-the-art spacer grids

    Energy Technology Data Exchange (ETDEWEB)

    Schebitz, Florian; Dietrich, Matthias [Advanced Nuclear Fuels GmbH, Karlstein (Germany)

    2013-07-01

    At the beginning it was questioned whether 'time to market' is really important for the nuclear industry. The clear answer is YES. Even if development times might be longer compared to projects in other industries, it is still beneficial to use concurrent engineering. Within the worldwide network of manufacturing sites, Advanced Nuclear Fuels GmbH in Karlstein is quite often involved when the development of new processes is necessary. As ANF Karlstein delivers products around the world, its experience with different customer requirements supports an optimized solution that fulfills these principal requirements and delivers state-of-the-art products like spacer grids. Continuous feedback from process development already improves the first prototypes. In the meantime ANF Karlstein has manufactured the components for both new fuel assembly designs, which are being introduced as a first set of Lead Fuel Assemblies. For the manufacturing of the next sets of spacer grids (for tests and the next series of Lead Fuel Assemblies) the described processes will be used and further improved, so that an industrialized solution is available. (orig.)

  13. Sampling phased array - a new technique for ultrasonic signal processing and imaging

    OpenAIRE

    Verkooijen, J.; Boulavinov, A.

    2008-01-01

    Over the past 10 years, the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called 'Sampling Phased Array', has been developed in the Fraunhofer Institute for Non-Destructive Testing([1]). It realises a unique approach of measurement and processing of ultrasonic signals. Th...

  14. Sampling phased array, a new technique for ultrasonic signal processing and imaging now available to industry

    OpenAIRE

    Verkooijen, J.; Bulavinov, A.

    2008-01-01

    Over the past 10 years the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called "Sampling Phased Array" has been developed in the Fraunhofer Institute for non-destructive testing [1]. It realizes a unique approach of measurement and processing of ultrasonic signals. The s...

  15. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    International Nuclear Information System (INIS)

    Lavoie-Courchesne, S; Chouinard-Decorte, F; Doyon, J; Bellec, P; Rioux, P; Sherif, T; Rousseau, M-E; Das, S; Adalat, R; Evans, A C; Craddock, C; Margulies, D; Chu, C; Lyttelton, O

    2012-01-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  16. Studies of implosion processes of nested tungsten wire-array Z-pinch

    International Nuclear Information System (INIS)

    Ning Cheng; Ding Ning; Liu Quan; Yang Zhenhua

    2006-01-01

    A nested wire-array is a promising structured load because it can improve the quality of the Z-pinch plasma and enhance the radiation power of the X-ray source. Based on a zero-dimensional model, the assumption of wire-array collision, and the criterion of an optimized load (maximal load kinetic energy), optimization of a typical nested wire-array as a load of the Z machine at Sandia Laboratory was carried out. It was shown that the load has been basically optimized. The Z-pinch process of the typical load was numerically studied by means of a one-dimensional three-temperature radiation magneto-hydrodynamics (RMHD) code. The obtained results reproduce the dynamic process of the Z-pinch and show the implosion trajectory of the nested wire-array and the transfer process of drive current between the inner and outer arrays. The experimental and computational X-ray pulses were compared, and it is suggested that the assumption of wire-array collision is reasonable in nested wire-array Z-pinches, at least for the current level of the Z machine. (authors)

  17. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is markedly reduced to 1/3 of that of a conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design functions correctly.

  18. Improving SCADA security of a local process with a power grid model

    NARCIS (Netherlands)

    Chromik, Justyna Joanna; Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.

    Security of networks controlling smart grids is an important subject. The shift of the power grid towards a smart grid results in more distributed control functions, while intrusion detection of the control network mostly remains centrally based. Moreover, existing local (host-based) intrusion

  19. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    Science.gov (United States)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx. 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx. 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid-area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid-area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
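    A minimal sketch of the binning idea, in Python: the within-grid-cell wetness pdf is histogrammed into 10 bins and a non-linear stress function is integrated over the bins, instead of being evaluated at the grid-mean wetness. The beta-distributed wetness field and the piecewise-linear stress function are illustrative assumptions, not the paper's exact formulation.

    ```python
    # Hedged sketch of integrating a non-linear stress function over a binned wetness pdf.
    import numpy as np

    def stress(w, w_wilt=0.2, w_crit=0.6):
        """Non-linear wetness stress factor in [0, 1] (illustrative form)."""
        return np.clip((w - w_wilt) / (w_crit - w_wilt), 0.0, 1.0)

    rng = np.random.default_rng(1)
    wetness = rng.beta(2.0, 3.0, size=100_000)      # sub-grid wetness values (assumed field)

    # Bin the pdf (10 bins, matching the paper's "10 or more bins" accuracy statement)
    counts, edges = np.histogram(wetness, bins=10, range=(0.0, 1.0))
    weights = counts / counts.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])

    grid_stress_binned = np.sum(weights * stress(centers))   # integrate stress over binned pdf
    grid_stress_naive = stress(wetness.mean())               # single grid-average wetness

    print(f"binned-pdf stress : {grid_stress_binned:.3f}")
    print(f"grid-mean stress  : {grid_stress_naive:.3f}")
    ```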

  20. Signal and array processing techniques for RFID readers

    Science.gov (United States)

    Wang, Jing; Amin, Moeness; Zhang, Yimin

    2006-05-01

    Radio Frequency Identification (RFID) has recently attracted much attention in both the technical and business communities. It has found wide applications in, for example, toll collection, supply-chain management, access control, localization tracking, real-time monitoring, and object identification. Situations may arise where the movement directions of tagged RFID items through a portal are of interest and must be determined. Doppler estimation may prove complicated or impractical for RFID readers to perform. Several alternative approaches, including the use of an array of sensors with arbitrary geometry, can be applied. In this paper, we consider direction-of-arrival (DOA) estimation techniques for application to near-field narrowband RFID problems. Particularly, we examine the use of a pair of RFID antennas to track moving RFID-tagged items through a portal. With two antennas, the near-field DOA estimation problem can be simplified to a far-field problem, yielding a simple way of identifying the direction of the tag movement, where only one parameter, the angle, needs to be considered. In this case, tracking the moving direction of the tag simply amounts to computing the spatial cross-correlation between the data samples received at the two antennas. It is pointed out that the radiation patterns of the reader and tag antennas, particularly their phase characteristics, have a significant effect on the performance of DOA estimation. Indoor experiments are conducted in the Radar Imaging and RFID Labs at Villanova University to validate the proposed technique for target movement direction estimation.
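    The two-antenna idea can be sketched in a few lines of Python: the spatial cross-correlation of the two received signals yields a phase difference, which maps to a single arrival angle under a far-field narrowband model. The tag signal model, carrier frequency and antenna spacing below are illustrative assumptions.

    ```python
    # Hedged sketch of angle estimation from the cross-correlation of two antenna signals.
    import numpy as np

    c = 3e8
    f = 915e6                      # UHF RFID carrier (illustrative)
    lam = c / f
    d = 0.4 * lam                  # antenna spacing < lambda/2 avoids phase ambiguity
    true_theta = np.deg2rad(25.0)  # tag direction to recover

    rng = np.random.default_rng(2)
    n = 2000
    s = np.exp(1j * 2 * np.pi * rng.random(n))            # narrowband tag signal samples
    noise = 0.05 * (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n)))

    x1 = s + noise[0]
    x2 = s * np.exp(-1j * 2 * np.pi * d * np.sin(true_theta) / lam) + noise[1]

    r12 = np.mean(x1 * np.conj(x2))                       # spatial cross-correlation
    phase = np.angle(r12)
    theta_est = np.arcsin(phase * lam / (2 * np.pi * d))

    print(f"estimated direction: {np.rad2deg(theta_est):.1f} deg (true 25.0 deg)")
    ```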

  1. Simulating multi-scale oceanic processes around Taiwan on unstructured grids

    Science.gov (United States)

    Yu, Hao-Cheng; Zhang, Yinglong J.; Yu, Jason C. S.; Terng, C.; Sun, Weiling; Ye, Fei; Wang, Harry V.; Wang, Zhengui; Huang, Hai

    2017-11-01

    We validate a 3D unstructured-grid (UG) model for simulating multi-scale processes occurring in the Northwestern Pacific around Taiwan using recently developed techniques (Zhang et al., Ocean Modeling, 102, 64-81, 2016) that require no bathymetry smoothing, even for this region with prevalent steep bottom slopes and many islands. The focus is on short-term forecasts over several months rather than long-term variability. Compared with satellite products, the errors for the simulated Sea-surface Height (SSH) and Sea-surface Temperature (SST) are similar to those of a reference data-assimilated global model. In the nearshore region, comparison with 34 tide gauges located around Taiwan indicates an average RMSE of 13 cm for the tidal elevation. The average RMSE for SST at 6 coastal buoys is 1.2 °C. The mean transport and eddy kinetic energy compare reasonably with previously published values and with the reference model used to provide boundary and initial conditions. The model suggests a ∼2-day interruption of the Kuroshio east of Taiwan during a typhoon period. The effect of tidal mixing is shown to be significant nearshore. The multi-scale model is easily extendable to target regions of interest due to its UG framework and a flexible vertical gridding system, which is shown to be superior to terrain-following coordinates.

  2. The numerical solution of thawing process in phase change slab using variable space grid technique

    Directory of Open Access Journals (Sweden)

    Serttikul, C.

    2007-09-01

    This paper focuses on the numerical analysis of the melting process in a phase change material, with the moving boundary as the main parameter. In this study, a pure ice slab and a saturated porous packed bed are considered as the phase change material. The formulation consists of partial differential equations: heat conduction equations in each phase and the moving boundary equation (Stefan condition). The variable space grid method is then applied to these equations. The transient heat conduction equations and the Stefan condition are solved using the finite difference method. A one-dimensional melting model is then validated against the available analytical solution. The effect of a constant-temperature heat source on the melting rate and the location of the melting front at various times is studied in detail. It is found that the melting rate is non-linear only over a short initial period. The successful comparison between the numerical and analytical solutions should give confidence in the proposed mathematical treatment, and encourage the acceptance of this method as a useful tool for exploring practical problems such as material forming processes, ice melting, food preservation and tissue preservation.
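    The "available analytical solution" used for validation is not spelled out in the record; a common reference for such one-dimensional melting problems is the classical one-phase Stefan (Neumann) solution, sketched below in Python with generic water/ice-like properties assumed for illustration.

    ```python
    # Hedged sketch of the one-phase Stefan (Neumann) solution: the melt front is
    # s(t) = 2*lam*sqrt(alpha*t), with lam from a transcendental equation.
    import math

    def stefan_lambda(ste, lo=1e-6, hi=5.0, iters=100):
        """Solve lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi) by bisection."""
        f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - ste / math.sqrt(math.pi)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    # Stefan number Ste = cp * (T_wall - T_melt) / L  (illustrative values)
    cp, latent, dT = 4186.0, 334e3, 10.0          # J/(kg K), J/kg, K
    alpha = 1.4e-7                                 # thermal diffusivity, m^2/s (assumed)
    ste = cp * dT / latent

    lam = stefan_lambda(ste)
    t = 3600.0                                     # one hour
    front = 2.0 * lam * math.sqrt(alpha * t)
    print(f"Ste = {ste:.3f}, lambda = {lam:.3f}, melt front after 1 h = {front*1000:.1f} mm")
    ```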

  3. APD arrays and large-area APDs via a new planar process

    CERN Document Server

    Farrell, R; Vanderpuye, K; Grazioso, R; Myers, R; Entine, G

    2000-01-01

    A fabrication process has been developed which allows the beveled-edge type of avalanche photodiode (APD) to be made without the need for the artful bevel formation steps. This new process, applicable to both APD arrays and to discrete detectors, greatly simplifies manufacture and should lead to significant cost reduction for such photodetectors. This is achieved through a simple innovation that allows isolation around the device or array pixel to be brought into the plane of the surface of the silicon wafer, hence a planar process. A description of the new process is presented along with performance data for a variety of APD device and array configurations. APD array pixel gains in excess of 10 000 have been measured. Array pixel coincidence timing resolution of less than 5 ns has been demonstrated. An energy resolution of 6% for 662 keV gamma-rays using a CsI(Tl) scintillator on a planar-processed large-area APD has been recorded. Discrete APDs with active areas up to 13 cm² have been operated.

  4. Assembly and Integration Process of the First High Density Detector Array for the Atacama Cosmology Telescope

    Science.gov (United States)

    Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Wollack, Edward J.

    2016-01-01

    The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer leading to both a much denser detector density and readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for the next generation CMB experiments, which will continue to increase the pixel number and density. We present the detailed assembly process of the first AdvACT detector array.

  5. Using adaptive antenna array in LTE with MIMO for space-time processing

    Directory of Open Access Journals (Sweden)

    Abdourahamane Ahmed Ali

    2015-04-01

    Methods for improving existing wireless transmission systems are proposed. The mathematical apparatus for using an adaptive antenna array in LTE with MIMO for space-time processing is considered and supported by models, whose graphs are shown. The results show that the improvements associated with space-time processing have a positive effect on LTE cell size and throughput

  6. A Wireless and Batteryless Microsystem with Implantable Grid Electrode/3-Dimensional Probe Array for ECoG and Extracellular Neural Recording in Rats

    Directory of Open Access Journals (Sweden)

    Chih-Wei Chang

    2013-04-01

    This paper presents the design and implementation of an integrated wireless microsystem platform that can support versatile implantable neural sensing devices in freely moving laboratory rats. Inductively coupled coils with a low-dropout regulator design allow true long-term recording without the limitation of battery capacity. A 16-channel analog front-end chip located on the headstage is designed for high-channel-count neural signal conditioning with low current consumption and noise. Two types of implantable electrodes, a grid electrode and a 3D probe array, are also presented for brain surface recording and 3D biopotential acquisition in the implanted target volume of tissue. The overall system consumes less than 20 mA, has a small form factor (3.9 × 3.9 cm2 mainboard and 1.8 × 3.4 cm2 headstage), and is packaged into a backpack for rats. Practical in vivo recordings, including auditory responses, brain resection tissue and PTZ-induced seizures, demonstrate the correct function of the proposed microsystem. The presented achievements address the aforementioned properties by combining MEMS neural sensors, low-power circuit designs and commercial chips into a system-level integration.

  7. Grid synchronization for advanced power processing and FACTS in wind power systems

    DEFF Research Database (Denmark)

    Luna, A.; Rocabert, J.; Vazquez, G.

    2010-01-01

    The high penetration of wind power systems in the electrical network has introduced new issues in the stability and transient operation of the grid. By providing advanced functionalities to the existing power converters of such power plants it is possible to enhance their performance... and also to support the grid operation, as the new grid codes demand. The connection of FACTS based on power converters, such as STATCOMs, is also contributing to the integration of renewable energies, improving their behavior under contingencies. However, in both cases it is necessary to have a grid voltage...

  8. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented to measure the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge-sensitive amplifiers. A prototype of the processing unit is implemented using commercial field programmable gate array circuit boards. The FPGA-based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicate that the FPGA-based signal processing unit has good linearity and is flexible for parameter adjustment under various experimental conditions (authors)

  9. Characterization of diffusivity based on spherical array processing

    DEFF Research Database (Denmark)

    Nolan, Melanie; Fernandez Grande, Efren; Jeong, Cheol-Ho

    2015-01-01

    -dimensional domain and consequently examine some of its fundamental properties: spatial distribution of sound pressure levels, particle velocity and sound intensity. The study allows for visualization of the intensity field inside a reverberant space, and successfully illustrates the behavior of the sound field...... in such an environment. This initial investigation shows the validity of the suggested processing and reveals interesting perspectives for future work. Ultimately, the aim is to define a proper and reliable measure of the diffuse sound field conditions in a reverberation chamber, with the prospect of improving...

  10. A multi-step electrochemical etching process for a three-dimensional micro probe array

    International Nuclear Information System (INIS)

    Kim, Yoonji; Youn, Sechan; Cho, Young-Ho; Park, HoJoon; Chang, Byeung Gyu; Oh, Yong Soo

    2011-01-01

    We present a simple, fast, and cost-effective process for three-dimensional (3D) micro probe array fabrication using multi-step electrochemical metal foil etching. Compared to the previous electroplating (add-on) process, the present electrochemical (subtractive) process results in well-controlled material properties of the metallic microstructures. In the experimental study, we describe the single-step and multi-step electrochemical aluminum foil etching processes. In the single-step process, the depth etch rate and the bias etch rate of an aluminum foil have been measured as 1.50 ± 0.10 and 0.77 ± 0.03 µm min⁻¹, respectively. On the basis of the single-step process results, we have designed and performed the two-step electrochemical etching process for the 3D micro probe array fabrication. The fabricated 3D micro probe array shows vertical and lateral fabrication errors of 15.5 ± 5.8% and 3.3 ± 0.9%, respectively, with a surface roughness of 37.4 ± 9.6 nm. The contact force and the contact resistance of the 3D micro probe array have been measured to be 24.30 ± 0.98 mN and 2.27 ± 0.11 Ω, respectively, for an overdrive of 49.12 ± 1.25 µm.

  11. DBPM signal processing with field programmable gate arrays

    International Nuclear Information System (INIS)

    Lai Longwei; Yi Xing; Zhang Ning; Yang Guisen; Wang Baopeng; Xiong Yun; Leng Yongbin; Yan Yingbing

    2011-01-01

    DBPM system performance is determined by the design and implementation of the beam position signal processing algorithm. In order to develop the system, a beam position signal processing algorithm is implemented on an FPGA. The hardware is a PMC board ICS-1554A-002 (GE Corp.) with an FPGA chip XC5VSX95T. This paper adopts quadrature frequency mixing to down-convert the high-frequency signal to baseband. Different from the conventional method, the mixing is implemented by the CORDIC algorithm. The algorithm theory and implementation details are discussed in this paper. As the board contains no front-end gain controller, this paper introduces a published patent-pending technique that has been adopted to realize the function in digital logic. The whole design is implemented in the VHDL language. An on-line evaluation has been carried out on the SSRF (Shanghai Synchrotron Radiation Facility) storage ring. Results indicate that the turn-by-turn data measure the real beam movement accurately, and the system resolution is 1.1 μm. (authors)
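    The down-conversion step can be sketched in Python as a plain quadrature mix followed by a low-pass filter and a magnitude computation. The FPGA design in the record performs the mixing with CORDIC rotations, whereas trigonometric functions are used here directly, and all frequencies and amplitudes are illustrative assumptions.

    ```python
    # Hedged sketch of quadrature down-conversion of a sampled BPM-like tone to baseband.
    import numpy as np

    fs = 117.0e6                 # sample rate (illustrative)
    f_c = 23.0e6                 # tone frequency after sampling (illustrative)
    n = np.arange(4096)
    amplitude, phase = 0.73, 0.4
    signal = amplitude * np.cos(2 * np.pi * f_c * n / fs + phase)

    # Quadrature mixing to baseband (CORDIC would compute these rotations on the FPGA)
    i_mix = signal * np.cos(2 * np.pi * f_c * n / fs)
    q_mix = signal * -np.sin(2 * np.pi * f_c * n / fs)

    # Simple moving-average low-pass to remove the 2*f_c component
    def lowpass(x, taps=64):
        return np.convolve(x, np.ones(taps) / taps, mode="valid")

    i_bb, q_bb = lowpass(i_mix), lowpass(q_mix)
    recovered = 2.0 * np.sqrt(i_bb.mean() ** 2 + q_bb.mean() ** 2)
    print(f"recovered amplitude: {recovered:.3f} (injected {amplitude})")
    ```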

  12. High-resolution imaging methods in array signal processing

    DEFF Research Database (Denmark)

    Xenaki, Angeliki

    in active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak. The submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values which cause weak scattering... of the incident acoustic energy. A high-frequency active sonar is selected to insonify the medium and receive the backscattered waves. High-frequency acoustic methods can both overcome the optical opacity of water (unlike methods based on electromagnetic waves) and resolve the small-scale structure... of the submerged oil field (unlike low-frequency acoustic methods). The study shows that high-frequency acoustic methods are suitable not only for large-scale localization of the oil contamination in the water column but also for statistical characterization of the submerged oil field through inference

  13. Data array acquisition and joint processing in local plasma spectroscopy

    International Nuclear Information System (INIS)

    Ekimov, K.; Luizova, L.; Soloviev, A.; Khakhaev, A.

    2005-01-01

    The setup and software for optical emission spectroscopy with spatial and temporal resolution were developed. The automated installation includes LabView-compatible instrument interfaces. The algorithm for joint data processing is based on the principal component method; it increases the stability of the results of the radial transform and eliminates instrument distortion in the presence of noise. The system is applied to diagnostics of an arc discharge in mercury vapor with the addition of thallium. The distributions of ground-state and excited mercury atoms, excited thallium atoms and electron density over the arc cross section have been measured on the basis of an analysis of spectral line shapes. The Saha balance between the electron density and the densities of high-lying excited states was checked. An unexpected broadening of some thallium spectral lines was found.

  14. Numerical Simulation of the Diffusion Processes in Nanoelectrode Arrays Using an Axial Neighbor Symmetry Approximation.

    Science.gov (United States)

    Peinetti, Ana Sol; Gilardoni, Rodrigo S; Mizrahi, Martín; Requejo, Felix G; González, Graciela A; Battaglini, Fernando

    2016-06-07

    Nanoelectrode arrays have introduced a completely new battery of devices with fascinating electrocatalytic, sensitivity, and selectivity properties. To understand and predict the electrochemical response of these arrays, a theoretical framework is needed. Cyclic voltammetry is a well-suited experimental technique for understanding the underlying diffusion and kinetic processes. Previous works describing microelectrode arrays have exploited the interelectrode distance to simulate their behavior as the summation of individual electrodes. This approach becomes limited when the size of the electrodes decreases to the nanometer scale, due to their strong radial effect and the consequent overlapping of the diffusional fields. In this work, we present a computational model able to simulate the electrochemical behavior of arrays working either as the summation of individual electrodes or affected by the overlapping of the diffusional fields, without prior assumptions. Our computational model relies on dividing a regular electrode array into cells. In each of them, there is a central electrode surrounded by neighboring electrodes; these neighboring electrodes are transformed into a ring maintaining the same active electrode area as the summation of the closest neighboring electrodes. Using this axial neighbor symmetry approximation, the problem acquires a cylindrical symmetry, making it applicable to any diffusion pattern. The model is validated against micro- and nanoelectrode arrays, showing its ability to predict their behavior and therefore to be used as a design tool.

  15. Urban runoff (URO) process for MODFLOW 2005: simulation of sub-grid scale urban hydrologic processes in Broward County, FL

    Science.gov (United States)

    Decker, Jeremy D.; Hughes, J.D.

    2013-01-01

    Climate change and sea-level rise could cause substantial changes in urban runoff and flooding in low-lying coastal landscapes. A major challenge for local government officials and decision makers is to translate the potential global effects of climate change into actionable and cost-effective adaptation and mitigation strategies at county and municipal scales. A MODFLOW process is used to represent sub-grid scale hydrology in urban settings to help address these issues. Coupled interception, surface-water, depression, and unsaturated-zone storage are represented. A two-dimensional diffusive wave approximation is used to represent overland flow. Three different options for representing infiltration and recharge are presented. Additional features include structure, barrier, and culvert flow between adjacent cells, specified stage boundaries, critical flow boundaries, source/sink surface-water terms, and bi-directional runoff to the MODFLOW Surface-Water Routing process. Some capabilities of the Urban RunOff (URO) process are demonstrated with a synthetic problem using four land uses and varying cell coverages. Precipitation from a hypothetical storm was applied, and cell-by-cell surface-water depth, groundwater level, infiltration rate, and groundwater recharge rate are shown. Results indicate the URO process can produce time-varying, water-content-dependent infiltration and leakage, and successfully interacts with MODFLOW.

  16. Can the BestGrid Process Improve Stakeholder Involvement in Electricity Transmission Projects?

    Directory of Open Access Journals (Sweden)

    Nadejda Komendantova

    2015-08-01

    The European Union has set ambitious targets for deployment of renewable energy sources to reach the goals of climate change mitigation and energy security policies. However, the current state of electricity transmission infrastructure is a major bottleneck for further scaling up of renewable energy in the EU. Several thousand kilometers of new lines have to be constructed and upgraded to accommodate growing volumes of intermittent renewable electricity. In many countries, construction of electricity transmission projects has been delayed for several years due to concerns of local stakeholders. The innovative BESTGRID approach, reported here, brings together transmission system operators (TSOs) and non-governmental organizations (NGOs) to discuss and understand the nature of stakeholder concerns. This paper has three objectives: (1) to understand stakeholder concerns about the deployment of electricity transmission grids in four pilot projects according to five guiding principles: need, transparency, engagement, environment, and impacts on human health as well as benefits; (2) to understand how these principles can be addressed to provide a basis for better decision-making outcomes; and (3) to evaluate the BESTGRID process based on feedback received from stakeholders and the level of participation achieved according to the ladder of Arnstein. This paper goes beyond a discussion of "measures to mitigate opposition" to understand how dialogue between TSOs and the public, represented mainly by NGOs and policy-makers, might lead to a better decision-making process and more sustainable electricity transmission infrastructure deployment.

  17. Modeling and Implementing a Digitally Embedded Maximum Power Point Tracking Algorithm and a Series-Loaded Resonant DC-DC Converter to Integrate a Photovoltaic Array with a Micro-Grid

    Science.gov (United States)

    2014-09-01

    These renewable energy sources can include solar, wind, geothermal, biomass, hydroelectric, and nuclear. Of these sources, photovoltaic (PV) arrays...
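    The indexed snippet does not state which MPPT algorithm the thesis implements; perturb-and-observe is one common choice, and the Python sketch below shows it hill-climbing on a toy PV power-voltage curve. Both the curve and the step size are purely illustrative assumptions.

    ```python
    # Hedged sketch of a perturb-and-observe MPPT loop on a toy PV curve.
    def pv_power(v):
        """Toy PV power-voltage curve with a single maximum near 27 V (illustrative)."""
        i_sc, v_oc = 8.0, 36.0
        current = i_sc * (1.0 - (v / v_oc) ** 8)      # crude fill-factor-like roll-off
        return max(v * current, 0.0)

    def perturb_and_observe(v0=20.0, step=0.25, iterations=200):
        v, p_prev, direction = v0, pv_power(v0), +1.0
        for _ in range(iterations):
            v += direction * step                     # perturb the operating voltage
            p = pv_power(v)
            if p < p_prev:                            # power dropped: reverse direction
                direction = -direction
            p_prev = p
        return v, p_prev

    v_mpp, p_mpp = perturb_and_observe()
    print(f"operating point settles near {v_mpp:.1f} V, {p_mpp:.0f} W")
    ```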

  18. The MammoGrid Project Grids Architecture

    CERN Document Server

    McClatchey, Richard; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri; Buncic, Predrag; Clatchey, Richard Mc; Buncic, Predrag; Manset, David; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri

    2003-01-01

    The aim of the recently EU-funded MammoGrid project is, in the light of emerging Grid technology, to develop a European-wide database of mammograms that will be used to develop a set of important healthcare applications and investigate the potential of this Grid to support effective co-working between healthcare professionals throughout the EU. The MammoGrid consortium intends to use a Grid model to enable distributed computing that spans national borders. This Grid infrastructure will be used for deploying novel algorithms as software directly developed or enhanced within the project. Using the MammoGrid clinicians will be able to harness the use of massive amounts of medical image data to perform epidemiological studies, advanced image processing, radiographic education and ultimately, tele-diagnosis over communities of medical "virtual organisations". This is achieved through the use of Grid-compliant services [1] for managing (versions of) massively distributed files of mammograms, for handling the distri...

  19. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    Science.gov (United States)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  20. Assessment of Measurement Distortions in GNSS Antenna Array Space-Time Processing

    Directory of Open Access Journals (Sweden)

    Thyagaraja Marathe

    2016-01-01

    Antenna array processing techniques are studied in GNSS as effective tools to mitigate interference in the spatial and spatiotemporal domains. However, without specific considerations, the array processing results in biases and distortions in the cross-ambiguity function (CAF) of the ranging codes. In space-time processing (STP) the CAF misshaping can happen due to the combined effect of space-time processing and the unintentional signal attenuation by filtering. This paper focuses on characterizing these degradations for different controlled signal scenarios and for live data from an antenna array. The antenna array simulation method introduced in this paper enables one to perform accurate analyses in the field of STP. The effects of the relative placement of the interference source with respect to the desired signal direction are shown using overall measurement errors and the profile of the signal strength. Analyses of contributions from each source of distortion are conducted individually and collectively. Effects of distortions on GNSS pseudorange errors and position errors are compared for blind, semi-distortionless, and distortionless beamforming methods. The results from characterization can be useful for designing low-distortion filters that are especially important for high-accuracy GNSS applications in challenging environments.

  1. Solution processed bismuth sulfide nanowire array core/silver sulfide shell solar cells

    NARCIS (Netherlands)

    Cao, Y.; Bernechea, M.; Maclachlan, A.; Zardetto, V.; Creatore, M.; Haque, S.A.; Konstantatos, G.

    2015-01-01

    Low bandgap inorganic semiconductor nanowires have served as building blocks in solution processed solar cells to improve their power conversion capacity and reduce fabrication cost. In this work, we first reported bismuth sulfide nanowire arrays grown from colloidal seeds on a transparent

  2. Increasing the specificity and function of DNA microarrays by processing arrays at different stringencies

    DEFF Research Database (Denmark)

    Dufva, Martin; Petersen, Jesper; Poulsen, Lena

    2009-01-01

    DNA microarrays have for a decade been the only platform for genome-wide analysis and have provided a wealth of information about living organisms. DNA microarrays are processed today under one condition only, which puts large demands on assay development because all probes on the array need to f...

  3. Solution-processed single-wall carbon nanotube transistor arrays for wearable display backplanes

    Directory of Open Access Journals (Sweden)

    Byeong-Cheol Kang

    2018-01-01

    In this paper, we demonstrate solution-processed single-wall carbon nanotube thin-film transistor (SWCNT-TFT) arrays with polymeric gate dielectrics on polymeric substrates for wearable display backplanes, which can be directly attached to the human body. The optimized SWCNT-TFTs without any buffer layer on flexible substrates exhibit a linear field-effect mobility of 1.5 cm2/V·s and a threshold voltage of around 0 V. The statistical plot of the key device metrics extracted from 35 SWCNT-TFTs, which were fabricated in different batches at different times, conclusively supports that we have demonstrated high-performance solution-processed SWCNT-TFT arrays with the excellent uniformity in device performance that such applications demand. We also investigate the operational stability of wearable SWCNT-TFT arrays against applied strains of up to 40%, which covers the harsh degrees of strain encountered on the human body. We believe that the demonstration of flexible SWCNT-TFT arrays fabricated entirely by solution processing, except for the deposition of the metal electrodes, at process temperatures below 130 °C can open up new routes for wearable display backplanes.

  4. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    Science.gov (United States)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first class citizens. SciQL lowers the entrance fee of adopting relational DBMS (RDBMS) in scientific domains, because it includes functionality often only found in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope. In particular, how the LOFAR light-curve database of all detected sources can be constructed, by correlating sources across the spatial, frequency, time and polarisation domains.

  5. Reaching for the cloud: on the lessons learned from grid computing technology transfer process to the biomedical community.

    Science.gov (United States)

    Mohammed, Yassene; Dickmann, Frank; Sax, Ulrich; von Voigt, Gabriele; Smith, Matthew; Rienhoff, Otto

    2010-01-01

    Natural scientists such as physicists pioneered the sharing of computing resources, which led to the creation of the Grid. The inter-domain transfer process of this technology has hitherto been an intuitive process without in-depth analysis. Some difficulties facing the life science community in this transfer can be understood using Bozeman's "Effectiveness Model of Technology Transfer". Bozeman's and classical technology transfer approaches deal with technologies which have achieved a certain stability. Grid and Cloud solutions are technologies which are still in flux. We show how Grid computing creates new difficulties in the transfer process that are not considered in Bozeman's model. We show why the success of healthgrids should be measured by the qualified scientific human capital and the opportunities created, and not primarily by the market impact. We conclude with recommendations that can help improve the adoption of Grid and Cloud solutions by the biomedical community. These results give a more concise explanation of the difficulties many life science IT projects are facing in their late funding periods, and show leveraging steps that can help in overcoming the "vale of tears".

  6. Hybrid Orbital and Numerical Grid Representation for Electronic Continuum Processes: Double Photoionization of Atomic Beryllium

    Energy Technology Data Exchange (ETDEWEB)

    Yip, Frank L; McCurdy, C. William; Rescigno, Thomas N

    2010-04-19

    A general approach for ab initio calculations of electronic continuum processes is described in which the many-electron wave function is expanded using a combination of orbitals at short range and the finite-element discrete variable representation (FEM-DVR) at larger distances. The orbital portion of the basis allows the efficient construction of many-electron configurations in which some of the electrons are bound, but because the orbitals are constructed from an underlying FEM-DVR grid, the calculation of two-electron integrals retains the efficiency of the primitive FEM-DVR approach. As an example, double photoionization of beryllium is treated in a calculation in which the 1s² core is frozen. This approach extends the use of exterior complex scaling (ECS) successfully applied to helium and H₂ to calculations with two active electrons on more complicated targets. Integrated, energy-differential and triply-differential cross sections are exhibited, and the results agree well with other theoretical investigations.

  7. Comparison of Field Measurements and EMT Simulation Results on a Multi-Level STATCOM for Grid Integration of London Array Wind Power Plant

    DEFF Research Database (Denmark)

    Glasdam, Jakob; Kocewiak, Łukasz Hubert; Hjerrild, Jesper

    2014-01-01

    Simulation results are widely used in the design of electrical systems such as offshore wind power plants (OWPPs) and for determination of grid compliance. Measurements constitute an important part in the evaluation process of the OWPP, including passive and active components such as the static...... of the STATCOM for wind power integration, as well as of the validity of applying a generic model of the STATCOM without knowledge of the actual implemented control system. The proposed model is integrated into an aggregated EMT model of LAOWPP, which will be used to investigate possible resonance phenomena...... that will be shown in the paper to affect the harmonic distortion level. The STATCOM distortion level will be shown to be highly affected by the number of wind turbine generators (WTGs) in service. It will be shown that the inclusion of band rejection filters (BRFs) in the WTGs’ control loop lowers the STATCOM...

  8. Frequency Diverse Array Radar Signal Processing via Space-Range-Doppler Focus (SRDF) Method

    Directory of Open Access Journals (Sweden)

    Chen Xiaolong

    2018-04-01

    To meet the urgent demand for low-observable moving target detection in complex environments, a novel Frequency Diverse Array (FDA) radar signal processing method based on Space-Range-Doppler Focusing (SRDF) is proposed in this paper. The current development status of FDA radar, the design of the array structure, beamforming, and joint estimation of distance and angle are systematically reviewed. The extra degrees of freedom provided by FDA radar are fully utilized; these include the Degrees Of Freedom (DOFs) of the transmitted waveform, the location of the array elements, the correlation of beam azimuth and distance, and the long dwell time, which are also DOFs in the joint spatial (angle, distance) and frequency (Doppler) dimensions. Simulation results show that the proposed method has the potential to improve target detection and parameter estimation for weak moving targets in complex environments and has broad application prospects in clutter and interference suppression, moving target refinement, etc.

  9. Processing and display of three-dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Antoine, M.; Darier, P.

    1986-04-01

    The analysis of three-dimensional (3-D) arrays of numerical data from medical, industrial or scientific imaging, by synthetic generation of realistic images, has been widely developed. Octree encoding, which organizes the volume data in a hierarchical tree structure, has some interesting features for processing 3-D arrays of data. The Octree encoding method, based on the recursive subdivision of a 3-D array, is an extension of Quadtree encoding in the two-dimensional plane. We have developed a software package to validate the basic Octree encoding methodology for some manipulation and display operations on volume data. The contribution introduces the technique we have used (called the "overlay technique") to perform the projection of an Octree onto a Quadtree-encoded image plane. The application of this technique to hidden-surface display is presented [fr]
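
    The recursive subdivision at the heart of Octree encoding is easy to illustrate. The following Python sketch (a minimal illustration assuming a cubic array whose side is a power of two, not the authors' software package) encodes homogeneous sub-volumes as single leaves and splits mixed sub-volumes into eight children:

        import numpy as np

        def build_octree(volume, origin=(0, 0, 0)):
            """Recursively encode a cubic numpy array as a nested octree.

            A node is either ('leaf', origin, size, value) for a homogeneous
            sub-volume, or ('node', origin, size, children) with eight children
            obtained by halving the cube along each axis.
            """
            size = volume.shape[0]
            if np.all(volume == volume.flat[0]):      # homogeneous block -> leaf
                return ('leaf', origin, size, volume.flat[0].item())
            half = size // 2
            children = []
            for dz in (0, half):
                for dy in (0, half):
                    for dx in (0, half):
                        sub = volume[dz:dz + half, dy:dy + half, dx:dx + half]
                        child_origin = (origin[0] + dz, origin[1] + dy, origin[2] + dx)
                        children.append(build_octree(sub, child_origin))
            return ('node', origin, size, children)

        # Toy 8x8x8 volume: a dense corner inside an empty cube.
        vol = np.zeros((8, 8, 8), dtype=np.uint8)
        vol[:4, :4, :4] = 1
        tree = build_octree(vol)
        print(tree[0], 'with', len(tree[3]), 'children')   # -> node with 8 children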

  10. Seismic array processing and computational infrastructure for improved monitoring of Alaskan and Aleutian seismicity and volcanoes

    Science.gov (United States)

    Lindquist, Kent Gordon

    We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield M_L calibration curves (log A_0) that differ from the Californian Richter magnitude. The new curve is 0.2 M_L units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 M_L. A curve for deep events differs by 0.6 M_L at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period, Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an M_L 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion

  11. Global Precipitation Measurement (GPM) Mission: Precipitation Processing System (PPS) GPM Mission Gridded Text Products Provide Surface Precipitation Retrievals

    Science.gov (United States)

    Stocker, Erich Franz; Kelley, O.; Kummerow, C.; Huffman, G.; Olson, W.; Kwiatkowski, J.

    2015-01-01

    In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual-frequency radar (DPR) which operates at Ku frequency, similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar, and at a new Ka frequency. The Precipitation Processing System (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, from both radars including a dual-frequency product, and a combined GMI/DPR precipitation retrieval. These level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, they are producing very creditable retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear very intimidating, and they certainly contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded, text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content. Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments GMI, DPR, and combined GMI/DPR; (2) surface precipitation retrievals for the partner constellation
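
    As a rough illustration of how a one-line-per-grid-box text product could be consumed, the Python sketch below parses a hypothetical record layout (latitude, longitude, surface rain rate, liquid fraction, convective fraction); the field order and names are assumptions for illustration, not the actual PPS file format:

        from dataclasses import dataclass

        @dataclass
        class GridBoxRetrieval:
            lat: float              # grid-box centre latitude (degrees)
            lon: float              # grid-box centre longitude (degrees)
            rain_rate: float        # surface precipitation rate (mm/h)
            frac_liquid: float      # fraction of liquid precipitation
            frac_convective: float  # fraction of convective precipitation

        def parse_line(line):
            """Parse one whitespace-separated grid-box record (assumed layout)."""
            lat, lon, rate, liq, conv = map(float, line.split())
            return GridBoxRetrieval(lat, lon, rate, liq, conv)

        sample = "12.75 -61.25 3.4 0.92 0.40"
        box = parse_line(sample)
        print(box.rain_rate, "mm/h at", box.lat, box.lon)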

  12. Structural control of ultra-fine CoPt nanodot arrays via electrodeposition process

    Energy Technology Data Exchange (ETDEWEB)

    Wodarz, Siggi [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan); Hasegawa, Takashi; Ishio, Shunji [Department of Materials Science, Akita University, Akita City 010-8502 (Japan); Homma, Takayuki, E-mail: t.homma@waseda.jp [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan)

    2017-05-15

    CoPt nanodot arrays were fabricated by combining electrodeposition and electron beam lithography (EBL) for the use of bit-patterned media (BPM). To achieve precise control of deposition uniformity and coercivity of the CoPt nanodot arrays, their crystal structure and magnetic properties were controlled by controlling the diffusion state of metal ions from the initial deposition stage with the application of bath agitation. Following bath agitation, the composition gradient of the CoPt alloy with thickness was mitigated to have a near-ideal alloy composition of Co:Pt =80:20, which induces epitaxial-like growth from Ru substrate, thus resulting in the improvement of the crystal orientation of the hcp (002) structure from its initial deposition stages. Furthermore, the cross-sectional transmission electron microscope (TEM) analysis of the nanodots deposited with bath agitation showed CoPt growth along its c-axis oriented in the perpendicular direction, having uniform lattice fringes on the hcp (002) plane from the Ru underlayer interface, which is a significant factor to induce perpendicular magnetic anisotropy. Magnetic characterization of the CoPt nanodot arrays showed increase in the perpendicular coercivity and squareness of the hysteresis loops from 2.0 kOe and 0.64 (without agitation) to 4.0 kOe and 0.87 with bath agitation. Based on the detailed characterization of nanodot arrays, the precise crystal structure control of the nanodot arrays with ultra-high recording density by electrochemical process was successfully demonstrated. - Highlights: • Ultra-fine CoPt nanodot arrays were fabricated by electrodeposition. • Crystallinity of hcp (002) was improved with uniform composition formation. • Uniform formation of hcp lattices leads to an increase in the coercivity.

  13. MICROARRAY IMAGE GRIDDING USING GRID LINE REFINEMENT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-05-01

    An important stage in microarray image analysis is gridding. Microarray image gridding is done to locate sub-arrays in a microarray image and find the co-ordinates of spots within each sub-array. For accurate identification of spots, most of the proposed gridding methods require human intervention. In this paper, a fully automatic gridding method is used which enhances spot intensity in the preprocessing step with a histogram-based threshold method. The gridding step finds the co-ordinates of spots from the horizontal and vertical profiles of the image. To correct errors due to grid line placement, a grid line refinement technique is proposed. The algorithm is applied to different image databases and the results are compared on the basis of spot detection accuracy and time. An average spot detection accuracy of 95.06% demonstrates the proposed method's flexibility and accuracy in finding the spot co-ordinates for different database images.
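
    The profile-based gridding step can be sketched in a few lines of Python: threshold the image, sum intensities along rows and columns, and place grid lines at the low-intensity positions between spots. This is only a schematic of the general approach, not the refinement algorithm proposed in the paper:

        import numpy as np

        def grid_lines_from_profile(profile, n_cells):
            """Place n_cells+1 grid lines near the low-intensity gaps between cells."""
            length = profile.size
            approx = np.linspace(0, length, n_cells + 1).astype(int)
            lines = [0]
            for centre in approx[1:-1]:
                lo, hi = max(centre - 3, 0), min(centre + 4, length)
                lines.append(lo + int(np.argmin(profile[lo:hi])))
            lines.append(length - 1)
            return lines

        def grid_microarray(image, n_rows, n_cols):
            """Return row and column grid-line coordinates for a spot image."""
            binary = image > image.mean() + image.std()   # crude intensity enhancement
            row_profile = binary.sum(axis=1)              # horizontal profile
            col_profile = binary.sum(axis=0)              # vertical profile
            return (grid_lines_from_profile(row_profile, n_rows),
                    grid_lines_from_profile(col_profile, n_cols))

        # Synthetic 4x4 spot image for a quick check.
        img = np.zeros((80, 80))
        for r in range(4):
            for c in range(4):
                img[r*20+6:r*20+14, c*20+6:c*20+14] = 1.0
        rows, cols = grid_microarray(img, 4, 4)
        print(rows, cols)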

  14. Processing and display of medical three dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Darier, P.

    1985-01-01

    Imaging modalities such as X-ray Computerized Tomography (CT), Nuclear Medicine and Nuclear Magnetic Resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging. We are currently developing experimental software that allows the analysis, processing and display of 3-D arrays of numerical data organized in a hierarchical data structure using the OCTREE (octal-tree) encoding technique, based on a recursive subdivision of the data volume. The OCTREE encoding structure is an extension of the two-dimensional tree structure, the quadtree, developed for image processing applications. Before any operations, the 3-D array of data is OCTREE encoded; thereafter all processing is concerned with the encoded object. The elementary process for the elaboration of a synthetic image includes conditioning the volume (volume partition by numerical and spatial segmentation, choice of the view-point...) and two-dimensional display, either by spatial integration (radiography) or by shaded surface representation. This paper introduces these different concepts and specifies the advantages of OCTREE encoding techniques in realizing these operations. Furthermore, the application of the OCTREE encoding scheme to the display of 3-D medical volumes generated from multiple CT scans is presented

  15. NeuroSeek dual-color image processing infrared focal plane array

    Science.gov (United States)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including dual-color affordable focal planes, on-focal-plane-array biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave IR/longwave IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout-and-processing very-large-scale-integration chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by the application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  16. Effect of Source, Surfactant, and Deposition Process on Electronic Properties of Nanotube Arrays

    Directory of Open Access Journals (Sweden)

    Dheeraj Jain

    2011-01-01

    The electronic properties of arrays of carbon nanotubes from several different sources, differing in the manufacturing process used and in average properties such as length, diameter, and chirality, are studied. We used several common surfactants to disperse each of these nanotubes and then deposited them on Si wafers from their aqueous solutions using dielectrophoresis. Transport measurements were performed to compare and determine the effect of different surfactants, deposition processes, and synthesis processes on nanotubes synthesized using CVD, CoMoCAT, laser ablation, and HiPCO.

  17. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
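
    A minimal version of the idea, flagging channels whose records lie mostly outside the dominant signal subspace of the array-wide time series, might look like the following Python sketch (a simplified illustration, not the authors' production QC code):

        import numpy as np

        def channel_qc(data, n_components=1, max_residual=0.6):
            """Flag array channels that do not fit the dominant signal subspace.

            data : (n_channels, n_samples) array of waveforms.
            A channel is flagged when the fraction of its energy left after
            projection onto the leading n_components temporal components
            exceeds max_residual.
            """
            centered = data - data.mean(axis=1, keepdims=True)
            # SVD of the channel-by-time matrix; rows of vt span the temporal subspace.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            basis = vt[:n_components]
            projection = centered @ basis.T @ basis       # part explained by the subspace
            residual = np.linalg.norm(centered - projection, axis=1)
            residual /= np.linalg.norm(centered, axis=1) + 1e-12
            return np.where(residual > max_residual)[0]   # indices of suspect channels

        # Synthetic array: 10 channels share a common signal, channel 7 is malfunctioning.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 2000)
        signal = np.sin(2 * np.pi * 5 * t)
        data = np.vstack([signal + 0.1 * rng.standard_normal(t.size) for _ in range(10)])
        data[7] = rng.standard_normal(t.size)             # bad channel: pure noise
        print(channel_qc(data))                            # -> [7]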

  18. Lightweight solar array blanket tooling, laser welding and cover process technology

    Science.gov (United States)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  19. Network control stations in the smart grid. Process and information nodes for business intelligence applications; Netzleitstellen im Smart Grid. Prozess- und Informationsknoten fuer Business Intelligence Applikationen

    Energy Technology Data Exchange (ETDEWEB)

    Kautsch, Stephan; Kroll, Meinhard [ABB AG, Mannheim (Germany); Schoellhorn, Daniel [EnBW Regional AG, Stuttgart (Germany)

    2012-07-01

    The degree of automation in the distribution grid will increase, and with it the possibility of more extensive monitoring. Smart metering in the local network station replaces the drag pointers (maximum-demand indicators). This allows load flows to be determined precisely and valuable data to be collected about how resources, for example the transformers in the secondary substations, are actually utilized. The amount of information available is increasing steadily, not least because of the increasing roll-out of smart meters, which also provide valuable information for the operation of the distribution networks. This "flood" of data must be processed, filtered and analyzed by the system and prepared for the user so that it can be interpreted, but it can also be used to support and optimize many business processes. Although the tasks mentioned are usually not yet allocated within the grid operator's organization, it is natural to place them close to the network control centers, where they present new challenges but also opportunities. (orig.)

  20. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    Science.gov (United States)

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of must of the Savatiano variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained for standard ethanol solutions, either pure or contaminated with controlled concentrations of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could potentially be applied during wine production as an auxiliary qualitative control instrument. PMID:25184490
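
    To indicate how such array responses are typically reduced before classification, the short numpy sketch below projects simulated eight-capacitor response vectors onto their first two principal components; the data are synthetic stand-ins, not the published measurements, and the "spoiled" offset on two sensors is an assumption for illustration:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic responses of an 8-sensor array (rows = samples, columns = sensors).
        normal = rng.normal(loc=1.0, scale=0.05, size=(20, 8))
        spoiled = rng.normal(loc=1.0, scale=0.05, size=(20, 8))
        spoiled[:, 2:4] += 0.5     # assume two sensors respond strongly to acetic acid

        X = np.vstack([normal, spoiled])
        X_centered = X - X.mean(axis=0)

        # PCA via SVD: the rows of vt are the principal axes.
        _, _, vt = np.linalg.svd(X_centered, full_matrices=False)
        scores = X_centered @ vt[:2].T        # project onto the first two components

        # The first principal component should separate the two groups.
        print("normal  PC1 mean: %+.2f" % scores[:20, 0].mean())
        print("spoiled PC1 mean: %+.2f" % scores[20:, 0].mean())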

  1. An array of virtual Frisch-grid CdZnTe detectors and a front-end application-specific integrated circuit for large-area position-sensitive gamma-ray cameras

    Energy Technology Data Exchange (ETDEWEB)

    Bolotnikov, A. E., E-mail: bolotnik@bnl.gov; Ackley, K.; Camarda, G. S.; Cherches, C.; Cui, Y.; De Geronimo, G.; Fried, J.; Hossain, A.; Mahler, G.; Maritato, M.; Roy, U.; Salwen, C.; Vernon, E.; Yang, G.; James, R. B. [Brookhaven National Laboratory, Upton, New York 11793 (United States); Hodges, D. [University of Texas at El Paso, El Paso, Texas 79968 (United States); Lee, W. [Korea University, Seoul 136-855 (Korea, Republic of); Petryk, M. [SUNY Binghamton, Vestal, New York 13902 (United States)

    2015-07-15

    We developed a robust and low-cost array of virtual Frisch-grid CdZnTe detectors coupled to a front-end readout application-specific integrated circuit (ASIC) for spectroscopy and imaging of gamma rays. The array operates as a self-reliant detector module. It is comprised of 36 close-packed 6 × 6 × 15 mm³ detectors grouped into 3 × 3 sub-arrays of 2 × 2 detectors with common cathodes. The front-end analog ASIC accommodates up to 36 anode and 9 cathode inputs. Several detector modules can be integrated into a single- or multi-layer unit operating as a Compton or a coded-aperture camera. We present the results from testing two fully assembled modules and readout electronics. Further enhancement of the arrays' performance and reduction of their cost are possible by using position-sensitive virtual Frisch-grid detectors, which allow for accurate corrections of the response non-uniformities caused by crystal defects in the material.

  2. A new smart micro grid process control strategy : the human leading the thermal comfort control

    NARCIS (Netherlands)

    Zeiler, W.; Vissers, D.R.; Boxem, G.

    2013-01-01

    There is a clear need for more sustainable solutions to provide energy within the built environment. A Smart Electrical Energy supply Grid is being developed by the major electricity distribution companies to cope with fluctuations in energy generation from the different renewable energy sources. To

  3. Methods for the Optimal Design of Grid-Connected PV Inverters

    DEFF Research Database (Denmark)

    Koutroulis, Eftichios; Blaabjerg, Frede

    2011-01-01

    The DC/AC inverters are used in grid-connected PV energy production systems as the power processing interface between the PV energy source and the electric grid. The energy injected into the electric grid by the PV installation depends on the amount of power extracted from the PV power source and the efficient processing of this power by the DC/AC inverter. In this paper two new methods are presented for the optimal design of a PV inverter power section, output filter and MPPT control strategy. The influences of the electric grid regulations and standards as well as the PV array operational...

  4. Chimera Grid Tools

    Science.gov (United States)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  5. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    Energy Technology Data Exchange (ETDEWEB)

    Warburton, W.K., E-mail: bill@xia.com [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Harris, J.T. [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Friedrich, S. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100–2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays – currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I–V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.
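
    The operating-point selection step lends itself to a brief sketch: scan candidate bias offsets, record a noise figure at each, and keep the minimum. The Python function below is a generic illustration of that idea; the measurement callable, units and the toy noise model are placeholders, not XIA's actual scanning routine:

        def select_operating_point(offsets_mV, measure_noise):
            """Scan candidate bias offsets and return the one with minimum noise.

            offsets_mV    : iterable of candidate offset voltages (mV).
            measure_noise : callable returning a noise figure for a given offset;
                            here a stand-in for the real hardware measurement.
            """
            best_offset, best_noise, curve = None, float("inf"), []
            for v in offsets_mV:
                noise = measure_noise(v)
                curve.append((v, noise))
                if noise < best_noise:
                    best_offset, best_noise = v, noise
            return best_offset, best_noise, curve

        # Toy noise model with a minimum near 0.35 mV (purely illustrative).
        toy_noise = lambda v: 2.7 + 40.0 * (v - 0.35) ** 2
        offset, noise, _ = select_operating_point([i / 100 for i in range(101)], toy_noise)
        print(f"optimum offset ~{offset:.2f} mV, noise ~{noise:.2f} eV")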

  6. Power grids

    International Nuclear Information System (INIS)

    Viterbo, J.

    2012-01-01

    The implementation of renewable energies represents new challenges for electrical systems. The objective: making power grids smarter so they can handle intermittent production. The advent of smart grids will allow flexible operations, like distributing energy in a multidirectional manner instead of just one way, and will make electrical systems capable of integrating actions by different users, consumers and producers, in order to maintain efficient, sustainable, economical and secure power supplies. Practically speaking, they associate sensors, instrumentation and controls with information processing and communication systems in order to create massively automated networks. Smart grids require huge investments: for example, more than 7 billion dollars were invested in China and in the USA in 2010, and France is ranked 9th worldwide with 265 million dollars invested. It is expected that smart grids will promote the development of new business models and a change in the value chain for energy. Decentralized production combined with the probable introduction of more or less flexible rates for sales or purchases and of new supplier-customer relationships will open the way to the creation of new businesses. (A.C.)

  7. Phased arrays techniques and split spectrum processing for inspection of thick titanium casting components

    International Nuclear Information System (INIS)

    Banchet, J.; Chahbaz, A.; Sicard, R.; Zellouf, D.E.

    2003-01-01

    In aircraft structures, titanium parts and engine members are critical structural components, and their inspection is crucial. However, these structures are very difficult to inspect ultrasonically because of their large grain structure, which increases noise drastically. In this work, phased array inspection setups were developed to detect small defects such as simulated inclusions and porosity contained in thick titanium casting blocks, which are frequently used in the aerospace industry. A Cut Spectrum Processing (CSP)-based algorithm was then implemented on the acquired data, employing a set of parallel bandpass filters with different center frequencies. This process led to a substantial improvement of the signal-to-noise ratio and thus of detectability

  8. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single photon detection have been restricted by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified by using the National Institute of Standards and Technology statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbits/s. (paper)
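
    One plausible reading of the comparison scheme is a von-Neumann-style pairing of a pixel's outcomes on consecutive pulses: emit 1 for (click, no-click), 0 for (no-click, click), and discard equal pairs, which removes bias without further post-processing. The Python sketch below illustrates that reading; it is not the exact mapping used in the paper:

        import random

        def extract_bits(clicks):
            """Generate unbiased bits from one pixel's click record on consecutive pulses.

            clicks : sequence of 0/1 detection outcomes, one per optical pulse.
            Consecutive pulses are compared pairwise; equal pairs are discarded.
            """
            bits = []
            for a, b in zip(clicks[0::2], clicks[1::2]):
                if a != b:
                    bits.append(1 if a > b else 0)   # (click, no-click) -> 1
            return bits

        # Simulated pixel with a biased click probability per pulse.
        random.seed(7)
        record = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
        bits = extract_bits(record)
        print(len(bits), "bits, mean =", round(sum(bits) / len(bits), 3))   # mean near 0.5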

  9. Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes

    Science.gov (United States)

    Huang, Shaoming

    2003-06-01

    An effective way to fabricate large-area, three-dimensional (3D) aligned CNT patterns based on the pyrolysis of iron(II) phthalocyanine (FePc) in two-step processes is reported. The controllable generation of different lengths and the selective growth of the aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the basis for generating such 3D aligned CNT architectures. By controlling the experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated on a large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied in developing novel nanotube-based devices.

  10. Fabrication of Aligned Polyaniline Nanofiber Array via a Facile Wet Chemical Process.

    Science.gov (United States)

    Sun, Qunhui; Bi, Wu; Fuller, Thomas F; Ding, Yong; Deng, Yulin

    2009-06-17

    In this work, we demonstrate for the first time a template-free approach to synthesize an aligned polyaniline nanofiber (PN) array on a passivated gold (Au) substrate via a facile wet chemical process. The Au surface was first modified using 4-aminothiophenol (4-ATP) to afford the surface functionality, followed by oxidative polymerization of aniline (AN) monomer in an aqueous medium using ammonium persulfate as the oxidant and tartaric acid as the doping agent. The results show that a vertically aligned PANI nanofiber array with individual fiber diameters of ca. 100 nm, heights of ca. 600 nm and a packing density of ca. 40 pieces·µm⁻² was synthesized. Copyright © 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Arrays of surface-normal electroabsorption modulators for the generation and signal processing of microwave photonics signals

    NARCIS (Netherlands)

    Noharet, Bertrand; Wang, Qin; Platt, Duncan; Junique, Stéphane; Marpaung, D.A.I.; Roeloffzen, C.G.H.

    2011-01-01

    The development of an array of 16 surface-normal electroabsorption modulators operating at 1550 nm is presented. The modulator array is dedicated to the generation and processing of microwave photonics signals, targeting a modulation bandwidth in excess of 5 GHz. The hybrid integration of the

  12. Distributive On-line Processing, Visualization and Analysis System for Gridded Remote Sensing Data

    Science.gov (United States)

    Leptoukh, G.; Berrick, S.; Liu, Z.; Pham, L.; Rui, H.; Shen, S.; Teng, W.; Zhu, T.

    2004-01-01

    The ability to use data stored in the current Earth Observing System (EOS) archives for studying regional or global phenomena is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding and applying it to data reduction is a time-consuming task that must be undertaken before the core investigation can begin. This is an especially difficult challenge when science objectives require users to deal with large multi-sensor data sets that are usually of different formats, structures, and resolutions, for example, when preparing data for input into modeling systems. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has taken a major step towards meeting this challenge by developing an infrastructure with a Web interface that allows users to perform interactive analysis online without downloading any data: the GES-DISC Interactive Online Visualization and Analysis Infrastructure, or "Giovanni." Giovanni provides interactive, online analysis tools for data users to facilitate their research. Several instances of this interface have been created to serve TRMM users, aerosol scientists, and ocean color and agriculture applications users. The first generation of these tools supports gridded data only. The user selects geophysical parameters, an area of interest and a time period, and the system generates an output on screen in a matter of seconds. The currently available output options are: area plot, averaged or accumulated over any available data period for any rectangular area; time plot, a time series averaged over any rectangular area; Hovmoller plots, image views of any longitude-time and latitude-time cross sections; ASCII output for all plot types; and image animation for the area plot. In the future, we will add correlation plots, GIS-compatible outputs, etc. This allows users to focus on data content (i.e. science parameters) and eliminates the need for expensive learning
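
    The area-averaged time-series option, for instance, reduces to selecting a latitude-longitude box from a gridded parameter and averaging over it at each time step, as in the simple numpy sketch below (a schematic of the operation only, not the GES DISC implementation, and without the area weighting a production system would apply):

        import numpy as np

        def area_time_series(data, lats, lons, lat_bounds, lon_bounds):
            """Average a gridded parameter over a rectangular box for each time step.

            data       : array of shape (time, lat, lon)
            lats, lons : 1-D coordinate arrays matching the grid
            lat_bounds, lon_bounds : (min, max) tuples defining the box
            """
            lat_mask = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
            lon_mask = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
            box = data[:, lat_mask, :][:, :, lon_mask]
            return np.nanmean(box, axis=(1, 2))       # one value per time step

        # Synthetic daily 1-degree grid for a quick check.
        lats, lons = np.arange(-89.5, 90), np.arange(-179.5, 180)
        data = np.random.default_rng(2).random((30, lats.size, lons.size))
        series = area_time_series(data, lats, lons, (0, 20), (30, 60))
        print(series.shape)                            # -> (30,)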

  13. Law guide for photovoltaic installations: Array installation; Connecting the grid; Financing; The new legal regime implemented in 2011; Is green taxing still so green?; Which judge will bring the light?: Reactions of actors

    International Nuclear Information System (INIS)

    Ferracci, Vanina; Vandervorst, Alain; Tixier, Jean-Luc; Barthelemy, Christophe; Cloche-Dubois, Celine; Tenailleau, Francois; Rubio, Aurore-Emmanuelle; Pechamat, Olivier; Gandet, Stephanie; Deharbe, David; Rousset, Alain; Boedec, Morgan; Joffre, Andre; Blosseville, Thomas; Meunier, Stephane; Maincent, Guillaume

    2011-01-01

    The authors discuss legal issues and aspects regarding photovoltaic installations in France: the array installation (constraints related to urban planning: rules, authorizations and competencies when setting up on the ground or on buildings, urban taxes, estate issues), the connection to the grid, the financing (electricity prices, partnership contracts), the new legal regime implemented in 2011, the question of whether green taxing is still sufficiently attractive, the dispute about the mandatory purchase mechanism, and the attitude of the different actors (notably local communities and industries) in the face of the decrease in purchase prices

  14. Understanding and Mastering Dynamics in Computing Grids Processing Moldable Tasks with User-Level Overlay

    CERN Document Server

    Moscicki, Jakub Tomasz

    Scientific communities are using a growing number of distributed systems, from local batch systems, community-specific services and supercomputers to general-purpose, global grid infrastructures. Increasing the research capabilities for science is the raison d'être of such infrastructures, which provide access to diversified computational, storage and data resources at large scales. Grids are rather chaotic, highly heterogeneous, decentralized systems where unpredictable workloads, component failures and variability of execution environments are commonplace. Understanding and mastering the heterogeneity and dynamics of such distributed systems is prohibitive for end users if they are not supported by appropriate methods and tools. The time cost to learn and use the interfaces and idiosyncrasies of different distributed environments is another challenge. Obtaining more reliable application execution times and boosting parallel speedup are important to increase the research capabilities of scientific communities. L...

  15. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    International Nuclear Information System (INIS)

    Tu, K T; Chung, C K

    2016-01-01

    An integrated technology of CO 2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO 2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO 2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO 2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold. (paper)

  16. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    Science.gov (United States)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  17. Research and Application of Auxiliary Optimization Technology of Power Grid Accident Processing Based on the Mode of Regulation and Control Integration

    Directory of Open Access Journals (Sweden)

    Cui Houzhen

    2015-01-01

    Accident processing is the most important part of daily monitoring and scheduling. Improving its level of intelligence is of great significance for improving the efficiency of accident processing in scheduling, shortening the time of accident processing and preventing further deterioration of accidents. According to the features of accident processing in scheduling, this paper puts forward an integrated framework for online accident-processing decision support in large power grids, and carries out a study from five aspects, namely an integrated information support platform, advance risk perception, online fault diagnosis, after-the-fact decision support and visual display, so as to track the operating state of the power grid in real time, eliminate potential safety hazards of the power grid and upgrade the power grid from "manual analysis" scheduling to "intelligent analysis" scheduling.

  18. Distributive Online Processing, Visualization and Analysis System for Gridded Remote Sensing Data

    Science.gov (United States)

    Leptoukh, G.; Liu, Z.; Pham, L.; Rui, H.; Shen, S.; Teng, W.; Zhu, T.

    2004-12-01

    The ability to use data stored in the current Earth Observing System (EOS) archives for studying regional or global phenomena is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding and applying it to data reduction is a time-consuming task that must be undertaken before the core investigation can begin. This is an especially difficult challenge when science objectives require users to deal with large multi-sensor data sets that are usually of different formats, structures, and resolutions. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has taken a major step towards meeting this challenge by developing an infrastructure with a Web interface that allows users to perform interactive analysis online without downloading any data: the GES-DISC Interactive Online Visualization and Analysis Infrastructure, or "Giovanni." Giovanni provides interactive, online analysis tools for data users to facilitate their research. Several instances of this interface have been created to serve TRMM users, aerosol scientists, and ocean color and agriculture applications users. The first generation of these tools supports gridded data only. The user selects geophysical parameters, an area of interest and a time period, and the system generates an output on screen in a matter of seconds. The currently available output options are: area plot, averaged or accumulated over any available data period for any rectangular area; time plot, a time series averaged over any rectangular area; Hovmoller plots, image views of any longitude-time and latitude-time cross sections; ASCII output for all plot types; and image animation for the area plot. In the future, correlation plots, GIS-compatible outputs, etc., will be added. This allows users to focus on data content (i.e. science parameters) and eliminates the need for expensive learning, development and processing tasks that are redundantly incurred by an archive's user

  19. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results of this posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, which highlights the value of these interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement to the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation via satellite more difficult, there have been examples of local monitoring networks and telemetry being destroyed early in an eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  20. Smart grid

    International Nuclear Information System (INIS)

    Choi, Dong Bae

    2001-11-01

    This book describes the smart grid from the basics to recent trends. It is divided into ten chapters, which deal with: the smart grid as a green revolution in energy, with an introduction, its history, the relevant fields, applications and the technology needed for the smart grid; smart grid trends abroad, such as model smart grid businesses overseas and smart grid policy in the U.S.A.; domestic smart grid trends, including international standards for the smart grid and the national strategy and road map; the smart power grid as infrastructure for smart business, with EMS development, SAS, SCADA, DAS and PQMS; the smart grid for the smart consumer; smart renewables, such as the Desertec project; convergence with IT through networks and PLC; electric car applications; smart electricity services for real-time pricing; and the arrangement of the smart grid.

  1. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2011-09-01

    Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process, since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them can cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects in the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection; however, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve the generalization performance of the traditional SVDD by introducing a quasiconformal transformation into a predefined kernel. Experimental results, obtained on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves the generalization performance of SVDD. The improvement has been shown to be over 30%. In addition, the results show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.
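
    The QK-SVDD itself is not reproduced here, but the one-class training setup it builds on can be sketched with scikit-learn's OneClassSVM (a close relative of SVDD with an RBF kernel): train only on defect-free patch features, then label test patches whose features fall outside the learned description as defects. The features and parameters below are synthetic placeholders, not the paper's data or method:

        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(3)

        # Feature vectors extracted from defect-free LCD image patches (synthetic here).
        normal_train = rng.normal(0.0, 1.0, size=(500, 16))

        # Test set: defect-free patches plus a few patches with shifted statistics.
        normal_test = rng.normal(0.0, 1.0, size=(50, 16))
        defects = rng.normal(3.0, 1.0, size=(10, 16))

        # One-class SVM with an RBF kernel, trained only on normal data.
        detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
        detector.fit(normal_train)

        pred_normal = detector.predict(normal_test)    # +1 = normal, -1 = defect
        pred_defect = detector.predict(defects)
        print("false alarms:", int((pred_normal == -1).sum()), "of", len(pred_normal))
        print("detected defects:", int((pred_defect == -1).sum()), "of", len(pred_defect))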

  2. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    Science.gov (United States)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  3. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    International Nuclear Information System (INIS)

    Koohbor, M.; Soltanian, S.; Najafi, M.; Servati, P.

    2012-01-01

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co 1−x Zn x (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ∼35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays were investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of NWs can be significantly improved by appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on the magnetic properties of the NW arrays. The changes in magnetic property of NWs are rooted in a competition between shape anisotropy and

  4. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    Energy Technology Data Exchange (ETDEWEB)

    Koohbor, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Soltanian, S., E-mail: s.soltanian@gmail.com [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada); Najafi, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Physics, Hamadan University of Technology, Hamadan (Iran, Islamic Republic of); Servati, P. [Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada)

    2012-01-05

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co 1−x Zn x (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ∼35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays were investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of NWs can be significantly improved by appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on

  5. High-throughput fabrication of micrometer-sized compound parabolic mirror arrays by using parallel laser direct-write processing

    International Nuclear Information System (INIS)

    Yan, Wensheng; Gu, Min; Cumming, Benjamin P

    2015-01-01

    Micrometer-sized parabolic mirror arrays have significant applications in both light-emitting diodes and solar cells. However, low fabrication throughput has been identified as a major obstacle to large-scale application of the mirror arrays, due to the serial nature of the conventional method. Here, the mirror arrays are fabricated using parallel laser direct-write processing, which addresses this barrier. In addition, it is demonstrated that the parallel writing is able to fabricate complex arrays as well as simple arrays and thus offers wider applications. Optical measurements show that each single mirror confines the full-width at half-maximum value to as small as 17.8 μm at a height of 150 μm whilst providing a transmittance of up to 68.3% at a wavelength of 633 nm, in good agreement with the calculated values. (paper)

  6. Permafrost sub-grid heterogeneity of soil properties key for 3-D soil processes and future climate projections

    Directory of Open Access Journals (Sweden)

    Christian Beer

    2016-08-01

    Full Text Available There are massive carbon stocks stored in permafrost-affected soils due to the 3-D soil movement process called cryoturbation. For a reliable projection of the past, recent and future Arctic carbon balance, and hence climate, a reliable concept for representing cryoturbation in a land surface model (LSM) is required. The basis of the underlying transport processes is pedon-scale heterogeneity of soil hydrological and thermal properties as well as of insulating layers, such as snow and vegetation. Today we still lack a concept of how to reliably represent pedon-scale properties and processes in an LSM. One possibility could be a statistical approach. This perspective paper demonstrates the importance of sub-grid heterogeneity in permafrost soils as a prerequisite for implementing any lateral transport parametrization. Representing such heterogeneity at the sub-pixel size of an LSM is the next logical step of model advancement. As a result of a theoretical experiment, heterogeneity of thermal and hydrological soil properties alone leads to a remarkable initial sub-grid range of subsoil temperature of 2 °C, and of active-layer thickness of 150 cm, in East Siberia. These results show the way forward in representing combined lateral and vertical transport of water and soil in LSMs.

  7. Near-Body Grid Adaption for Overset Grids

    Science.gov (United States)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
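
    The refinement step described above can be illustrated with a small sketch: refined points are inserted along one curvilinear coordinate line using parametric cubic interpolation. This is not the OVERFLOW implementation; the chord-length parameterization, the flagged index range, and the uniform refinement factor are assumptions made purely for illustration.

    ```python
    # Sketch: refine a 1D curvilinear grid line with parametric cubic interpolation.
    # Illustrative only -- not the OVERFLOW implementation; parameterization by
    # cumulative chord length and a uniform 2x refinement are assumptions here.
    import numpy as np
    from scipy.interpolate import CubicSpline

    def refine_grid_line(points, i0, i1, factor=2):
        """Insert refined points between indices i0..i1 of a curvilinear line.

        points : (N, 3) array of grid-point coordinates along one coordinate line
        i0, i1 : index range flagged for refinement (e.g. by a sensor function)
        factor : number of refined intervals per original interval
        """
        points = np.asarray(points, dtype=float)
        # Parameterize by cumulative chord length (approximate arc length).
        seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
        s = np.concatenate(([0.0], np.cumsum(seg)))
        spline = CubicSpline(s, points, axis=0)

        # Original parameters outside the refined window, finer spacing inside it.
        s_fine = np.linspace(s[i0], s[i1], factor * (i1 - i0) + 1)
        s_new = np.concatenate((s[:i0], s_fine, s[i1 + 1:]))
        return spline(s_new)

    if __name__ == "__main__":
        # Quarter-circle coordinate line, refined in its middle third.
        theta = np.linspace(0.0, np.pi / 2, 13)
        line = np.column_stack((np.cos(theta), np.sin(theta), np.zeros_like(theta)))
        refined = refine_grid_line(line, 4, 8, factor=2)
        print(line.shape, "->", refined.shape)   # (13, 3) -> (17, 3)
    ```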

  8. Fabricating process of hollow out-of-plane Ni microneedle arrays and properties of the integrated microfluidic device

    Science.gov (United States)

    Zhu, Jun; Cao, Ying; Wang, Hong; Li, Yigui; Chen, Xiang; Chen, Di

    2013-07-01

    Although microfluidic devices that integrate microfluidic chips with hollow out-of-plane microneedle arrays have many advantages in transdermal drug delivery applications, difficulties exist in their fabrication due to the special three-dimensional structures of hollow out-of-plane microneedles. A new, cost-effective process for the fabrication of a hollow out-of-plane Ni microneedle array is presented. The integration of PDMS microchips with the Ni hollow microneedle array and the properties of microfluidic devices are also presented. The integrated microfluidic devices provide a new approach for transdermal drug delivery.

  9. Safe Grid

    Science.gov (United States)

    Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)

    2003-01-01

    The biggest users of GRID technologies came from the science and technology communities. These consist of government, industry and academia (national and international). The NASA GRID is moving into a higher technology readiness level (TRL) today; and as a joint effort among these leaders within government, academia, and industry, the NASA GRID plans to extend availability to enable scientists and engineers across these geographical boundaries to collaborate to solve important problems facing the world in the 21st century. In order to enable NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, GRID penetration across different firewalls is a concern for center security personnel. This is the reason why some IPG resources have been separated from the NASA center network. Also, because of center network security and ITAR concerns, the NASA IPG resource owner may not have full control over who can access remotely from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different GRID applications, the SAFE "micro security domain" provides a large number of professionally managed "micro firewalls" that can allow NASA centers to accept remote IPG access without the worry of damaging other center resources. The SAFE policy-driven, capability-based federated security mechanism can enable joint organizational and resource owner approved remote

  10. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    Science.gov (United States)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for the endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients, and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and human vision system limitations, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 end point in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.
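
    As a rough illustration of what such a pipeline computes, the sketch below is a software reference model of two generic stages (noise reduction and gamma correction) of the kind typically realized as fixed-function FPGA blocks. The 3x3 box filter, the 8-bit look-up table, and gamma = 2.2 are assumptions for illustration, not the camera's actual design.

    ```python
    # Software reference model of two image-pipeline stages (noise reduction and
    # gamma correction) of the kind an FPGA pipeline might implement.  The 3x3 box
    # filter and gamma = 2.2 are illustrative assumptions, not the actual design.
    import numpy as np

    def box_filter_3x3(img):
        """Simple 3x3 mean filter (edge rows/cols handled by clamping)."""
        padded = np.pad(img.astype(np.uint32), 1, mode="edge")
        acc = np.zeros_like(img, dtype=np.uint32)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += padded[1 + dy: 1 + dy + img.shape[0],
                              1 + dx: 1 + dx + img.shape[1]]
        return (acc // 9).astype(np.uint8)

    def gamma_lut(gamma=2.2):
        """8-bit look-up table, the usual way gamma is realized in hardware."""
        x = np.arange(256) / 255.0
        return np.round(255.0 * x ** (1.0 / gamma)).astype(np.uint8)

    def pipeline(raw):
        smoothed = box_filter_3x3(raw)
        return gamma_lut()[smoothed]        # LUT indexing == per-pixel correction

    if __name__ == "__main__":
        frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
        out = pipeline(frame)
        print(out.shape, out.dtype)
    ```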

  11. Compressive sensing-based electrostatic sensor array signal processing and exhausted abnormal debris detecting

    Science.gov (United States)

    Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin

    2018-05-01

    When faults happen at gas path components of gas turbines, some sparsely distributed and charged debris will be generated and released into the exhaust gas. This debris is called abnormal debris. Electrostatic sensors can detect the debris online and thereby indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a larger piece of debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of the fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because the signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, measuring debris charge accurately using the electrostatic detecting method remains a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensors' circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described as a sparse vector. This vector is further reconstructed by constraining its l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.
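
    A minimal sketch of the sparse-recovery step is given below: a sparse charge vector is reconstructed from a small number of measurements by l1-regularized least squares, here solved with plain ISTA. The random measurement matrix, the regularization weight, and the problem sizes are stand-in assumptions; the paper builds its matrix from the discretized HSESCA model and adds SVD pre-processing and weighted-centroid calibration, which are not reproduced here.

    ```python
    # Generic sketch of the sparse-recovery step: reconstruct a sparse charge
    # vector q from few electrostatic measurements y = A @ q by l1-regularized
    # least squares (ISTA).  The random A below stands in for the discretized
    # HSESCA measurement model described in the abstract.
    import numpy as np

    def ista(A, y, lam=0.05, n_iter=500):
        """Iterative shrinkage-thresholding for min ||A q - y||^2 + lam*||q||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        q = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ q - y)
            z = q - grad / L
            q = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return q

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_cells, n_sensors = 120, 12          # meshed positions vs. array channels
        A = rng.standard_normal((n_sensors, n_cells)) / np.sqrt(n_sensors)
        q_true = np.zeros(n_cells)
        q_true[rng.choice(n_cells, 3, replace=False)] = [2.0, -1.5, 1.0]  # sparse debris charges
        y = A @ q_true
        q_hat = ista(A, y, lam=0.01, n_iter=2000)
        print("support recovered:", np.argsort(-np.abs(q_hat))[:3])
    ```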

  12. Remote online process measurements by a fiber optic diode array spectrometer

    International Nuclear Information System (INIS)

    Van Hare, D.R.; Prather, W.S.; O'Rourke, P.E.

    1986-01-01

    The development of remote online monitors for radioactive process streams is an active research area at the Savannah River Laboratory (SRL). A remote offline spectrophotometric measurement system has been developed and used at the Savannah River Plant (SRP) for the past year to determine the plutonium concentration of process solution samples. The system consists of a commercial diode array spectrophotometer modified with fiber optic cables that allow the instrument to be located remotely from the measurement cell. Recently, a fiber optic multiplexer has been developed for this instrument, which allows online monitoring of five locations sequentially. The multiplexer uses a motorized micrometer to drive one of five sets of optical fibers into the optical path of the instrument. A sixth optical fiber is used as an external reference and eliminates the need to flush out process lines to re-reference the spectrophotometer. The fiber optic multiplexer has been installed in a process prototype facility to monitor uranium loading and breakthrough of ion exchange columns. The design of the fiber optic multiplexer is discussed and data from the prototype facility are presented to demonstrate the capabilities of the measurement system

  13. Challenges facing production grids

    Energy Technology Data Exchange (ETDEWEB)

    Pordes, Ruth; /Fermilab

    2007-06-01

    Today's global communities of users expect quality of service from distributed Grid systems equivalent to that of their local data centers. This must be coupled with ubiquitous access to the ensemble of processing and storage resources across multiple Grid infrastructures. We are still facing significant challenges in meeting these expectations, especially in the underlying security, a sustainable and successful economic model, and smoothing the boundaries between administrative and technical domains. Using the Open Science Grid as an example, I examine the status and challenges of Grids operating in production today.

  14. Continuous catchment-scale monitoring of geomorphic processes with a 2-D seismological array

    Science.gov (United States)

    Burtin, A.; Hovius, N.; Milodowski, D.; Chen, Y.-G.; Wu, Y.-M.; Lin, C.-W.; Chen, H.

    2012-04-01

    The monitoring of geomorphic processes during extreme climatic events is of primary interest for estimating their impact on landscape dynamics. However, available techniques to survey surface activity do not provide adequate time and/or space resolution. Furthermore, these methods can hardly investigate the dynamics of the events, since their detection is made a posteriori. To increase our knowledge of landscape evolution and the influence of extreme climatic events on catchment dynamics, we need to develop new tools and procedures. Many past works have shown that seismic signals are relevant for detecting and locating surface processes (landslides, debris flows). During the 2010 typhoon season, we deployed a network of 12 seismometers dedicated to monitoring the surface processes of the Chenyoulan catchment in Taiwan. We test the ability of a two-dimensional array with small inter-station distances (~ 11 km) to map the geomorphic activity continuously and at catchment scale. The spectral analysis of continuous records shows high-frequency (> 1 Hz) seismic energy that is coherent with the occurrence of hillslope and river processes. Using a basic detection algorithm and a location approach based on the analysis of seismic amplitudes, we manage to locate the catchment activity. We mainly observe short-duration events (> 300 occurrences) associated with debris falls and bank collapses during daily convective storms, where 69% of the occurrences are coherent with the time distribution of precipitation. We also identify a couple of debris flows during a large tropical storm. In contrast, the FORMOSAT imagery does not detect any activity, which partly reflects the lack of extreme climatic conditions during the experiment. However, high-resolution pictures confirm the existence of links between most geomorphic events and existing structures (landslide scars, gullies...). We thus conclude that the activity is dominated by reactivation processes. It
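
    The record mentions a "basic detection algorithm" operating on seismic amplitudes; one plausible example of such a detector is a classical STA/LTA trigger, sketched below. The window lengths, trigger threshold, and synthetic test signal are assumptions for illustration and are not taken from the study.

    ```python
    # A classical STA/LTA trigger -- shown as one plausible example of the kind of
    # "basic detection algorithm" mentioned above, not necessarily the one used.
    import numpy as np

    def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0, on=3.0):
        """Return sample indices where the STA/LTA ratio first exceeds `on`."""
        env = trace.astype(float) ** 2                     # signal energy
        def moving_avg(x, n):
            c = np.cumsum(np.concatenate(([0.0], x)))
            return (c[n:] - c[:-n]) / n
        nsta, nlta = int(sta_win * fs), int(lta_win * fs)
        sta = moving_avg(env, nsta)[nlta - nsta:]          # align the two averages
        lta = moving_avg(env, nlta)
        ratio = sta / np.maximum(lta, 1e-12)
        triggers = np.flatnonzero((ratio[1:] >= on) & (ratio[:-1] < on)) + nlta
        return triggers

    if __name__ == "__main__":
        fs = 100.0
        t = np.arange(0, 120, 1 / fs)
        noise = np.random.default_rng(1).normal(0, 1, t.size)
        burst = np.where((t > 60) & (t < 63), 8 * np.sin(2 * np.pi * 5 * t), 0.0)
        print(sta_lta(noise + burst, fs) / fs)             # trigger times in seconds
    ```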

  15. X-ray imager using solution processed organic transistor arrays and bulk heterojunction photodiodes on thin, flexible plastic substrate

    NARCIS (Netherlands)

    Gelinck, G.H.; Kumar, A.; Moet, D.; Steen, J.L. van der; Shafique, U.; Malinowski, P.E.; Myny, K.; Rand, B.P.; Simon, M.; Rütten, W.; Douglas, A.; Jorritsma, J.; Heremans, P.L.; Andriessen, H.A.J.M.

    2013-01-01

    We describe the fabrication and characterization of a large-area active-matrix X-ray/photodetector array of high quality using organic photodiodes and organic transistors. All layers, with the exception of the electrodes, are solution processed. Because it is processed on a very thin plastic substrate

  16. Optimizing solar-cell grid geometry

    Science.gov (United States)

    Crossley, A. P.

    1969-01-01

    Trade-off analysis and mathematical expressions calculate optimum grid geometry in terms of various cell parameters. Determination of the grid geometry provides proper balance between grid resistance and cell output to optimize the energy conversion process.
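
    A simplified, textbook-style version of this trade-off is sketched below: the fractional power loss is modeled as grid shading plus emitter resistive loss and minimized over the finger spacing. The loss expressions and parameter values are generic assumptions, not the expressions derived in the cited work.

    ```python
    # Simplified textbook-style trade-off between grid shading and emitter
    # resistive loss, minimized over finger spacing.  The loss expressions and the
    # parameter values are illustrative assumptions, not those of the cited work.
    import numpy as np

    def fractional_loss(spacing_cm, finger_width_cm=0.01,
                        sheet_ohm_sq=100.0, jmp_a_cm2=0.035, vmp_v=0.55):
        shading = finger_width_cm / spacing_cm
        # Power dissipated by lateral current flow in the emitter between fingers.
        resistive = sheet_ohm_sq * jmp_a_cm2 * spacing_cm ** 2 / (12.0 * vmp_v)
        return shading + resistive

    spacings = np.linspace(0.05, 1.0, 500)            # candidate spacings [cm]
    losses = fractional_loss(spacings)
    best = spacings[np.argmin(losses)]
    print(f"optimum finger spacing ~ {best:.2f} cm, "
          f"total fractional loss ~ {losses.min():.3f}")
    ```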

  17. Security for grids

    Energy Technology Data Exchange (ETDEWEB)

    Humphrey, Marty; Thompson, Mary R.; Jackson, Keith R.

    2005-08-14

    Securing a Grid environment presents a distinctive set of challenges. This paper groups the activities that need to be secured into four categories: naming and authentication; secure communication; trust, policy, and authorization; and enforcement of access control. It examines the current state of the art in securing these processes and introduces new technologies that promise to meet the security requirements of Grids more completely.

  18. Cas4-Dependent Prespacer Processing Ensures High-Fidelity Programming of CRISPR Arrays.

    Science.gov (United States)

    Lee, Hayun; Zhou, Yi; Taylor, David W; Sashital, Dipali G

    2018-04-05

    CRISPR-Cas immune systems integrate short segments of foreign DNA as spacers into the host CRISPR locus to provide molecular memory of infection. Cas4 proteins are widespread in CRISPR-Cas systems and are thought to participate in spacer acquisition, although their exact function remains unknown. Here we show that Bacillus halodurans type I-C Cas4 is required for efficient prespacer processing prior to Cas1-Cas2-mediated integration. Cas4 interacts tightly with the Cas1 integrase, forming a heterohexameric complex containing two Cas1 dimers and two Cas4 subunits. In the presence of Cas1 and Cas2, Cas4 processes double-stranded substrates with long 3' overhangs through site-specific endonucleolytic cleavage. Cas4 recognizes PAM sequences within the prespacer and prevents integration of unprocessed prespacers, ensuring that only functional spacers will be integrated into the CRISPR array. Our results reveal the critical role of Cas4 in maintaining fidelity during CRISPR adaptation, providing a structural and mechanistic model for prespacer processing and integration. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Importance of Grid Center Arrangement

    Science.gov (United States)

    Pasaogullari, O.; Usul, N.

    2012-12-01

    In Digital Elevation Modeling, grid size is accepted to be the most important parameter. Regardless of the point density and/or scale of the source data, it is freely decided by the user. Most of the time, the arrangement of the grid centers is ignored; most GIS packages even omit the choice of the grid center coordinates. In our study, the importance of the arrangement of grid centers is investigated. Using the analogy between a "Raster Grid DEM" and a "Bitmap Image", the importance of the placement of grid centers in DEMs is measured. The study has been conducted on four different grid DEMs obtained from a half ellipsoid. These grid DEMs are obtained in such a way that they are half a grid size apart from each other. The resulting grid DEMs are investigated through similarity measures. Image processing scientists use different measures to investigate the dis/similarity between images and the amount of different information they carry. The grid DEMs are projected to a finer grid in order to co-center them. Similarity measures are then applied to each pair of grid DEMs. These similarity measures are adapted to DEMs with band reduction and real number operations. One of the measures gives a function graph (joint probability distribution function graphs) and the others give measure matrices. Application of the similarity measures to six grid DEM pairs shows interesting results. Although these four grid DEMs were created with the same method for the same area, surprisingly, thirteen out of 14 measures state that the grid DEMs half a grid size apart are different from each other. The results indicated that although the grid DEMs carry mutual information, they also hold additional individual information. In other words, grid DEMs constructed half a grid size apart hold non-redundant information.
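
    One way such a similarity measure can be computed is sketched below: the mutual information between two co-centered grid DEMs is estimated from a joint histogram of quantized elevations. The bin count, the synthetic half-ellipsoid surfaces, and the half-cell offset are illustrative assumptions; the specific measures used in the study are not reproduced.

    ```python
    # Sketch of one similarity measure (mutual information) between two co-centered
    # grid DEMs after quantizing elevations into bins; the bin count and test
    # surfaces are illustrative assumptions, not the measures used in the study.
    import numpy as np

    def mutual_information(dem_a, dem_b, bins=32):
        """Mutual information (bits) between two equally sized elevation grids."""
        hist, _, _ = np.histogram2d(dem_a.ravel(), dem_b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    if __name__ == "__main__":
        # Half-ellipsoid sampled on two grids shifted by half a grid cell.
        n, cell = 200, 1.0
        y, x = np.mgrid[0:n, 0:n] * cell
        def half_ellipsoid(xc, yc):
            r2 = 1 - ((x - xc) / 80) ** 2 - ((y - yc) / 80) ** 2
            return 40 * np.sqrt(np.clip(r2, 0, None))
        dem1 = half_ellipsoid(100, 100)
        dem2 = half_ellipsoid(100 + cell / 2, 100 + cell / 2)   # half-cell offset
        print(f"MI(dem1, dem1) = {mutual_information(dem1, dem1):.2f} bits")
        print(f"MI(dem1, dem2) = {mutual_information(dem1, dem2):.2f} bits")
    ```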

  20. Evaluation of a new welding process used to joint grids thimbles properly in 16 x 16 fuel assemblies, using the finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Schettino, Carlos Frederico Mattos, E-mail: DPNcarlosschettino@inb.gov.b [Industrias Nucleares do Brasil S.A. (DPN/INB), Resende, RJ (Brazil). Diretoria de Producao Nuclear; Silva, Marcio Adriano Coelho da, E-mail: marcio.adriano@inb.gov.b [Industrias Nucleares do Brasil S.A. (GEACON/INB), Resende, RJ (Brazil). Gerencia de Analise Tecnica do Combustivel

    2011-07-01

    The present work aims to structurally evaluate the new welding process used to join the grids to the guide thimbles in 16 x 16 fuel assemblies. The new process increases the number of welding points between grids and guide thimbles from 4 to 8, giving more stiffness to the whole structure. A finite element model of the fuel assembly design was generated in the program ANSYS 12.1. The model was built using BEAM-4 elements and several spring-type elements. The analysis covered specific loads and displacements, simulating the boundary conditions found during small deflections acting on the entire structure. The analysis was developed by simulating a finite element model of a fuel assembly with four weld points on each grid cell containing a guide thimble, and then comparing its results with those of another model with eight weld points on each grid cell containing a guide thimble. The behavior of the structure under the applied displacements and the related results of the analysis, mainly the stiffness, were satisfactory. The results of this analysis were used to prove that the new grid-to-guide-thimble welding process improves the dimensional stability when submitted to the loads and displacements required by the fuel assembly design. The performed analysis provided INB with information of great importance for the continued development of new manufacturing processes and for improving the design of the current fuel assemblies used in reactors. (author)

  1. Evaluation of a new welding process used to joint grids thimbles properly in 16 x 16 fuel assemblies, using the finite element method

    International Nuclear Information System (INIS)

    Schettino, Carlos Frederico Mattos; Silva, Marcio Adriano Coelho da

    2011-01-01

    The present work aims to structurally evaluate the new welding process used to join the grids to the guide thimbles in 16 x 16 fuel assemblies. The new process increases the number of welding points between grids and guide thimbles from 4 to 8, giving more stiffness to the whole structure. A finite element model of the fuel assembly design was generated in the program ANSYS 12.1. The model was built using BEAM-4 elements and several spring-type elements. The analysis covered specific loads and displacements, simulating the boundary conditions found during small deflections acting on the entire structure. The analysis was developed by simulating a finite element model of a fuel assembly with four weld points on each grid cell containing a guide thimble, and then comparing its results with those of another model with eight weld points on each grid cell containing a guide thimble. The behavior of the structure under the applied displacements and the related results of the analysis, mainly the stiffness, were satisfactory. The results of this analysis were used to prove that the new grid-to-guide-thimble welding process improves the dimensional stability when submitted to the loads and displacements required by the fuel assembly design. The performed analysis provided INB with information of great importance for the continued development of new manufacturing processes and for improving the design of the current fuel assemblies used in reactors. (author)

  2. The Grid

    CERN Document Server

    Klotz, Wolf-Dieter

    2005-01-01

    Grid technology is widely emerging. Grid computing, most simply stated, is distributed computing taken to the next evolutionary level. The goal is to create the illusion of a simple, robust, yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources. This talk will give a short history of how, out of lessons learned from the Internet, the vision of Grids was born. Then the extensible anatomy of a Grid architecture will be discussed. The talk will end by presenting a selection of major Grid projects in Europe and the US and, if time permits, a short on-line demonstration.

  3. Magnetic alloy nanowire arrays with different lengths: Insights into the crossover angle of magnetization reversal process

    Energy Technology Data Exchange (ETDEWEB)

    Samanifar, S.; Alikhani, M. [Department of Physics, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Almasi Kashi, M., E-mail: almac@kashanu.ac.ir [Department of Physics, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Institute of Nanoscience and Nanotechnology, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Ramazani, A. [Department of Physics, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Institute of Nanoscience and Nanotechnology, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of); Montazer, A.H. [Institute of Nanoscience and Nanotechnology, University of Kashan, Kashan 87317-51167 (Iran, Islamic Republic of)

    2017-05-15

    Nanoscale magnetic alloy wires are being actively investigated, providing fundamental insights into tuning properties in magnetic data storage and processing technologies. However, previous studies give trivial information about the crossover angle of magnetization reversal process in alloy nanowires (NWs). Here, magnetic alloy NW arrays with different compositions, composed of Fe, Co and Ni have been electrochemically deposited into hard-anodic aluminum oxide templates with a pore diameter of approximately 150 nm. Under optimized conditions of alumina barrier layer and deposition bath concentrations, the resulting alloy NWs with aspect ratio and saturation magnetization (Ms) up to 550 and 1900 emu cm⁻³, respectively, are systematically investigated in terms of composition, crystalline structure and magnetic properties. Using angular dependence of coercivity extracted from hysteresis loops, the reversal processes are evaluated, indicating non-monotonic behavior. The crossover angle (θc) is found to depend on NW length and Ms. At a constant Ms, increasing NW length decreases θc, thereby decreasing the involvement of vortex mode during the magnetization reversal process. On the other hand, decreasing Ms decreases θc in large aspect ratio (>300) alloy NWs. Phenomenologically, it is newly found that increasing Ni content in the composition decreases θc. The angular first-order reversal curve (AFORC) measurements including the irreversibility of magnetization are also investigated to gain a more detailed insight into θc. - Highlights: • Magnetic alloy NWs with aspect ratios up to 550 were fabricated into hard-AAO templates. • Morphology, composition, crystal structure and magnetic properties were investigated. • Angular dependence of coercivity was used to describe the magnetization reversal process. • The crossover angle of magnetization reversal was found to depend on NW length and Ms.
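
    How a crossover angle θc can be extracted numerically is sketched below: it is the angle at which two angular-coercivity curves, one per reversal mode, intersect. The Stoner-Wohlfarth expression used for the coherent-rotation curve is the standard textbook model, while the second curve is synthetic stand-in data rather than the vortex/curling model relevant to these nanowires.

    ```python
    # Sketch of extracting a crossover angle theta_c numerically: the angle where
    # two angular-coercivity curves (one per reversal mode) intersect.  The
    # Stoner-Wohlfarth expression below is the standard coherent-rotation model;
    # the second curve is synthetic stand-in data, not the paper's curling model.
    import numpy as np

    def hc_coherent(theta, hk=1.0):
        """Stoner-Wohlfarth switching field vs. angle (in radians), in units of HK."""
        c, s = np.abs(np.cos(theta)), np.abs(np.sin(theta))
        return hk / (c ** (2 / 3) + s ** (2 / 3)) ** 1.5

    def crossover_angle(theta, hc_mode_a, hc_mode_b):
        """First angle at which the two curves cross (sign change of the difference)."""
        diff = hc_mode_a - hc_mode_b
        idx = np.flatnonzero(np.sign(diff[:-1]) != np.sign(diff[1:]))
        if idx.size == 0:
            return None
        i = idx[0]                                   # linear interpolation of the root
        w = diff[i] / (diff[i] - diff[i + 1])
        return theta[i] + w * (theta[i + 1] - theta[i])

    theta = np.radians(np.linspace(1, 89, 400))
    hc_a = hc_coherent(theta)
    hc_b = 0.9 - 0.5 * np.cos(theta) ** 2            # synthetic second-mode curve
    tc = crossover_angle(theta, hc_a, hc_b)
    print(f"crossover angle ~ {np.degrees(tc):.1f} deg")
    ```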

  4. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    Science.gov (United States)

    Chuang, Chen

    Dilute nitride GaNP is a promising material for opto-electronic applications due to its band gap tunability. The efficiency of a GaNxP1-x/GaNyP1-y core-shell nanowire solar cell (NWSC) is expected to reach as high as 44% with 1% N in the core and 9% N in the shell. By developing such high-efficiency NWSCs on silicon substrates, the cost of solar photovoltaics can be further reduced to 61 $/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. Therefore, a suitable NWSC structure and fabrication process need to be developed to achieve this promising NWSC. This thesis is devoted to the study of the development of the fabrication process of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells. The thesis is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of gallium nitride phosphide nanowire solar cells. The designs for the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer and metal contacts are discussed and fabricated. The properties of these NWSCs are also characterized to point out the future development of gallium nitride phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective area growth of nanowires to improve the structure of the core-shell NWSC. The fabrication process of the nano-hole templates and the results are presented. To obtain consistent features of the nano-hole template, the Taguchi method is used to optimize the fabrication process of the nano-hole templates.

  5. Synchrophasor Sensing and Processing based Smart Grid Security Assessment for Renewable Energy Integration

    Science.gov (United States)

    Jiang, Huaiguang

    With the evolution of energy and power systems, the emerging Smart Grid (SG) is mainly characterized by distributed renewable energy generation, demand-response control and huge amounts of heterogeneous data sources. Widely distributed synchrophasor sensors, such as phasor measurement units (PMUs) and fault disturbance recorders (FDRs), can record multi-modal signals for power system situational awareness and renewable energy integration. An effective and economical approach is proposed for wide-area security assessment. This approach is based on wavelet analysis for detecting and locating short-term and long-term faults in the SG, using voltage signals collected by distributed synchrophasor sensors. A data-driven approach for fault detection, identification and location is proposed and studied. This approach is based on matching pursuit decomposition (MPD) using a Gaussian atom dictionary, a hidden Markov model (HMM) of real-time frequency and voltage variation features, and fault contour maps generated by machine learning algorithms in SG systems. In addition, considering economic issues, the placement optimization of distributed synchrophasor sensors is studied to reduce the number of sensors without affecting the accuracy and effectiveness of the proposed approach. Furthermore, because natural hazards are a critical issue for power system security, the approach is studied under different types of faults caused by natural hazards. A fast steady-state approach is proposed for voltage security of power systems with a wind power plant connected. The impedance matrix can be calculated from the voltage and current information collected by the PMUs. Based on the impedance matrix, the locations in the SG that have the greatest impact on the voltage at the wind power plant's point of interconnection can be identified. Furthermore, because this dynamic voltage security assessment method relies on time-domain simulations of faults at different locations, the proposed approach
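
    A minimal sketch of the wavelet-based disturbance detection idea is shown below: a synthetic voltage waveform containing a sag is decomposed with a discrete wavelet transform and the finest-scale detail coefficients are thresholded. The mother wavelet (db4), the decomposition level, the robust threshold, and the sampling rate are assumptions for illustration, not the settings of the proposed approach.

    ```python
    # Sketch of a wavelet-based disturbance detector: a voltage waveform with a
    # sag is decomposed and the finest-scale detail coefficients are thresholded.
    # The mother wavelet, level and threshold are illustrative assumptions.
    import numpy as np
    import pywt

    fs = 5000.0                                   # assumed sensor sampling rate
    t = np.arange(0, 0.4, 1 / fs)
    v = np.sin(2 * np.pi * 50 * t)                # 50 Hz voltage, 1 p.u.
    v[(t >= 0.2025) & (t < 0.2625)] *= 0.6        # 40% sag lasting three cycles

    coeffs = pywt.wavedec(v, "db4", level=4)
    d1 = coeffs[-1]                               # finest-scale detail coefficients
    threshold = 5.0 * np.median(np.abs(d1)) / 0.6745   # robust noise-based level
    hits = np.flatnonzero(np.abs(d1) > threshold)
    hits = hits[(hits > 8) & (hits < d1.size - 8)]     # skip boundary-extension artifacts
    # Each d1 coefficient spans roughly 2 samples of the original signal.
    print("disturbance detected near t =", hits * 2 / fs, "s")
    ```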

  6. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    Science.gov (United States)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  7. Low-Noise CMOS Circuits for On-Chip Signal Processing in Focal-Plane Arrays

    Science.gov (United States)

    Pain, Bedabrata

    The performance of focal-plane arrays can be significantly enhanced through the use of on-chip signal processing. Novel, in-pixel, on-focal-plane, analog signal-processing circuits for high-performance imaging are presented in this thesis. The presence of high background radiation is a major impediment to infrared focal-plane array design. An in-pixel background-suppression scheme, using a dynamic analog current memory circuit, is described. The scheme also suppresses spatial noise that results from response non-uniformities of photo-detectors, leading to background-limited infrared detector readout performance. Two new, low-power, compact current memory circuits, optimized for operation at the ultra-low current levels required in infrared detection, are presented. The first is a self-cascading current memory that increases the output impedance, and the second is a novel switch feed-through reducing current memory, implemented using error-current feedback. This circuit can operate with a residual absolute error of less than 0.1%. The storage time of the memory is long enough to also find applications in neural network circuits. In addition, a voltage-mode, accurate, low-offset, low-power, high-uniformity, random-access sample-and-hold cell, implemented using a CCD with feedback, is also presented for use in background-suppression and neural network applications. A new, low-noise, ultra-low-level signal readout technique, implemented by individually counting photo-electrons within the detection pixel, is presented. The output of each unit cell is a digital word corresponding to the intensity of the photon flux, and the readout is noise free. This technique requires the use of unit-cell amplifiers that feature ultra-high gain, low power, self-biasing capability and noise at sub-electron levels. Both single-input and differential-input implementations of such amplifiers are investigated. A noise analysis technique is presented for analyzing sampled

  8. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    NARCIS (Netherlands)

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic
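
    A minimal sketch of the level-setting problem is given below: LED dimming levels are chosen by least squares so that the superposition of single-LED patterns approximates a uniform target illuminance. The Lambertian-style single-LED pattern, the room geometry, and the plain least-squares solution are assumptions for illustration; the paper's cost function and analytical results are not reproduced.

    ```python
    # Sketch of setting LED dimming levels to approximate uniform illuminance on a
    # target plane by least squares.  The Lambertian-style single-LED pattern and
    # the room geometry are illustrative assumptions, not the paper's model.
    import numpy as np

    # Ceiling: 8 x 8 LED array; floor: grid of evaluation points (all units metres).
    leds = np.array([(x, y) for x in np.linspace(0.5, 4.5, 8)
                             for y in np.linspace(0.5, 4.5, 8)])
    pts = np.array([(x, y) for x in np.linspace(0, 5, 25)
                            for y in np.linspace(0, 5, 25)])
    h = 2.5                                           # ceiling height above plane

    # Basic illumination pattern of one LED at a point (Lambertian order m = 1):
    # E ~ h^(m+1) / d^(m+3), with d the LED-to-point distance.
    d2 = ((pts[:, None, :] - leds[None, :, :]) ** 2).sum(-1) + h ** 2
    K = h ** 2 / d2 ** 2                              # shape (n_points, n_leds)

    target = np.full(len(pts), K.sum(1).mean())       # desired uniform illuminance
    levels, *_ = np.linalg.lstsq(K, target, rcond=None)
    levels = np.clip(levels, 0.0, 1.5)                # keep levels physically sane

    E = K @ levels
    print(f"relative MSE = {np.mean((E - target) ** 2) / target[0] ** 2:.3e}")
    ```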

  9. A novel, substrate independent three-step process for the growth of uniform ZnO nanorod arrays

    International Nuclear Information System (INIS)

    Byrne, D.; McGlynn, E.; Henry, M.O.; Kumar, K.; Hughes, G.

    2010-01-01

    We report a three-step deposition process for uniform arrays of ZnO nanorods, involving chemical bath deposition of aligned seed layers, followed by nanorod nucleation sites, and subsequent vapour phase transport growth of nanorods. This combines chemical bath deposition techniques, which enable substrate-independent seeding and nucleation site generation, with vapour phase transport growth of ZnO nanorod arrays of high crystalline and optical quality. Our data indicate that the three-step process produces uniform nanorod arrays with narrow and rather monodisperse rod diameters (∼ 70 nm) across substrates of centimetre dimensions. X-ray photoelectron spectroscopy, scanning electron microscopy and X-ray diffraction were used to study the growth mechanism and characterise the nanostructures.

  10. Comparison of Field Measurements and EMT Simulation Results on a Multi-Level STATCOM for Grid Integration of London Array Wind Farm

    DEFF Research Database (Denmark)

    Glasdam, Jakob Bærholm; Kocewiak, Lukasz; Hjerrild, Jesper

    2014-01-01

    The purpose of this paper is to develop and validate an electromagnetic transient (EMT) generic model of the modular multi-level cascaded converter (MMCC) STATCOM based on comparison with field measurements. For this purpose, measurement data have...... been acquired on a commercial ±50 MVAr state-of-the-art (SOA) MMCC STATCOM. The STATCOM is located at the point of common connection (PCC) at the world’s largest OWPP, London Array (LAOWPP). According to the authors’ knowledge, the presented paper will be the first of its kind to compare a detailed...... of the evaluation of the simulation models of the OWPP components.

  11. Mapping of grid faults and grid codes

    DEFF Research Database (Denmark)

    Iov, Florin; Hansen, A.D.; Sørensen, P.

    The present report is a part of the research project "Grid fault and design basis for wind turbine" supported by Energinet.dk through the grant PSO F&U 6319. The objective of this project is to investigate into the consequences of the new grid connection requirements for the fatigue and extreme loads of wind turbines. The goal is also to clarify and define possible new directions in the certification process of power plant wind turbines, namely wind turbines, which participate actively in the stabilisation of power systems. Practical experience shows that there is a need...... challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview over the frequency of grid faults and the grid connection requirements in different relevant countries is done in this report. The most relevant study cases for the quantification of the loads......

  12. Mapping of grid faults and grid codes

    DEFF Research Database (Denmark)

    Iov, F.; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    The present report is a part of the research project ''Grid fault and design basis for wind turbine'' supported by Energinet.dk through the grant PSO F&U 6319. The objective of this project is to investigate into the consequences of the new grid connection requirements for the fatigue and extreme loads of wind turbines. The goal is also to clarify and define possible new directions in the certification process of power plant wind turbines, namely wind turbines, which participate actively in the stabilisation of power systems. Practical experience shows that there is a need...... challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview over the frequency of grid faults and the grid connection requirements in different relevant countries is done in this report. The most relevant study cases for the quantification of the loads......

  13. Proceedings of the Adaptive Sensor Array Processing Workshop (12th) Held in Lexington, MA on 16-18 March 2004 (CD-ROM)

    National Research Council Canada - National Science Library

    James, F

    2004-01-01

    ...: The twelfth annual workshop on Adaptive Sensor Array Processing presented a diverse agenda featuring new work on adaptive methods for communications, radar and sonar, algorithmic challenges posed...

  14. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
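
    The work-farm pattern described above can be modelled in ordinary software, as in the sketch below: one input stream is fanned out to a set of identical workers and merged back into one ordered output stream. Here multiprocessing.Pool stands in for the MPPA's hardware worker objects and self-synchronizing channels; the block size, worker count, and the stand-in per-block computation are illustrative assumptions.

    ```python
    # Sketch of the "work farm" design pattern described above, modelled in
    # software: one input stream fanned out to a parallel set of identical worker
    # objects and merged into one output stream.  multiprocessing.Pool stands in
    # for the MPPA's hardware worker objects and self-synchronizing channels.
    from multiprocessing import Pool

    def worker(block):
        """One worker object: process a block from the input stream (here, a
        stand-in 'block transform' that just sums squared samples)."""
        return sum(v * v for v in block)

    def work_farm(stream, n_workers=4, chunk=16):
        blocks = [stream[i:i + chunk] for i in range(0, len(stream), chunk)]
        with Pool(n_workers) as pool:
            # imap preserves the order of the input stream in the output stream.
            for result in pool.imap(worker, blocks):
                yield result

    if __name__ == "__main__":
        samples = list(range(256))
        print(list(work_farm(samples))[:4])
    ```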

  15. Meet the Grid

    CERN Multimedia

    Yurkewicz, Katie

    2005-01-01

    Today's cutting-edge scientific projects are larger, more complex, and more expensive than ever. Grid computing provides the resources that allow researchers to share knowledge, data, and computer processing power across boundaries

  16. The GRID seminar

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    The Grid infrastructure is a key part of the computing environment for the simulation, processing and analysis of the data of the LHC experiments. These experiments depend on the availability of a worldwide Grid infrastructure in several aspects of their computing model. The Grid middleware will hide much of the complexity of this environment from the user, organizing all the resources into a coherent virtual computer center. A general description of the elements of the Grid, their interconnections and their use by the experiments will be given in this talk. The computational and storage capability of the Grid is attracting other research communities beyond high energy physics. Examples of these applications will also be presented.

  17. Constitutive Modelling in Thermomechanical Processes, Using The Control Volume Method on Staggered Grid

    DEFF Research Database (Denmark)

    Thorborg, Jesper

    , however, is constituted by the implementation of the $J_2$ flow theory in the control volume method. To apply the control volume formulation to the process of hardening concrete, viscoelastic stress-strain models have been examined in terms of various rheological models. The generalized 3D models are based...... on two different suggestions in the literature, that is, compressible or incompressible behaviour of the viscous response in the dashpot element. Numerical implementation of the models has shown very good agreement with corresponding analytical solutions. The viscoelastic solid mechanical model is used...

  18. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance the energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results; furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.
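
    A minimal sketch of the regular-sampling stage is given below: scattered values from an unstructured grid are resampled onto a regular grid before storage and visualization, and the resulting reduction in data volume is reported. The point count, target resolution, and linear interpolation scheme are illustrative choices, not those used in the pipeline.

    ```python
    # Sketch of a regular-sampling stage: scattered (unstructured-grid) values
    # are resampled onto a regular grid before storage/visualization.  Point
    # counts, resolution and the interpolation scheme are illustrative choices.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(7)
    pts = rng.random((50_000, 2))                         # unstructured node positions
    vals = np.sin(6 * pts[:, 0]) * np.cos(4 * pts[:, 1])  # field stored at the nodes

    nx = ny = 128                                         # regular sample resolution
    gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
    sampled = griddata(pts, vals, (gx, gy), method="linear")

    orig_bytes = pts.nbytes + vals.nbytes
    new_bytes = sampled.nbytes
    print(f"stored {new_bytes / orig_bytes:.1%} of the original data volume")
    ```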

  19. Al-doped ZnO/Ag grid hybrid transparent conductive electrodes fabricated using a low-temperature process

    Energy Technology Data Exchange (ETDEWEB)

    An, Ha-Rim; Oh, Sung-Tag [Department of Materials Science and Engineering, Seoul National University of Science and Technology, Seoul 139-743 (Korea, Republic of); Kim, Chang Yeoul [Future Convergence Ceramic Division, Korea Institute Ceramic Engineering and Technology (KICET), Seoul 233-5 (Korea, Republic of); Baek, Seong-Ho [Energy Research Division, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 711-873 (Korea, Republic of); Park, Il-Kyu, E-mail: ikpark@ynu.ac.kr [Department of Electronic Engineering, Yeungnam University, Gyeongbuk 712-749 (Korea, Republic of); Ahn, Hyo-Jin, E-mail: hjahn@seoultech.ac.kr [Department of Materials Science and Engineering, Seoul National University of Science and Technology, Seoul 139-743 (Korea, Republic of)

    2014-12-05

    Highlights: • Al-doped ZnO/Ag transparent conductive electrode is fabricated at low temperature. • Performance of the hybrid transparent conductive electrode affected by the structure. • The performance enhancement mechanism is suggested. - Abstract: Al-doped ZnO (AZO)/Ag grid hybrid transparent conductive electrode (TCE) structures were fabricated at a low temperature by using electrohydrodynamic jet printing for the Ag grids and atomic layer deposition for the AZO layers. The structural investigations showed that the AZO/Ag grid hybrid structures consisted of Ag grid lines formed by Ag particles and the AZO layer covering the inter-spacing between the Ag grid lines. The Ag particles comprising the Ag grid lines were also capped by thin AZO layers, and the coverage of the AZO layers was increased with increasing the thickness of the AZO layer. Using the optimum thickness of AZO layer of 70 nm, the hybrid TCE structure showed an electrical resistivity of 5.45 × 10⁻⁵ Ω cm, an optical transmittance of 80.80%, and a figure of merit value of 1.41 × 10⁻² Ω⁻¹. The performance enhancement was suggested based on the microstructural investigations on the AZO/Ag grid hybrid structures.
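
    As a rough consistency check, the reported numbers are close to what the Haacke figure of merit (Φ = T¹⁰/R_sheet) gives when the sheet resistance is estimated from the bulk resistivity of a 70 nm film; both the use of the Haacke definition and the film-thickness-based sheet resistance are assumptions here, since the abstract does not state which figure of merit was used.

    ```python
    # Consistency check of the reported numbers, assuming the Haacke figure of
    # merit (phi = T^10 / R_sheet) and a sheet resistance taken from the bulk
    # resistivity of the 70 nm AZO layer -- both are assumptions for illustration.
    rho = 5.45e-5        # electrical resistivity [ohm cm]
    t_cm = 70e-7         # AZO layer thickness: 70 nm in cm
    T = 0.8080           # optical transmittance

    R_sheet = rho / t_cm                 # [ohm per square]
    phi = T ** 10 / R_sheet              # Haacke figure of merit [ohm^-1]
    print(f"R_sheet ~ {R_sheet:.1f} ohm/sq, phi ~ {phi:.2e} ohm^-1")
    # -> R_sheet ~ 7.8 ohm/sq, phi ~ 1.5e-02 ohm^-1, close to the reported 1.41e-2.
    ```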

  20. An Optimized Multicolor Point-Implicit Solver for Unstructured Grid Applications on Graphics Processing Units

    Science.gov (United States)

    Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana

    2016-01-01

    In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
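
    A minimal sketch of the underlying relaxation scheme is given below: unknowns are grouped by color so that no two coupled points share a color, and within a color each point's small diagonal block is solved while off-diagonal contributions use the latest available iterate. The Python/NumPy formulation, the 1D red-black test problem, and the block size are illustrative assumptions, not the GPU implementation described in the work.

    ```python
    # Sketch of one multicolor point-implicit sweep: unknowns are grouped by color
    # so no two coupled points share a color, and each point's small diagonal block
    # is solved (here with a dense solve) while off-diagonal contributions use the
    # latest available iterate.  Matrix layout and sizes are illustrative.
    import numpy as np

    def multicolor_sweep(diag, off, nbrs, colors, x, b):
        """One relaxation sweep over a block-sparse system.

        diag   : (n, bs, bs) diagonal blocks
        off    : list over points of (k_i, bs, bs) off-diagonal blocks
        nbrs   : list over points of the k_i neighbor indices
        colors : (n,) color id of each point
        """
        for c in np.unique(colors):
            for i in np.flatnonzero(colors == c):   # points of one color are independent
                r = b[i] - sum(off[i][k] @ x[j] for k, j in enumerate(nbrs[i]))
                x[i] = np.linalg.solve(diag[i], r)
        return x

    if __name__ == "__main__":
        # 1D chain of n points with 5x5 blocks, red-black coloring.
        n, bs = 64, 5
        rng = np.random.default_rng(3)
        diag = np.eye(bs) * 10 + rng.random((n, bs, bs))
        nbrs = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
        off = [rng.random((len(nb), bs, bs)) * 0.1 for nb in nbrs]
        colors = np.arange(n) % 2
        b = rng.random((n, bs))
        x = np.zeros((n, bs))
        for _ in range(20):
            x = multicolor_sweep(diag, off, nbrs, colors, x, b)
        print("residual norm after 20 sweeps:",
              np.linalg.norm([b[i] - diag[i] @ x[i]
                              - sum(off[i][k] @ x[j] for k, j in enumerate(nbrs[i]))
                              for i in range(n)]))
    ```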

  1. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    Energy Technology Data Exchange (ETDEWEB)

    Kvaerna, T.; Gibbons. S.J.; Ringdal, F; Harris, D.B.

    2007-01-30

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst reviewed contexts: • one or more fixed frequency bands should be chosen for each phase tested for. • the frequency band providing the most stable parameter estimates varies from site to site and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site. • slowness corrections applied depend strongly on the frequency band chosen. • the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain. For this reason, the automatic template location estimates provided here are frequently far better than those obtained by

  2. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    International Nuclear Information System (INIS)

    Kvaerna, T.; Gibbons. S.J.; Ringdal, F; Harris, D.B.

    2007-01-01

    primarily the result of spurious identification and incorrect association of phases, and of excessive variability in estimates for the velocity and direction of incoming seismic phases. The mitigation of these causes has led to the development of two complementary techniques for classifying seismic sources by testing detected signals under mutually exclusive event hypotheses. Both of these techniques require appropriate calibration data from the region to be monitored, and are therefore ideally suited to mining areas or other sites with recurring seismicity. The first such technique is a classification and location algorithm where a template is designed for each site being monitored which defines which phases should be observed, and at which times, for all available regional array stations. For each phase, the variability of measurements (primarily the azimuth and apparent velocity) from previous events is examined and it is determined which processing parameters (array configuration, data window length, frequency band) provide the most stable results. This allows us to define optimal diagnostic tests for subsequent occurrences of the phase in question. The calibration of templates for this project revealed significant results with major implications for seismic processing in both automatic and analyst reviewed contexts: (1) one or more fixed frequency bands should be chosen for each phase tested for; (2) the frequency band providing the most stable parameter estimates varies from site to site and a frequency band which provides optimal measurements for one site may give substantially worse measurements for a nearby site; (3) slowness corrections applied depend strongly on the frequency band chosen; (4) the frequency band providing the most stable estimates is often neither the band providing the greatest SNR nor the band providing the best array gain. For this reason, the automatic template location estimates provided here are frequently far better than those obtained by

  3. Rapid fabrication of an ordered nano-dot array by the combination of nano-plastic forming and annealing methods

    International Nuclear Information System (INIS)

    Yoshino, Masahiko; Ohsawa, Hiroki; Yamanaka, Akinori

    2011-01-01

    In this paper, a new fabrication method for an ordered nano-dot array is developed. Combination of coating, nano-plastic forming and annealing processes is studied to produce uniformly sized and ordered gold nano-dot array on a quartz glass substrate. The experimental results reveal that patterning of a groove grid on the gold-coated substrate with NPF is effective to obtain the ordered gold nano-dot array. In the proposed fabrication process, the size of the gold nano-dot can be controlled by adjusting the groove grid size. A minimum gold nano-dot array fabricated on a quartz-glass substrate was 93 nm in diameter and 100 nm in pitch. Furthermore, the mechanism of nano-dot array generation by the presented process is investigated. Using a theoretical model it is revealed that the proposed method is capable of fabrication of smaller nano-dots than 10 nm by controlling process conditions adequately.

  4. Process-morphology scaling relations quantify self-organization in capillary densified nanofiber arrays.

    Science.gov (United States)

    Kaiser, Ashley L; Stein, Itai Y; Cui, Kehang; Wardle, Brian L

    2018-02-07

    Capillary-mediated densification is an inexpensive and versatile approach to tune the application-specific properties and packing morphology of bulk nanofiber (NF) arrays, such as aligned carbon nanotubes. While NF length governs elasto-capillary self-assembly, the geometry of cellular patterns formed by capillary densified NFs cannot be precisely predicted by existing theories. This originates from the recently quantified orders of magnitude lower than expected NF array effective axial elastic modulus (E), and here we show via parametric experimentation and modeling that E determines the width, area, and wall thickness of the resulting cellular pattern. Both experiments and models show that further tuning of the cellular pattern is possible by altering the NF-substrate adhesion strength, which could enable the broad use of this facile approach to predictably pattern NF arrays for high value applications.

  5. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    OpenAIRE

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic illumination pattern, i.e., the light pattern of a single LED, and the setting of LED illumination levels, respectively. Thereafter, we propose to use the relative mean squared error as the cost function ...
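    A minimal sketch of the setup described above, under assumed numbers (an 8×8 LED grid and an illustrative single-LED footprint, neither taken from the paper): the resulting illuminance is the convolution of the LED dimming levels placed on a regular grid with the basic illumination pattern, and uniformity is scored with a relative mean-squared error against the mean level:

        # Illustrative only: illuminance as a convolution of the LED grid with the
        # basic single-LED pattern, scored by relative mean-squared error.
        import numpy as np
        from scipy.signal import convolve2d

        def relative_mse(illum):
            mean = illum.mean()
            return np.mean((illum - mean) ** 2) / mean ** 2

        x = np.linspace(-3, 3, 61)
        X, Y = np.meshgrid(x, x)
        single_led = 1.0 / (1.0 + X**2 + Y**2) ** 1.5   # assumed basic illumination pattern

        levels = np.zeros((61, 61))
        levels[::8, ::8] = 1.0                          # regular LED grid, equal dimming levels
        illum = convolve2d(levels, single_led, mode="same")
        print("relative MSE:", relative_mse(illum))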

  6. Grid Security

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The aim of Grid computing is to enable the easy and open sharing of resources between large and highly distributed communities of scientists and institutes across many independent administrative domains. Convincing site security officers and computer centre managers to allow this to happen in view of today's ever-increasing Internet security problems is a major challenge. Convincing users and application developers to take security seriously is equally difficult. This paper will describe the main Grid security issues, both in terms of technology and policy, that have been tackled over recent years in LCG and related Grid projects. Achievements to date will be described and opportunities for future improvements will be addressed.

  7. Effect of nano Ni additions on the structure and properties of Sn-9Zn and Sn-Zn-3Bi solders in Au/Ni/Cu ball grid array packages

    Energy Technology Data Exchange (ETDEWEB)

    Gain, Asit Kumar [Department of Electronic Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong (Hong Kong); Chan, Y.C. [Department of Electronic Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong (Hong Kong)], E-mail: eeycchan@cityu.edu.hk; Yung, Winco K.C. [Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong)

    2009-05-25

    The effect of nano Ni additions in Sn-9Zn and Sn-8Zn-3Bi solders on their interfacial microstructures and shear loads with Au/Ni/Cu pad metallization in ball grid array (BGA) applications was investigated. After the addition of nano Ni powder in Sn-based lead-free solders, there were no significant changes in the interfacial microstructure. However, in the solder region a very fine Zn-rich phase was observed. Also on the fracture surfaces a fine Zn-Ni compound was found. After the addition of nano Ni powder in Sn-based solders, the shear loads were increased due to a refinement of the microstructure and, in addition, ductile fracture surfaces were clearly observed. The shear loads of the plain Sn-9Zn and Sn-8Zn-3Bi solders after one reflow cycle were about 1798 g and 2059 g, respectively. After the addition of nano Ni powder, their loads were about 2172 g and 2212 g, respectively, after one reflow cycle and their shear loads after eight reflow cycles were about 2099 g and 2081 g, respectively.

  8. Effect of nano Ni additions on the structure and properties of Sn-9Zn and Sn-Zn-3Bi solders in Au/Ni/Cu ball grid array packages

    International Nuclear Information System (INIS)

    Gain, Asit Kumar; Chan, Y.C.; Yung, Winco K.C.

    2009-01-01

    The effect of nano Ni additions in Sn-9Zn and Sn-8Zn-3Bi solders on their interfacial microstructures and shear loads with Au/Ni/Cu pad metallization in ball grid array (BGA) applications was investigated. After the addition of nano Ni powder in Sn-based lead-free solders, there were no significant changes in the interfacial microstructure. However, in the solder region a very fine Zn-rich phase was observed. Also on the fracture surfaces a fine Zn-Ni compound was found. After the addition of nano Ni powder in Sn-based solders, the shear loads were increased due to a refinement of the microstructure and, in addition, ductile fracture surfaces were clearly observed. The shear loads of the plain Sn-9Zn and Sn-8Zn-3Bi solders after one reflow cycle were about 1798 g and 2059 g, respectively. After the addition of nano Ni powder, their loads were about 2172 g and 2212 g, respectively, after one reflow cycle and their shear loads after eight reflow cycles were about 2099 g and 2081 g, respectively.

  9. Processing And Display Of Medical Three Dimensional Arrays Of Numerical Data Using Octree Encoding

    Science.gov (United States)

    Amans, Jean-Louis; Darier, Pierre

    1986-05-01

    Imaging modalities such as X-ray computerized tomography (CT), nuclear medicine and nuclear magnetic resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging.

  10. Mathematical analysis of the real time array PCR (RTA PCR) process

    NARCIS (Netherlands)

    Dijksman, Johan Frederik; Pierik, A.

    2012-01-01

    Real time array PCR (RTA PCR) is a recently developed biochemical technique that measures amplification curves (like with quantitative real time Polymerase Chain Reaction (qRT PCR)) of a multitude of different templates in a sample. It combines two different methods in order to profit from the

  11. SU-F-P-48: The Quantitative Evaluation and Comparison of Image Distortion and Loss of X-Ray Images Between Anti-Scattered Grid and Moire Compensation Processing in Digital Radiography

    International Nuclear Information System (INIS)

    Chung, W; Jung, J; Kang, Y; Chung, W

    2016-01-01

    Purpose: To quantitatively analyze the influence that image processing for Moire elimination has in digital radiography, by comparing images acquired with an optimized anti-scatter grid only against images acquired with a misaligned low-frequency grid paired with software processing. Methods: A special phantom, which does not create scattered radiation, was used to acquire non-grid reference images; these were acquired without any grid. One set of images was acquired with an optimized grid aligned to the detector pixels, and another set was acquired with a misaligned low-frequency grid paired with a Moire elimination processing algorithm. The X-ray technique used took the Bucky factor derived from the non-grid reference images into consideration. For evaluation, we compared the pixel intensities of the images acquired with grids to those of the reference images. Results: When compared to the images acquired with the optimized grid, images acquired with the Moire elimination processing algorithm showed 10 to 50% lower mean contrast values in the ROI. Severe image distortion was found when the object's thickness spanned 7 or fewer pixels. In this case, the contrast value measured from images acquired with the Moire elimination processing algorithm was under 30% of that taken from the reference image. Conclusion: This study shows the potential risk of Moire compensation images in diagnosis. Images acquired with a misaligned low-frequency grid exhibit Moire noise, and the Moire compensation processing algorithm used to remove this noise actually caused image distortion. As a result, fractures and/or calcifications that are present in only a few pixels may not be diagnosed properly. In future work, we plan to evaluate images acquired without a grid but based on 100% image processing, and the potential risks this possesses.

  12. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-10

    A computational electromagnetics software package, FEKO [24], is used to model the antenna arrays, and the RMIM [12] is used to ...

  13. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  14. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    emergence of supercomputers led to the use of computer simulation as an ... Scientific and engineering applications (e.g., Tera grid secure gateway). Collaborative ... Encryption, privacy, protection from malicious software. Physical layer.

  15. Applying Hillslope Hydrology to Bridge between Ecosystem and Grid-Scale Processes within an Earth System Model

    Science.gov (United States)

    Subin, Z. M.; Sulman, B. N.; Malyshev, S.; Shevliakova, E.

    2013-12-01

    Soil moisture is a crucial control on surface energy fluxes, vegetation properties, and soil carbon cycling. Its interactions with ecosystem processes are highly nonlinear across a large range, as both drought stress and anoxia can impede vegetation and microbial growth. Earth System Models (ESMs) generally only represent an average soil-moisture state in grid cells at scales of 50-200 km, and as a result are not able to adequately represent the effects of subgrid heterogeneity in soil moisture, especially in regions with large wetland areas. We addressed this deficiency by developing the first ESM-coupled subgrid hillslope-hydrological model, TiHy (Tiled-hillslope Hydrology), embedded within the Geophysical Fluid Dynamics Laboratory (GFDL) land model. In each grid cell, one or more representative hillslope geometries are discretized into land model tiles along an upland-to-lowland gradient. These geometries represent ~1 km hillslope-scale hydrological features and allow for flexible representation of hillslope profile and plan shapes, in addition to variation of subsurface properties among or within hillslopes. Each tile (which may represent ~100 m along the hillslope) has its own surface fluxes, vegetation state, and vertically-resolved state variables for soil physics and biogeochemistry. Resolution of water state in deep layers (~200 m) down to bedrock allows for physical integration of groundwater transport with unsaturated overlying dynamics. Multiple tiles can also co-exist at the same vertical position along the hillslope, allowing the simulation of ecosystem heterogeneity due to disturbance. The hydrological model is coupled to the vertically-resolved Carbon, Organisms, Respiration, and Protection in the Soil Environment (CORPSE) model, which captures non-linearity resulting from interactions between vertically-heterogeneous soil carbon and water profiles. We present comparisons of simulated water table depth to observations. We examine sensitivities to

  16. Grid Synchronization for Distributed Generations

    DEFF Research Database (Denmark)

    Peyghami, Saeed; Mokhtari, Hossein; Blaabjerg, Frede

    2017-01-01

    Distributed generators (DGs) like photovoltaic arrays, wind turbines, and fuel cell modules, as well as distributed storage (DS) units, introduce some advantages to power systems and make them more reliable, flexible, and controllable in comparison with conventional power systems. Grid inter...

  17. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three-array cameras of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative algorithm of tomographic reconstruction. Currently, these signals are acquired by a PXI standard system at approximately 50 kS/s, with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a standard PXI system. A new architecture model has been developed, making it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real time, the data from all acquired signals in the system among the processing cards and the PXI controller. This way, by distributing the data processing among the system controller and two processing cards, the data processing can be done in parallel with the acquisition. Hence, this system configuration would be able to measure even in long-pulse devices.
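    As a rough consistency check (assuming, which is not stated above, that each 12-bit sample is stored in two bytes), the quoted 3 Mbytes per 0.5 s discharge follows from the channel count and sampling rate:

        \[ 3\ \text{cameras}\times 20\ \text{detectors}\times 50{,}000\ \tfrac{\text{samples}}{\text{s}}\times 0.5\ \text{s}\times 2\ \tfrac{\text{bytes}}{\text{sample}} = 3\times 10^{6}\ \text{bytes}\approx 3\ \text{Mbytes}. \]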

  18. Process Development And Simulation For Cold Fabrication Of Doubly Curved Metal Plate By Using Line Array Roll Set

    International Nuclear Information System (INIS)

    Shim, D. S.; Jung, C. G.; Seong, D. Y.; Yang, D. Y.; Han, J. M.; Han, M. S.

    2007-01-01

    For effective manufacturing of a doubly curved sheet metal, a novel sheet metal forming process is proposed. The suggested process uses a Line Array Roll Set (LARS) composed of a pair of upper and lower roll assemblies in a symmetric manner. The process offers flexibility as compared with the conventional manufacturing processes, because it does not require any complex-shaped die and loss of material by blank-holding is minimized. LARS allows flexibility of the incremental forming process and adopts the principle of bending deformation, resulting in a slight deformation in thickness. Rolls composed of line array roll sets are divided into a driving roll row and two idle roll rows. The arrayed rolls in the central lines of the upper and lower roll assemblies are motor-driven so that they deform and transfer the sheet metal using friction between the rolls and the sheet metal. The remaining rolls are idle rolls, generating bending deformation with driving rolls. Furthermore, all the rolls are movable in any direction so that they are adaptable to any size or shape of the desired three-dimensional configuration. In the process, the sheet is deformed incrementally as deformation proceeds simultaneously in rolling and transverse directions step by step. Consequently, it can be applied to the fabrication of doubly curved ship hull plates by undergoing several passes. In this work, FEM simulations are carried out for verification of the proposed incremental forming system using the chosen design parameters. Based on the results of the simulation, the relationship between the roll set configuration and the curvature of a sheet metal is determined. The process information such as the forming loads and torques acting on every roll is analyzed as important data for the design and development of the manufacturing system

  19. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    International Nuclear Information System (INIS)

    Due-Hansen, J; Poppe, E; Summanwar, A; Jensen, G U; Breivik, L; Wang, D T; Schjølberg-Henriksen, K; Midtbø, K

    2012-01-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing that resulted in a film with sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms has been developed. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key processes that enable the fabrication of CMUT arrays suitable for applications in for instance intravascular cardiology and gastrointestinal imaging. (paper)

  20. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    International Nuclear Information System (INIS)

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-01-01

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage of the cathode-anode gap and the load current for calculating the electromagnetic energy. Lumped-element circuit model of wire arrays was employed to analyze the electromagnetic energy conversion. Inductance as well as resistance of a wire array during the Z-pinch process was also investigated. Experimental data indicate that the electromagnetic energy is mainly converted to magnetic energy and kinetic energy and ohmic heating energy can be neglected before the final stagnation. The kinetic energy can be responsible for the x-ray radiation before the peak power. After the stagnation, the electromagnetic energy coupled by the load continues increasing and the resistance of the load achieves its maximum of 0.6–1.0 Ω in about 10–20 ns
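    A minimal sketch of the energy bookkeeping implied by the lumped-element circuit model (the notation is generic, not necessarily the authors'): the electromagnetic energy delivered to the load is split into magnetic, ohmic and, by difference, kinetic contributions,

        \[ E_{\mathrm{EM}}(t)=\int_{0}^{t}V(t')\,I(t')\,dt',\qquad E_{\mathrm{mag}}(t)=\tfrac{1}{2}\,L(t)\,I^{2}(t),\qquad E_{\mathrm{ohm}}(t)=\int_{0}^{t}R(t')\,I^{2}(t')\,dt', \]

    with \( E_{\mathrm{kin}}\approx E_{\mathrm{EM}}-E_{\mathrm{mag}}-E_{\mathrm{ohm}} \); the time-varying inductance \( L(t) \) follows from the array geometry as the wires move inward, which is how the diagnosed voltage and current translate into the energy partition quoted above.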

  1. Optimal control of stretching process of flexible solar arrays on spacecraft based on a hybrid optimization strategy

    Directory of Open Access Journals (Sweden)

    Qijia Yao

    2017-07-01

    Full Text Available The optimal control of multibody spacecraft during the stretching process of solar arrays is investigated, and a hybrid optimization strategy based on Gauss pseudospectral method (GPM and direct shooting method (DSM is presented. First, the elastic deformation of flexible solar arrays was described approximately by the assumed mode method, and a dynamic model was established by the second Lagrangian equation. Then, the nonholonomic motion planning problem is transformed into a nonlinear programming problem by using GPM. By giving fewer LG points, initial values of the state variables and control variables were obtained. A serial optimization framework was adopted to obtain the approximate optimal solution from a feasible solution. Finally, the control variables were discretized at LG points, and the precise optimal control inputs were obtained by DSM. The optimal trajectory of the system can be obtained through numerical integration. Through numerical simulation, the stretching process of solar arrays is stable with no detours, and the control inputs match the various constraints of actual conditions. The results indicate that the method is effective with good robustness. Keywords: Motion planning, Multibody spacecraft, Optimal control, Gauss pseudospectral method, Direct shooting method
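    As a generic illustration of the direct shooting ingredient only (not the paper's formulation; the flexible multibody dynamics are replaced here by a double integrator and the cost by plain control effort): discretize the control into piecewise-constant segments, integrate the dynamics segment by segment, and let an NLP solver enforce the terminal constraints:

        # Generic direct-shooting sketch (illustrative stand-in dynamics only).
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        N, T = 20, 10.0                      # control intervals, horizon [s]

        def simulate(u):
            """Piecewise-constant controls u[0..N-1]; returns final state and effort."""
            state = np.array([0.0, 0.0])     # position, velocity
            for k in range(N):
                f = lambda t, x, a=u[k]: [x[1], a]
                sol = solve_ivp(f, (0.0, T / N), state)
                state = sol.y[:, -1]
            return state, np.sum(u**2) * T / N

        def cost(u):                         # minimize control effort
            return simulate(u)[1]

        def terminal(u):                     # reach position 1 at rest
            xf, _ = simulate(u)
            return [xf[0] - 1.0, xf[1]]

        res = minimize(cost, np.zeros(N), constraints={"type": "eq", "fun": terminal},
                       method="SLSQP")
        print(res.x)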

  2. Fully Integrated Linear Single Photon Avalanche Diode (SPAD) Array with Parallel Readout Circuit in a Standard 180 nm CMOS Process

    Science.gov (United States)

    Isaak, S.; Bull, S.; Pitter, M. C.; Harrison, Ian.

    2011-05-01

    This paper reports on the development of a SPAD device and its subsequent use in an actively quenched single photon counting imaging system; the device was fabricated in a UMC 0.18 μm CMOS process. A low-doped p- guard ring (t-well layer) encircles the active area to prevent premature reverse breakdown. The array is a 16×1 parallel-output SPAD array, which comprises an actively quenched SPAD circuit in each pixel, with the current value being set by an external resistor RRef = 300 kΩ. The SPAD I-V response ID was found to increase slowly until VBD was reached at an excess bias voltage Ve = 11.03 V, and then to increase rapidly due to avalanche multiplication. Digital circuitry to control the SPAD array and perform the necessary data processing was designed in VHDL and implemented on an FPGA chip. At room temperature, the dark count was found to be approximately 13 kHz for most of the 16 SPAD pixels and the dead time was estimated to be 40 ns.
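    As an illustration of how the quoted dead time would be used (assuming a standard non-paralyzable dead-time model, which is not stated above), the true count rate n can be recovered from a measured rate m via

        \[ n=\frac{m}{1-m\,\tau}\quad\Rightarrow\quad n\approx\frac{13\times 10^{3}\ \mathrm{s^{-1}}}{1-(13\times 10^{3}\ \mathrm{s^{-1}})(40\times 10^{-9}\ \mathrm{s})}\approx 13.007\ \mathrm{kHz}, \]

    i.e. at the dark-count level the 40 ns dead time amounts to a negligible (about 0.05%) correction.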

  3. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
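    A heavily simplified illustration of the underlying idea (not the authors' algorithm; a full implementation would build spatial time-frequency distribution matrices with a t-f kernel rather than the crude energy mask used here): restrict the averaging to points where the source is localized, then scan steering vectors over the averaged spatial matrix, which for a single source coincides with the ML estimate on those points:

        # Simplified single-source DOA sketch on a uniform linear array.
        import numpy as np

        M, L = 8, 2048                               # sensors, snapshots
        d = 0.5                                      # element spacing in wavelengths
        t = np.arange(L)
        theta_true = 20.0
        s = np.exp(1j * 2 * np.pi * (0.05 * t + 0.0001 * t**2))   # chirp source

        def steer(theta_deg):
            return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

        x = np.outer(steer(theta_true), s)
        x += 0.1 * (np.random.randn(M, L) + 1j * np.random.randn(M, L))

        # crude selection: keep snapshots where the signal energy is high
        mask = np.abs(x[0]) ** 2 > np.median(np.abs(x[0]) ** 2)
        D = x[:, mask] @ x[:, mask].conj().T / mask.sum()          # averaged spatial matrix

        grid = np.arange(-90, 90.5, 0.5)
        spectrum = [np.real(steer(th).conj() @ D @ steer(th)) for th in grid]
        print("estimated DOA:", grid[int(np.argmax(spectrum))])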

  4. Dependently typed array programs don’t go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2009-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  5. Dependently typed array programs don't go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2008-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  6. Workshop on Future Generation Grids

    CERN Document Server

    Laforenza, Domenico; Reinefeld, Alexander

    2006-01-01

    The Internet and the Web continue to have a major impact on society. By allowing us to discover and access information on a global scale, they have created entirely new businesses and brought new meaning to the term surf. In addition, however, we want processing, and increasingly, we want collaborative processing within distributed teams. This need has led to the creation of the Grid - an infrastructure that enables us to share capabilities, and integrate services and resources within and across enterprises. "Future Generation Grids" is the second in the "CoreGRID" series. This edited volume brings together contributed articles by scientists and researchers in the Grid community in an attempt to draw a clearer picture of the future generation Grids. This book also identifies some of the most challenging problems on the way to achieving the invisible Grid ideas

  7. A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce

    Science.gov (United States)

    Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei

    2016-01-01

    Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in its original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (a 10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
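    A toy sketch of the indexing idea (the key layout and names are hypothetical, not the paper's implementation): a spatiotemporal key maps a logical array chunk to its physical byte range, so a query touches only the files and offsets it needs:

        # Hypothetical spatiotemporal index: (variable, time, lat block, lon block)
        # maps to a physical byte range inside a native-format file.
        from collections import namedtuple

        Entry = namedtuple("Entry", "path offset nbytes")
        index = {}                            # (var, t, lat_block, lon_block) -> Entry

        def register(var, t, lat_block, lon_block, path, offset, nbytes):
            index[(var, t, lat_block, lon_block)] = Entry(path, offset, nbytes)

        def query(var, t_range, lat_blocks, lon_blocks):
            """Return the physical chunks needed for a spatiotemporal subset."""
            return [index[(var, t, la, lo)]
                    for t in t_range for la in lat_blocks for lo in lon_blocks
                    if (var, t, la, lo) in index]

        register("T2M", 0, 3, 7, "MERRA_1979_01.hdf", offset=1048576, nbytes=262144)
        print(query("T2M", range(0, 1), [3], [7]))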

  8. A Novel Self-aligned and Maskless Process for Formation of Highly Uniform Arrays of Nanoholes and Nanopillars

    Directory of Open Access Journals (Sweden)

    Wu Wei

    2008-01-01

    Full Text Available Fabrication of a large area of periodic structures with deep sub-wavelength features is required in many applications such as solar cells, photonic crystals, and artificial kidneys. We present a low-cost and high-throughput process for the realization of 2D arrays of deep sub-wavelength features using a self-assembled monolayer of hexagonally close-packed (HCP) silica and polystyrene microspheres. This method utilizes the microspheres as super-lenses to fabricate nanohole and pillar arrays over large areas on conventional positive and negative photoresist, and with a high aspect ratio. The period and diameter of the holes and pillars formed with this technique can be controlled precisely and independently. We demonstrate that the method can produce HCP arrays of holes of sub-250 nm size using a conventional photolithography system with a broadband UV source centered at 400 nm. We also present our 3D FDTD modeling, which shows good agreement with the experimental results.

  9. Solution-Processed Wide-Bandgap Organic Semiconductor Nanostructures Arrays for Nonvolatile Organic Field-Effect Transistor Memory.

    Science.gov (United States)

    Li, Wen; Guo, Fengning; Ling, Haifeng; Liu, Hui; Yi, Mingdong; Zhang, Peng; Wang, Wenjun; Xie, Linghai; Huang, Wei

    2018-01-01

    In this paper, the development of organic field-effect transistor (OFET) memory device based on isolated and ordered nanostructures (NSs) arrays of wide-bandgap (WBG) small-molecule organic semiconductor material [2-(9-(4-(octyloxy)phenyl)-9H-fluoren-2-yl)thiophene]3 (WG3) is reported. The WG3 NSs are prepared from phase separation by spin-coating blend solutions of WG3/trimethylolpropane (TMP), and then introduced as charge storage elements for nonvolatile OFET memory devices. Compared to the OFET memory device with smooth WG3 film, the device based on WG3 NSs arrays exhibits significant improvements in memory performance including larger memory window (≈45 V), faster switching speed (≈1 s), stable retention capability (>10^4 s), and reliable switching properties. A quantitative study of the WG3 NSs morphology reveals that enhanced memory performance is attributed to the improved charge trapping/charge-exciton annihilation efficiency induced by increased contact area between the WG3 NSs and pentacene layer. This versatile solution-processing approach to preparing WG3 NSs arrays as charge trapping sites allows for fabrication of high-performance nonvolatile OFET memory devices, which could be applicable to a wide range of WBG organic semiconductor materials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Nanofabrication and characterization of ZnO nanorod arrays and branched microrods by aqueous solution route and rapid thermal processing

    International Nuclear Information System (INIS)

    Lupan, Oleg; Chow, Lee; Chai, Guangyu; Roldan, Beatriz; Naitabdi, Ahmed; Schulte, Alfons; Heinrich, Helge

    2007-01-01

    This paper presents an inexpensive and fast fabrication method for one-dimensional (1D) ZnO nanorod arrays and branched two-dimensional (2D), three-dimensional (3D) - nanoarchitectures. Our synthesis technique includes the use of an aqueous solution route and post-growth rapid thermal annealing. It permits rapid and controlled growth of ZnO nanorod arrays of 1D - rods, 2D - crosses, and 3D - tetrapods without the use of templates or seeds. The obtained ZnO nanorods are uniformly distributed on the surface of Si substrates and individual or branched nano/microrods can be easily transferred to other substrates. Process parameters such as concentration, temperature and time, type of substrate and the reactor design are critical for the formation of nanorod arrays with thin diameter and transferable nanoarchitectures. X-ray diffraction, scanning electron microscopy, X-ray photoelectron spectroscopy, transmission electron microscopy and Micro-Raman spectroscopy have been used to characterize the samples

  11. A new process for fabricating nanodot arrays on selective regions with diblock copolymer thin film

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dae-Ho [Department of Materials Science and Engineering, Polymer Research Institute, Pohang University of Science and Technology, San 31, Hyoja-Dong, Nam-Gu, Pohang 790-784 (Korea, Republic of)

    2007-09-12

    A procedure for micropatterning a single layer of nanodot arrays in selective regions is demonstrated by using thin films of polystyrene-b-poly(t-butyl acrylate) (PS-b-PtBA) diblock copolymer. The thin-film self-assembled into hexagonally arranged PtBA nanodomains in a PS matrix on a substrate by solvent annealing with 1,4-dioxane. The PtBA nanodomains were converted into poly(acrylic acid) (PAA) having carboxylic-acid-functionalized nanodomains by exposure to hydrochloric acid vapor, or were removed by ultraviolet (UV) irradiation to generate vacant sites without any functional groups due to the elimination of PtBA domains. By sequential treatment with aqueous sodium bicarbonate and aqueous zinc acetate solution, zinc cations were selectively loaded only on the carboxylic-acid-functionalized nanodomains prepared via hydrolysis. Macroscopic patterning through a photomask via UV irradiation, hydrolysis, sequential zinc cation loading and calcination left a nanodot array of zinc oxide on a selectively UV-shaded region.

  12. Nuclear reactor spring strip grid spacer

    International Nuclear Information System (INIS)

    Patterson, J.F.; Flora, B.S.

    1978-01-01

    A bimetallic grid spacer is described comprising a grid structure of zircaloy formed by intersecting striplike members which define fuel element openings for receiving fuel elements and spring strips made of Inconel positioned within the grid structure for cooperating with the fuel elements to maintain them in their desired position. A plurality of these spring strips extend longitudinally between sides of the grid structure, being locked in position by the grid retaining strips. The fuel rods, which are disposed in the fuel openings formed in the grid structure, are positioned by means of the springs associated with the spring strips and a plurality of dimples which extend from the zircaloy grid structure into the openings. In one embodiment the strips are disposed in a plurality of arrays with those spring strip arrays situated in opposing diagonal quadrants of the grid structure extending in the same direction and adjacent spring strip arrays in each half of the spacer extending in relatively perpendicular directions. Other variations of the spring strip arrangements for a particular fuel design are disclosed herein

  13. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Choi, Yong, E-mail: ychoi.image@gmail.com [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Hong, Key Jo [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kang, Jihoon [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Jung, Jin Ho [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Huh, Youn Suk [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Lim, Hyun Keong; Kim, Sang Su [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kim, Byung-Tae [Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Chung, Yonghyun [Department of Radiological Science, Yonsei University College of Health Science, 234 Meaji, Heungup Wonju, Kangwon-Do 220-710 (Korea, Republic of)

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time to digital converters (TDC) have been employed to detect gamma ray signal arrival time, whereas anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. As compared to PMT the Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMT and anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4 × 4 array, pixel size: 3 mm × 3 mm) coupled 1:1 with LYSO scintillators (4 × 4 array, pixel size: 3 mm × 3 mm × 20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce GAPD channel number and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information all together for each GAPD channel. For the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and
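    The per-channel processing described above can be illustrated with a small offline sketch (illustrative only, not the group's FPGA firmware; the threshold and pulse shape are assumptions): estimate the baseline from pre-trigger samples, time-stamp the threshold crossing, and integrate the pulse for an energy value:

        # Offline illustration of baseline, arrival-time and energy extraction.
        import numpy as np

        def process_pulse(samples, fs_hz, threshold, n_baseline=32):
            baseline = samples[:n_baseline].mean()
            above = np.nonzero(samples - baseline > threshold)[0]
            if above.size == 0:
                return None                               # no event in this frame
            arrival_time = above[0] / fs_hz               # coarse time stamp [s]
            energy = np.sum(samples[above[0]:] - baseline)  # proportional to deposited energy
            return {"time_s": arrival_time, "baseline": baseline, "energy": energy}

        # Hypothetical digitized GAPD pulse: flat baseline then an exponential decay.
        fs = 100e6
        n = np.arange(256)
        pulse = 10.0 + np.where(n >= 64, 80.0 * np.exp(-(n - 64) / 30.0), 0.0)
        print(process_pulse(pulse, fs, threshold=20.0))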

  14. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications

    Energy Technology Data Exchange (ETDEWEB)

    Wlodarczyk, Krystian L., E-mail: K.L.Wlodarczyk@hw.ac.uk; Maier, Robert R. J.; Hand, Duncan P. [Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Bryce, Emma; Hutson, David; Kirk, Katherine [School of Engineering and Science, University of the West of Scotland, Paisley PA1 2BE (United Kingdom); Schwartz, Noah; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil [UK Astronomy Technology Centre, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Strachan, Mel [Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); UK Astronomy Technology Centre, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom)

    2014-02-15

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  15. A High Performance Backend for Array-Oriented Programming on Next-Generation Processing Units

    DEFF Research Database (Denmark)

    Lund, Simon Andreas Frimann

    The financial crisis, which started in 2008, spawned the HIPERFIT research center as a preventive measure against future financial crises. The goal of prevention is to be met by improving mathematical models for finance, the verifiable description of them in domain-specific languages ... and the efficient execution of them on high performance systems. This work investigates the requirements for, and the implementation of, a high performance backend supporting these goals. This involves an outline of the hardware available today, in the near future and how to program it for high performance ... The main challenge is to bridge the gaps between performance, productivity and portability. A declarative high-level array-oriented programming model is explored to achieve this goal and a backend implemented to support it. Different strategies to the backend design and application of optimizations

  16. Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples

    Science.gov (United States)

    Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David

    2012-01-01

    Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micron/s) and a speed decrease (MICAST7-from 20 to 10 micron/s). Distribution and morphology of primary dendrites is currently being characterized in these samples, and also in samples solidified on earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  17. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  18. Optimizing Grid Patterns on Photovoltaic Cells

    Science.gov (United States)

    Burger, D. R.

    1984-01-01

    The CELCAL computer program helps in optimizing grid patterns for different photovoltaic cell geometries and metalization processes. It accounts for five different power-loss phenomena associated with the front-surface metal grid pattern on photovoltaic cells.
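    The trade-off such a program evaluates can be illustrated with a deliberately simplified loss model (the formulas and numbers below are textbook-style assumptions, not CELCAL's): emitter resistive loss grows with finger spacing while shading loss shrinks, so the total fractional loss has a minimum at some spacing:

        # Simplified grid-pattern trade-off: resistive vs shading loss.
        import numpy as np

        R_sheet = 40.0        # emitter sheet resistance [ohm/sq] (assumed)
        J = 0.030             # operating current density [A/cm^2] (assumed)
        V = 0.5               # operating voltage [V] (assumed)
        w_finger = 0.015      # finger width [cm] (assumed)

        def fractional_loss(spacing_cm):
            # lateral emitter resistance loss for current collected between fingers
            resistive = R_sheet * J * spacing_cm**2 / (12.0 * V)
            shading = w_finger / spacing_cm
            return resistive + shading

        spacings = np.linspace(0.05, 0.5, 200)
        best = spacings[np.argmin(fractional_loss(spacings))]
        print(f"optimal finger spacing ~ {best:.3f} cm")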

  19. Grid pulser

    International Nuclear Information System (INIS)

    Jansweijer, P.P.M.; Es, J.T. van.

    1990-01-01

    This report describes a fast pulse generator. This generator delivers a high-voltage pulse of at most 6000 V with a rise time smaller than 50 ns. This results in a slew rate of more than 120,000 volts per μs. The pulse generator is used to control the grid of the injector of the electron accelerator MEA. The capacitance of this grid is about 60 pF. In order to charge this capacitance up to 6000 volts in 50 ns, a current of 8 amperes is needed. The maximum pulse length is 50 μs with a repetition frequency of 500 Hz. During these 50 μs the stability of the pulse amplitude is better than 0.1%. (author). 20 figs
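    The quoted figures are mutually consistent: charging the grid capacitance at the stated slew rate requires

        \[ \frac{dV}{dt}=\frac{6000\ \mathrm{V}}{50\ \mathrm{ns}}=120{,}000\ \mathrm{V/\mu s},\qquad I=C\,\frac{dV}{dt}\approx 60\ \mathrm{pF}\times\frac{6000\ \mathrm{V}}{50\ \mathrm{ns}}=7.2\ \mathrm{A}, \]

    which is rounded up to the 8 A drive current mentioned in the report.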

  20. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    Science.gov (United States)

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. We
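    A toy version of the matching step only (ATMAD itself uses wavelet-based detection, ellipse segmentation and a thin-plate deformation model; the sketch below assumes a plain translation): estimate a global offset and assign each detected centroid to its nearest design-grid node:

        # Toy sample-to-design-grid matching under a rigid translation.
        import numpy as np

        def match_to_grid(centroids, design_grid):
            """centroids, design_grid: arrays of shape (n, 2) and (m, 2) in pixels."""
            shift = centroids.mean(axis=0) - design_grid.mean(axis=0)   # global offset
            aligned = design_grid + shift
            d = np.linalg.norm(centroids[:, None, :] - aligned[None, :, :], axis=2)
            return d.argmin(axis=1)       # index of the design node for each sample

        gx, gy = np.meshgrid(np.arange(4) * 100.0, np.arange(3) * 100.0)
        design = np.column_stack([gx.ravel(), gy.ravel()])
        detected = design + 25.0 + np.random.normal(0, 5.0, design.shape)  # distorted copy
        print(match_to_grid(detected, design))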

  1. The grid

    OpenAIRE

    Morrad, Annie; McArthur, Ian

    2018-01-01

    Project Anywhere. Project title: The Grid. Artists: Annie Morrad: Artist/Senior Lecturer, University of Lincoln, School of Film and Media, Lincoln, UK; Dr Ian McArthur: Hybrid Practitioner/Senior Lecturer, UNSW Art & Design, UNSW Australia, Sydney, Australia. Annie Morrad is a London-based artist and musician and senior lecturer at the University of Lincoln, UK. Dr Ian McArthur is a Sydney-based hybrid practitione...

  2. Double-sided anodic titania nanotube arrays: a lopsided growth process.

    Science.gov (United States)

    Sun, Lidong; Zhang, Sam; Sun, Xiao Wei; Wang, Xiaoyan; Cai, Yanli

    2010-12-07

    In the past decade, the pore diameter of anodic titania nanotubes was reported to be influenced by a number of factors in organic electrolyte, for example, applied potential, working distance, water content, and temperature. All these were closely related to potential drop in the organic electrolyte. In this work, the essential role of electric field originating from the potential drop was directly revealed for the first time using a simple two-electrode anodizing method. Anodic titania nanotube arrays were grown simultaneously at both sides of a titanium foil, with tube length being longer at the front side than that at the back side. This lopsided growth was attributed to the higher ionic flux induced by electric field at the front side. Accordingly, the nanotube length was further tailored to be comparable at both sides by modulating the electric field. These results are promising to be used in parallel configuration dye-sensitized solar cells, water splitting, and gas sensors, as a result of high surface area produced by the double-sided architecture.

  3. Single event upset susceptibilities of latchup immune CMOS process programmable gate arrays

    Science.gov (United States)

    Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.; Tsubota, T. K.

    Single event upsets (SEU) and latchup susceptibilities of complementary metal oxide semiconductor programmable gate arrays (CMOS PPGA's) were measured at the Lawrence Berkeley Laboratory 88-in. cyclotron facility with Xe (603 MeV), Cu (290 MeV), and Ar (180 MeV) ion beams. The PPGA devices tested were those which may be used in space. Most of the SEU measurements were taken with a newly constructed tester called the Bus Access Storage and Comparison System (BASACS) operating via a Macintosh II computer. When BASACS finds that an output does not match a prerecorded pattern, the state of all outputs, position in the test cycle, and other necessary information is transmitted and stored in the Macintosh. The upset rate was kept between 1 and 3 per second. After a sufficient number of errors are stored, the test is stopped and the total fluence of particles and total errors are recorded. The device power supply current was closely monitored to check for occurrence of latchup. Results of the tests are presented, indicating that some of the PPGA's are good candidates for selected space applications.

  4. High-resolution focal plane array IR detection modules and digital signal processing technologies at AIM

    Science.gov (United States)

    Cabanski, Wolfgang A.; Breiter, Rainer; Koch, R.; Mauk, Karl-Heinz; Rode, Werner; Ziegler, Johann; Eberhardt, Kurt; Oelmaier, Reinhard; Schneider, Harald; Walther, Martin

    2000-07-01

    Full video format focal plane array (FPA) modules with up to 640 × 512 pixels have been developed for high resolution imaging applications in either mercury cadmium telluride (MCT) mid-wave infrared (MWIR) technology, or in platinum silicide (PtSi) and quantum well infrared photodetector (QWIP) technology as low cost alternatives to MCT for high performance IR imaging in the MWIR or long wave spectral band (LWIR). For the QWIPs, a new photovoltaic technology was introduced for improved NETD performance and higher dynamic range. MCT units provide fast frame rates > 100 Hz together with state-of-the-art thermal resolution (NETD). Hardware platforms and software for image visualization and nonuniformity correction, including scene-based self-learning algorithms, had to be developed to cope with the high data rates of up to 18 Mpixels/s with 14-bit deep data, allowing nonlinear effects to be taken into account to reach the full NETD by accurate reduction of residual fixed pattern noise. The main features of these modules are summarized together with measured performance data for long range detection systems with moderately fast to slow F-numbers like F/2.0 - F/3.5. An outlook shows the most recent activities at AIM, heading for multicolor and faster frame rate detector modules based on MCT devices.

  5. Grid interoperability: joining grid information systems

    International Nuclear Information System (INIS)

    Flechl, M; Field, L

    2008-01-01

    A grid is defined as being 'coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations'. Over recent years a number of grid projects, many of which have a strong regional presence, have emerged to help coordinate institutions and enable grids. Today, we face a situation where a number of grid projects exist, most of which are using slightly different middleware. Grid interoperation is trying to bridge these differences and enable Virtual Organizations to access resources at the institutions independent of their grid project affiliation. Grid interoperation is usually a bilateral activity between two grid infrastructures. Recently within the Open Grid Forum, the Grid Interoperability Now (GIN) Community Group is trying to build upon these bilateral activities. The GIN group is a focal point where all the infrastructures can come together to share ideas and experiences on grid interoperation. It is hoped that each bilateral activity will bring us one step closer to the overall goal of a uniform grid landscape. A fundamental aspect of a grid is the information system, which is used to find available grid services. As different grids use different information systems, interoperation between these systems is crucial for grid interoperability. This paper describes the work carried out to overcome these differences between a number of grid projects and the experiences gained. It focuses on the different techniques used and highlights the important areas for future standardization

  6. Optical technology for microwave applications VI and optoelectronic signal processing for phased-array antennas III; Proceedings of the Meeting, Orlando, FL, Apr. 20-23, 1992

    Science.gov (United States)

    Yao, Shi-Kay; Hendrickson, Brian M.

    The following topics related to optical technology for microwave applications are discussed: advanced acoustooptic devices, signal processing device technologies, optical signal processor technologies, microwave and optomicrowave devices, advanced lasers and sources, wideband electrooptic modulators, and wideband optical communications. The topics considered in the discussion of optoelectronic signal processing for phased-array antennas include devices, signal processing, and antenna systems.

  7. Proceedings of the Adaptive Sensor Array Processing (ASAP) Workshop 12-14 March 1997. Volume 1

    National Research Council Canada - National Science Library

    O'Donovan, G

    1997-01-01

    ... was included in the first and third ASAP workshops. ASAP has traditionally concentrated on radar; core topics include airborne radar testbed systems, space-time adaptive processing, multipath jamming...

  8. Hydrogen Detection With a Gas Sensor ArrayProcessing and Recognition of Dynamic Responses Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Gwiżdż Patryk

    2015-03-01

    Full Text Available An array consisting of four commercial gas sensors with target specifications for hydrocarbons, ammonia, alcohol, and explosive gases has been constructed and tested. The sensors in the array operate in the dynamic mode under temperature modulation from 350°C to 500°C. Changes in the sensor operating temperature lead to distinct resistance responses affected by the gas type, its concentration and the humidity level. The measurements are performed at various hydrogen (17-3000 ppm), methane (167-3000 ppm) and propane (167-3000 ppm) concentrations at relative humidity levels of 0-75% RH. The measured dynamic response signals are further processed with the Discrete Fourier Transform. Absolute values of the dc component and the first five harmonics of each sensor are analysed by a feed-forward back-propagation neural network. The ultimate aim of this research is to achieve reliable hydrogen detection despite the interference of humidity and residual gases.
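    The feature-extraction step described above can be sketched as follows (array sizes and names are illustrative assumptions): take the DFT of one temperature-modulation cycle per sensor and keep the magnitudes of the dc component and the first five harmonics, giving 4 × 6 = 24 inputs for the network:

        # DFT-based feature extraction from one modulation cycle per sensor.
        import numpy as np

        def harmonic_features(responses, n_harmonics=5):
            """responses: array (n_sensors, n_samples), one temperature-modulation cycle."""
            spectra = np.fft.rfft(responses, axis=1)
            return np.abs(spectra[:, : n_harmonics + 1]).ravel()   # |X0| .. |X5| per sensor

        # Hypothetical responses of a 4-sensor array sampled 256 times per cycle.
        t = np.linspace(0.0, 1.0, 256, endpoint=False)
        responses = np.vstack([1.0 + 0.1 * k * np.sin(2 * np.pi * (k + 1) * t) for k in range(4)])
        features = harmonic_features(responses)
        print(features.shape)        # (24,) feature vector fed to the neural network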

  9. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  10. Physical features of the wire-array Z-pinch plasmas imploding process

    International Nuclear Information System (INIS)

    Gao Chunming; Feng Kaiming

    2001-01-01

    In research on controlled fusion reactors, the Z-pinch plasma was found to produce very strong X-rays compared with other X-ray sources. For the imploding process, the snowplow model and the Haines model are introduced and validated. Regarding the accumulation of X-rays, several ways of radiating X-rays are carefully analyzed and the relevant theories are examined. For the simulations, a one-dimensional model is used in the codes, the matching relationships are calculated, and the imploding process is simulated. Some useful and reasonable results are obtained.
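
    The snowplow model mentioned here is a standard one-dimensional description of the implosion; the sketch below integrates a textbook-style snowplow equation purely as an illustration of that kind of model. The current waveform, initial radius and fill density are illustrative placeholders and are not taken from the paper.

```python
# Textbook-style snowplow model of a Z-pinch implosion (per unit length):
#   d/dt [ m(r) dr/dt ] = - mu0 * I(t)^2 / (4*pi*r),  m(r) = rho0*pi*(R0^2 - r^2).
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

mu0 = 4e-7 * np.pi
R0, rho0 = 0.02, 1e-4           # initial radius [m], initial mass density [kg/m^3]
I_peak, t_rise = 1.0e6, 100e-9  # drive current amplitude [A] and rise time [s]

def current(t):                 # assumed sin^2 current rise
    return I_peak * np.sin(0.5 * np.pi * min(t / t_rise, 1.0)) ** 2

def mass(r):                    # swept-up mass per unit length (small floor added)
    return rho0 * np.pi * (R0 ** 2 - r ** 2) + 1e-9

def rhs(t, y):
    r, p = y                    # p = m(r) * dr/dt (momentum per unit length)
    v = p / mass(r)
    force = -mu0 * current(t) ** 2 / (4.0 * np.pi * r)   # magnetic piston force
    return [v, force]

def hit_axis(t, y):             # stop when the shell approaches the axis
    return y[0] - 1e-4
hit_axis.terminal = True

sol = solve_ivp(rhs, (0.0, 2 * t_rise), [R0, 0.0], events=hit_axis, max_step=1e-10)
print("implosion time ~ %.1f ns" % (sol.t[-1] * 1e9))
```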

  11. A Module Experimental Process System Development Unit (MEPSDU). [flat plate solar arrays

    Science.gov (United States)

    1981-01-01

    The development of a cost-effective process sequence with the potential for producing flat-plate photovoltaic modules that meet the 1986 price goal of 70 cents or less per peak watt is described. The major accomplishments include: (1) an improved AR coating technique; (2) the use of sand-blast back clean-up to reduce clean-up costs and to allow much of the Al paste to serve as a back conductor; and (3) the development of wave soldering for use with solar cells. Cells were processed to evaluate different process steps, a cell and minimodule test plan was prepared, and data were collected for a preliminary SAMICS cost analysis.

  12. The Grid PC farm

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    Housed in the CERN Computer Centre, these banks of computers process and store data produced on the CERN systems. When the LHC starts operation in 2008, it will produce enough data every year to fill a stack of CDs 20 km tall. To handle this huge amount of data, CERN has also developed the Grid, allowing processing power to be shared between computer centres around the world.

  13. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    Science.gov (United States)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs

  14. COSMIC INFRARED BACKGROUND FLUCTUATIONS IN DEEP SPITZER INFRARED ARRAY CAMERA IMAGES: DATA PROCESSING AND ANALYSIS

    International Nuclear Information System (INIS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs inhabited by the populations producing these
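
    To make the fluctuation analysis concrete, the sketch below computes an azimuthally averaged power spectrum P(q) of a masked, mean-subtracted map, which is the core measurement described above. The map, the 3-sigma source mask, the pixel scale and the simple 1/f_sky correction for masking are all placeholder assumptions, not the paper's self-calibration pipeline.

```python
# Minimal sketch: 2-D FFT of a masked map, azimuthal average into P(q).
import numpy as np

def power_spectrum(image, mask, pixscale_arcsec, nbins=20):
    """Return angular frequency q (1/arcsec) and azimuthally averaged P(q)."""
    data = np.where(mask, image - image[mask].mean(), 0.0)  # mean-subtract, zero-fill
    f_sky = mask.mean()                                     # unmasked fraction
    ft = np.fft.fftshift(np.fft.fft2(data))
    p2d = np.abs(ft) ** 2 / (data.size * f_sky)             # crude mask correction
    ny, nx = data.shape
    qy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixscale_arcsec))
    qx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixscale_arcsec))
    qr = np.hypot(*np.meshgrid(qy, qx, indexing="ij"))
    bins = np.linspace(qr[qr > 0].min(), qr.max(), nbins + 1)
    which = np.digitize(qr.ravel(), bins)
    pq = []
    for i in range(1, nbins + 1):
        vals = p2d.ravel()[which == i]
        pq.append(vals.mean() if vals.size else np.nan)     # skip empty bins
    return 0.5 * (bins[1:] + bins[:-1]), np.array(pq)

# Placeholder map: pure noise, with ">3-sigma" pixels masked as "bright sources".
rng = np.random.default_rng(1)
img = rng.normal(size=(256, 256))
mask = img < 3.0
q, pq = power_spectrum(img, mask, pixscale_arcsec=1.2)
print(q[:3], pq[:3])
```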

  15. Grid sleeve bulge tool

    International Nuclear Information System (INIS)

    Phillips, W.D.; Vaill, R.E.

    1980-01-01

    An improved grid sleeve bulge tool is designed for securing control rod guide tubes to sleeves brazed in a fuel assembly grid. The tool includes a cylinder having an outer diameter less than the internal diameter of the control rod guide tubes. The walls of the cylinder are cut in an axial direction along its length to provide several flexible tines or ligaments. These tines are similar to a fork except they are spaced in a circumferential direction. The end of each alternate tine is equipped with a semispherical projection which extends radially outwardly from the tine surface. A ram or plunger of generally cylindrical configuration and about the same length as the cylinder is designed to fit in and move axially within the cylinder and thereby force the tined projections outwardly when the ram is pulled into the cylinder. The ram surface includes axially extending grooves and plane surfaces which are complementary to the inner surfaces formed on the tines of the cylinder. As the cylinder is inserted into a control rod guide tube, and the projections on the cylinder are placed in a position just below or above a grid strap, the ram is pulled into the cylinder, thus moving the tines and the projections thereon outwardly into contact with the sleeve, to plastically deform both the sleeve and the control rod guide tube, and thereby form four bulges which extend outwardly from the sleeve surface and beyond the outer periphery of the grid peripheral strap. This process is then repeated at points above the grid to also provide outwardly projecting surfaces, the result being that the grid is accurately positioned on and mechanically secured to the control rod guide tubes which extend the length of a fuel assembly.

  16. Image processing system design for microcantilever-based optical readout infrared arrays

    Science.gov (United States)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory shows that the technology offers high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper focuses on an image capturing and processing system for this optical-readout uncooled infrared imaging technology based on MEMS. The system consists of software and hardware. We build the image processing core hardware platform on TI's high-performance DSP chip, the TMS320DM642, and design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power-consumption CMOS sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver based on TI's class/mini-driver model and the network output program based on the NDK kit, for image capturing, processing and transmission. Experiments show that the system offers high capture resolution and fast processing speed; the network transmission speed is up to 100 Mbps.

  17. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
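
    The core of the framework described above is a time-frequency analysis under a source-sparsity assumption. The sketch below illustrates that idea in a deliberately simplified form: it estimates a direction of arrival per STFT bin from the inter-channel phase of a two-microphone pair, rather than the paper's tetrahedral array, and simply histograms the estimates. The microphone spacing, sampling rate, far-field model and synthetic signal are assumptions for illustration.

```python
# Minimal sketch of sparsity-based time-frequency DOA estimation (two-mic pair).
import numpy as np
from scipy.signal import stft

fs, d, c = 16000, 0.05, 343.0            # sample rate [Hz], mic spacing [m], speed of sound
rng = np.random.default_rng(2)
x1 = rng.normal(size=fs)                 # placeholder microphone signal
delay = int(round(fs * d * np.cos(np.deg2rad(60.0)) / c))
x2 = np.roll(x1, delay)                  # crude simulation of a source near 60 degrees

f, t, X1 = stft(x1, fs=fs, nperseg=512)
_, _, X2 = stft(x2, fs=fs, nperseg=512)

phase = np.angle(X2 * np.conj(X1))       # inter-channel phase per time-frequency bin
cos_theta = phase * c / (2 * np.pi * np.maximum(f, 1e-6)[:, None] * d)
valid = (np.abs(X1) > 1e-3) & (np.abs(cos_theta) <= 1.0) & (f[:, None] > 100)
doa = np.degrees(np.arccos(cos_theta[valid]))   # one DOA per bin (sparsity assumption)
hist, edges = np.histogram(doa, bins=36, range=(0, 180))
print("dominant direction ~", edges[hist.argmax()], "degrees")
```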

  18. Numerical microstructural analysis of automotive-grade steels when joined with an array of welding processes

    International Nuclear Information System (INIS)

    Gould, J.E.; Khurana, S.P.; Li, T.

    2004-01-01

    Weld strength, formability, and impact resistance for joints on automotive steels are dependent on the underlying microstructure. A martensitic weld area is often a precursor to reduced mechanical performance. In this paper, efforts are made to predict underlying joint microstructures for a range of processing approaches, steel types, and gauges. This was done first by calculating cooling rates for some typical automotive processes [resistance spot welding (RSW), resistance mash seam welding (RMSEW), laser beam welding (LBW), and gas metal arc welding (GMAW)]. Critical cooling rates for martensite formation were then calculated for a range of automotive steels using an available thermodynamically based phase transformation model. These were then used to define combinations of process type, steel type, and gauge for which welds could be formed while avoiding martensite in the weld area microstructure.
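
    The comparison of process cooling rates against critical cooling rates for martensite can be illustrated with a very simple calculation. The sketch below uses the textbook Rosenthal thick-plate cooling-rate relation rather than the paper's models, and every number in it (conductivity, heat inputs, critical rate) is a placeholder assumption.

```python
# Illustrative only: compare a Rosenthal thick-plate cooling-rate estimate,
#   R = 2*pi*k*(Tc - T0)^2 / H_net   (H_net = net heat input per unit weld length),
# against an assumed critical cooling rate for martensite formation.
import math

k = 0.028              # thermal conductivity of steel [J/(mm*s*K)], assumed
T0, Tc = 25.0, 550.0   # ambient/preheat and temperature of interest [C]

processes = {          # hypothetical net heat inputs [J/mm]
    "GMAW": 800.0,
    "LBW": 120.0,
    "low-heat-input resistance weld": 40.0,
}
critical_rate = 80.0   # assumed critical cooling rate for martensite [C/s]

for name, h_net in processes.items():
    rate = 2.0 * math.pi * k * (Tc - T0) ** 2 / h_net
    verdict = "martensite likely" if rate > critical_rate else "martensite avoidable"
    print(f"{name:32s} cooling rate ~ {rate:7.1f} C/s -> {verdict}")
```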

  19. Flat-plate solar array project process development area process research of non-CZ silicon material

    Science.gov (United States)

    1985-01-01

    Three sets of samples were laser processed and then cell processed. The laser processing was carried out on P-type and N-type web at laser power levels from 0.5 joule/sq cm to 2.5 joule/sq cm. Six different liquid dopants were tested (3 phosphorus dopants, 2 boron dopants, 1 aluminum dopant). The laser processed web strips were fabricated into solar cells immediately after laser processing and after various annealing cycles. Spreading resistance measurements made on a number of these samples indicate that the N(+)P (phosphorus doped) junction is approx. 0.2 micrometers deep and suitable for solar cells. However, the P(+)N (or P(+)P) junction is very shallow ( 0.1 micrometers) with a low surface concentration and resulting high resistance. Due to this effect, the fabricated cells are of low efficiency. The maximum efficiency attained was 9.6% on P-type web after a 700 C anneal. The main reason for the low efficiency was a high series resistance in the cell due to a high resistance back contact.

  20. OGC and Grid Interoperability in enviroGRIDS Project

    Science.gov (United States)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. The geospatial technologies offer very specialized functionality for Earth Science oriented applications, as does the Grid oriented technology that is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and the extended features of both technologies. The geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues this introduces (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of geospatial heterogeneous distributed data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling the OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between computational grid and

  1. Grid Generation Techniques Utilizing the Volume Grid Manipulator

    Science.gov (United States)

    Alter, Stephen J.

    1998-01-01

    This paper presents grid generation techniques available in the Volume Grid Manipulation (VGM) code. The VGM code is designed to manipulate existing line, surface and volume grids to improve the quality of the data. It embodies an easy to read rich language of commands that enables such alterations as topology changes, grid adaption and smoothing. Additionally, the VGM code can be used to construct simplified straight lines, splines, and conic sections which are common curves used in the generation and manipulation of points, lines, surfaces and volumes (i.e., grid data). These simple geometric curves are essential in the construction of domain discretizations for computational fluid dynamic simulations. By comparison to previously established methods of generating these curves interactively, the VGM code provides control of slope continuity and grid point-to-point stretchings as well as quick changes in the controlling parameters. The VGM code offers the capability to couple the generation of these geometries with an extensive manipulation methodology in a scripting language. The scripting language allows parametric studies of a vehicle geometry to be efficiently performed to evaluate favorable trends in the design process. As examples of the powerful capabilities of the VGM code, a wake flow field domain will be appended to an existing X33 Venturestar volume grid; negative volumes resulting from grid expansions to enable flow field capture on a simple geometry, will be corrected; and geometrical changes to a vehicle component of the X33 Venturestar will be shown.

  2. Optimizing laser beam profiles using micro-lens arrays for efficient material processing: applications to solar cells

    Science.gov (United States)

    Hauschild, Dirk; Homburg, Oliver; Mitra, Thomas; Ivanenko, Mikhail; Jarczynski, Manfred; Meinschien, Jens; Bayer, Andreas; Lissotschenko, Vitalij

    2009-02-01

    High-power laser sources are used in various production tools for microelectronic products and solar cells, including annealing, lithography, edge isolation, dicing and patterning applications. Besides the right choice of the laser source, suitable high-performance optics for generating the appropriate beam profile and intensity distribution are of high importance for the right processing speed, quality and yield. Equally important for industrial applications is an adequate understanding of the physics of the light-matter interaction behind the process. Simulations of the tool performance carried out in advance can minimize technical and financial risk as well as lead times for prototyping and introduction into series production. LIMO has developed its own software, founded on the Maxwell equations, taking into account all important physical aspects of the laser-based process: the light source, the beam-shaping optical system and the light-matter interaction. Based on this knowledge, together with a unique free-form micro-lens array production technology and patented micro-optics beam-shaping designs, a number of novel solar cell production tool sub-systems have been built. The basic functionalities, design principles and performance results are presented with a special emphasis on resilience, cost reduction and process reliability.

  3. ZnO nanorods arrays with Ag nanoparticles on the (002) plane derived by liquid epitaxy growth and electrodeposition process

    International Nuclear Information System (INIS)

    Yin Xingtian; Que Wenxiu; Shen Fengyu

    2011-01-01

    Well-aligned ZnO nanorod (NR) arrays with Ag nanoparticles (NPs) on the (002) plane are obtained by combining a liquid epitaxy technique with an electrodeposition process. A cyclic voltammetry study is employed to understand the electrochemical behaviour of the electrodeposition system, and a potentiostatic method is employed to deposit silver NPs on the ZnO NRs in an electrolyte with an Ag+ concentration of 1 mM. X-ray diffraction analysis is used to study the crystalline properties of the as-prepared samples, and energy-dispersive X-ray analysis is adopted to confirm the composition at the surface of the deposited samples. Results indicate that only a small quantity of silver can be deposited on the surface of the samples. The effect of the deposition potential and time on the morphological properties of the resultant Ag NPs/ZnO NRs is investigated in detail. Scanning electron microscopy images and transmission electron microscopy images indicate that Ag NPs deposited on the (002) plane of the ZnO NRs with a large dispersion in diameter can be obtained by a single potentiostatic deposition process, while dense Ag NPs with a much smaller diameter dispersion on top of the ZnO NRs, most of which locate on the conical tips of the NRs, can be obtained by a two-potentiostatic deposition process. The mechanism of this deposition process is also suggested.

  4. Data privacy for the smart grid

    CERN Document Server

    Herold, Rebecca

    2015-01-01

    Contents: The Smart Grid and Privacy; What Is the Smart Grid?; Changes from Traditional Energy Delivery; Smart Grid Possibilities; Business Model Transformations; Emerging Privacy Risks; The Need for Privacy Policies; Privacy Laws, Regulations, and Standards; Privacy-Enhancing Technologies; New Privacy Challenges; IoT; Big Data; What Is the Smart Grid?; Market and Regulatory Overview; Traditional Electricity Business Sector; The Electricity Open Market; Classifications of Utilities; Rate-Making Processes; Electricity Consumer ...

  5. gCube Grid services

    CERN Document Server

    Andrade, Pedro

    2008-01-01

    gCube is a service-based framework for eScience applications requiring collaboratory, on-demand, and intensive information processing. It provides to these communities Virtual Research Environments (VREs) to support their activities. gCube is built on top of standard technologies for computational Grids, namely the gLite middleware. The software was produced by the DILIGENT project and will continue to be supported and further developed by the D4Science project. gCube reflects within its name a three-sided interpretation of the Grid vision of resource sharing: sharing of computational resources, sharing of structured data, and sharing of application services. As such, gCube embodies the defining characteristics of computational Grids, data Grids, and virtual data Grids. Precisely, it builds on gLite middleware for managing distributed computations and unstructured data, includes dedicated services for managing data and metadata, provides services for distributed information retrieval, allows the orchestration...

  6. Smart grid: hope or hype?

    DEFF Research Database (Denmark)

    Lunde, Morten; Røpke, Inge; Heiskanen, Eva

    2016-01-01

    The smart grid is an important but ambiguous element in the future transition of the European energy system. The current paper unpacks one influential national vision of the smart grid to identify what kinds of expectations guide the work of smart grid innovators and how the boundaries of the smart... how their (intentional or unintentional) choices serve to create or maintain certain boundaries in smart grid development: for example, an exclusive focus on electricity within the broader context of a sustainable energy system. As serious investment starts being made in the smart grid, concepts like... research and development and to attract new players into the field. A scenario process such as that demonstrated in this article can serve to articulate some of these implicit assumptions and help actors to navigate the ongoing transition. On the basis of our analysis, European policy makers might consider...

  7. Grid attacks avian flu

    CERN Multimedia

    2006-01-01

    During April, a collaboration of Asian and European laboratories analysed 300,000 possible drug components against the avian flu virus H5N1 using the EGEE Grid infrastructure. [Image captions: schematic presentation of the avian flu virus; the distribution of the EGEE sites in the world on which the avian flu scan was performed.] The goal was to find potential compounds that can inhibit the activities of an enzyme on the surface of the influenza virus, the so-called neuraminidase, subtype N1. Using the Grid to identify the most promising leads for biological tests could speed up the development process for drugs against the influenza virus. Co-ordinated by CERN and funded by the European Commission, the EGEE project (Enabling Grids for E-sciencE) aims to set up a worldwide grid infrastructure for science. The challenge of the in silico drug discovery application is to identify those molecules which can dock on the active sites of the virus in order to inhibit its action. To study the impact of small scale mutations on drug r...

  8. Development of a Process for a High Capacity Arc Heater Production of Silicon for Solar Arrays

    Science.gov (United States)

    Reed, W. H.

    1979-01-01

    A program was established to develop a high temperature silicon production process using existing electric arc heater technology. Silicon tetrachloride and a reductant (sodium) are injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction is expected to occur and proceed essentially to completion, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection were developed. Included in this report are: test system preparation; testing; injection techniques; kinetics; reaction demonstration; conclusions; and the project status.

  9. Low cost silicon solar array project large area silicon sheet task: Silicon web process development

    Science.gov (United States)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.

  10. Low cost solar array project production process and equipment task: A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which will provide a substantially improved cell packing factor. Potential shaded-cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSDU was refined, and equipment design and specification work was completed. SAMICS cost analysis work accelerated; Format A's were prepared and computer simulations completed. Design work on the automated cell interconnect station was focused on bond technique selection experiments.

  11. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-01

    ... to develop novel co-prime sampling and array design strategies that achieve high-resolution estimation of spectral power distributions and signal ... by the array geometry and the frequency offset. We overcome this limitation by introducing a novel sparsity-based multi-target localization approach ... estimation using a sparse uniform linear array with two CW signals of co-prime frequencies," IEEE International Workshop on Computational Advances ...

  12. Development of a process for high capacity arc heater production of silicon for solar arrays

    Science.gov (United States)

    Meyer, T. N.

    1980-01-01

    A high temperature silicon production process using existing electric arc heater technology is discussed. Silicon tetrachloride and a reductant, liquid sodium, were injected into an arc heated mixture of hydrogen and argon. Under these high temperature conditions, a very rapid reaction occurred, yielding silicon and gaseous sodium chloride. Techniques for high temperature separation and collection of the molten silicon were developed. The desired degree of separation was not achieved. The electrical, control and instrumentation, cooling water, gas, SiCl4, and sodium systems are discussed. The plasma reactor, silicon collection, effluent disposal, the gas burnoff stack, and decontamination and safety are also discussed. Procedure manuals, shakedown testing, data acquisition and analysis, product characterization, disassembly and decontamination, and component evaluation are reviewed.

  13. Site Effect Assessment of Earthquake Ground Motion Based on Advanced Data Processing of Microtremor Array Measurements

    Science.gov (United States)

    Liu, L.; He, K.; Mehl, R.; Wang, W.; Chen, Q.

    2008-12-01

    High-resolution near-surface geologic information is essential for earthquake ground motion prediction. The near-surface geology is the critical constituent influencing seismic wave propagation, an influence known as the local site effect. We have collected microtremor data at over 1000 sites in the Beijing area to extract the much-needed earthquake engineering parameters (primarily sediment thickness, with shear-wave velocity profiling at a few important control points) in this heavily populated urban area. Advanced data processing algorithms are employed at various stages in assessing the local site effect on earthquake ground motion. First, we used the empirical mode decomposition (EMD), also known as the Hilbert-Huang transform (HHT), to enhance the microtremor data analysis by excluding local transients and continuous monochromatic industrial noise. With this enhancement we have significantly increased the number of data points useful for delineating sediment thickness in this area. Second, we used the cross-correlation of microtremor data acquired at pairs of adjacent sites to generate a 'pseudo-reflection' record, which can be treated as the Green function of a 1D layered earth model at the site. The sediment thickness information obtained this way is also consistent with the results obtained by the horizontal-to-vertical spectral ratio (HVSR) method. For most sites in this area, we can achieve self-consistent results on the sediment thickness among the different processing schemes - the fundamental information to be used in assessing the local site effect. Finally, the pseudo-spectral time domain method was used to simulate the seismic wave propagation caused by a scenario earthquake in this area, the 1679 M8 Sanhe-Pinggu earthquake. The characteristics of the simulated earthquake ground motion show a general correlation with the thickness of the sediments in this area. And more importantly, it is also in agreement
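
    As an illustration of the HVSR step mentioned above, the sketch below computes the horizontal-to-vertical spectral ratio from Welch spectra of a three-component microtremor record and converts the peak frequency into a sediment-thickness estimate using the common quarter-wavelength rule. The sampling rate, the synthetic records and the assumed average shear velocity are placeholders, not the study's data or processing.

```python
# Minimal HVSR sketch: spectral ratio of horizontal to vertical microtremor spectra,
# then thickness ~ Vs / (4 * f0) under the quarter-wavelength assumption.
import numpy as np
from scipy.signal import welch

fs = 100.0                                               # sampling rate [Hz], assumed
rng = np.random.default_rng(3)
north, east, vert = rng.normal(size=(3, 60 * int(fs)))   # placeholder 60 s records

f, p_n = welch(north, fs=fs, nperseg=1024)
_, p_e = welch(east, fs=fs, nperseg=1024)
_, p_z = welch(vert, fs=fs, nperseg=1024)

hvsr = np.sqrt(0.5 * (p_n + p_e) / p_z)                  # H/V spectral ratio
band = (f > 0.2) & (f < 20.0)                            # engineering frequency band
f0 = f[band][np.argmax(hvsr[band])]                      # fundamental resonance frequency

vs_average = 400.0                                       # assumed average shear velocity [m/s]
thickness = vs_average / (4.0 * f0)
print(f"f0 ~ {f0:.2f} Hz, inferred sediment thickness ~ {thickness:.0f} m")
```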

  14. Flat-plate solar array project process development area: Process research of non-CZ silicon material

    Science.gov (United States)

    Campbell, R. B.

    1986-01-01

    Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The use of an excimer laser and high temperature short time diffusion experiment were both more successful than the diffusion at standard temperature and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.

  15. Smart grids infrastructure, technology, and solutions

    CERN Document Server

    Borlase, Stuart

    2012-01-01

    What exactly is smart grid? Why is it receiving so much attention? What are utilities, vendors, and regulators doing about it? Answering these questions and more, Smart Grids: Infrastructure, Technology, and Solutions gives readers a clearer understanding of the drivers and infrastructure of one of the most talked-about topics in the electric utility market: smart grid. This book brings together the knowledge and views of a vast array of experts and leaders in their respective fields. Key features: describes the impetus for change in the electric utility industry; discusses the business drivers, b...

  16. A dual-directional light-control film with a high-sag and high-asymmetrical-shape microlens array fabricated by a UV imprinting process

    International Nuclear Information System (INIS)

    Lin, Ta-Wei; Liao, Yunn-Shiuan; Chen, Chi-Feng; Yang, Jauh-Jung

    2008-01-01

    A dual-directional light-control film with a high-sag and high-asymmetric-shape long gapless hexagonal microlens array fabricated by an ultraviolet (UV) imprinting process is presented. The lens array is designed by ray-tracing simulation and fabricated by a micro-replication process including gray-scale lithography, an electroplating process and UV curing. The shape of the designed lens array is similar to that of a near half-cylindrical lens array with a periodic ripple. Measurements on a prototype show that incident light from a collimated LED with a FWHM dispersion angle of 12° is spread differently along the short and long axes. The numerical and experimental results show that the FWHMs of the view angle for angular brightness in the long- and short-axis directions through the long hexagonal lens are about 34.3° and 18.1°, and 31° and 13°, respectively. Compared with the simulation results, the errors in the long and short axes are about 5% and 16%, respectively. The asymmetric gapless microlens array thus realizes the aim of controlled asymmetric angular brightness. Such a light-control film can be used as a power-saving screen, compared with a conventional diffusing film, for rear projection display applications.

  17. An open data repository and a data processing software toolset of an equivalent Nordic grid model matched to historical electricity market data.

    Science.gov (United States)

    Vanfretti, Luigi; Olsen, Svein H; Arava, V S Narasimham; Laera, Giuseppe; Bidadfar, Ali; Rabuzin, Tin; Jakobsen, Sigurd H; Lavenius, Jan; Baudette, Maxime; Gómez-López, Francisco J

    2017-04-01

    This article presents an open data repository, the methodology used to generate it, and the associated data processing software developed to consolidate an hourly-snapshot historical data set for the year 2015 into an equivalent Nordic power grid model (aka Nordic 44). The consolidation was achieved by matching the model's physical response with respect to historical power flow records in the bidding regions of the Nordic grid that are available from the Nordic electricity market agent, Nord Pool. The model is made available in the form of CIM v14, Modelica and PSS/E (Siemens PTI) files. The Nordic 44 model in Modelica and PSS/E was first presented in the paper titled "iTesla Power Systems Library (iPSL): A Modelica library for phasor time-domain simulations" (Vanfretti et al., 2016) [1] for a single snapshot. In the digital repository made available with the submission of this paper (SmarTSLab_Nordic44 Repository at Github, 2016) [2], a total of 8760 snapshots (for the year 2015) are provided that can be used to initialize and execute dynamic simulations using tools compatible with CIM v14, the Modelica language and the proprietary PSS/E tool. The Python scripts used to generate the snapshots (processed data) are also available, together with all the data, in the GitHub repository (SmarTSLab_Nordic44 Repository at Github, 2016) [2]. This Nordic 44 equivalent model was also used in the iTesla project (iTesla) [3] to carry out simulations within a dynamic security assessment toolset (iTesla, 2016) [4], and has been further enhanced during the ITEA3 OpenCPS project (iTEA3) [5]. The raw data, processed data and output models utilized within the iTesla platform (iTesla, 2016) [4] are also available in the repository. The CIM and Modelica snapshots of the "Nordic 44" model for the year 2015 are available in a Zenodo repository.
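
    The matching of the model's response against hourly market records can be illustrated with a small comparison script. The sketch below is a hedged example only: the CSV file names, column layout and the simple error metrics are hypothetical and do not reproduce the repository's own Python scripts.

```python
# Hedged sketch: compare simulated inter-area flows against historical hourly
# records per bidding region and summarize the mismatch.
import pandas as pd

# Hypothetical inputs: one row per hour, one column per bidding region / interface.
historical = pd.read_csv("nordpool_flows_2015.csv", index_col="hour", parse_dates=True)
simulated = pd.read_csv("nordic44_simulated_flows.csv", index_col="hour", parse_dates=True)

common_cols = historical.columns.intersection(simulated.columns)
mismatch = (simulated[common_cols] - historical[common_cols]).abs()

summary = pd.DataFrame({
    "mean_abs_error_MW": mismatch.mean(),
    "max_abs_error_MW": mismatch.max(),
    "hours_within_5pct": (mismatch <= 0.05 * historical[common_cols].abs()).sum(),
})
print(summary.sort_values("mean_abs_error_MW"))
```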

  18. Process Research On Polycrystalline Silicon Material (PROPSM). [flat plate solar array project

    Science.gov (United States)

    Culik, J. S.

    1983-01-01

    The performance-limiting mechanisms in large-grain (greater than 1 to 2 mm in diameter) polycrystalline silicon solar cells were investigated by fabricating a matrix of 4 sq cm solar cells of various thickness from 10 cm x 10 cm polycrystalline silicon wafers of several bulk resistivities. Analysis of the illuminated I-V characteristics of these cells suggests that bulk recombination is the dominant factor limiting the short-circuit current. The average open-circuit voltage of the polycrystalline solar cells is 30 to 70 mV lower than that of co-processed single-crystal cells; the fill-factor is comparable. Both open-circuit voltage and fill-factor of the polycrystalline cells have substantial scatter that is not related to either thickness or resistivity. This implies that these characteristics are sensitive to an additional mechanism that is probably spatial in nature. A damage-gettering heat-treatment improved the minority-carrier diffusion length in low lifetime polycrystalline silicon, however, extended high temperature heat-treatment degraded the lifetime.

  19. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    Science.gov (United States)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation 3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs), have created an interesting question for general purpose computer designers: is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
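
    To make the parallelized matching step concrete, the sketch below splits a fractional Hamming-distance search over a gallery of iris codes across CPU worker processes. It is a simplified stand-in, not the paper's implementation: the code length, gallery size, and the absence of rotation shifts and occlusion masks are all simplifications.

```python
# Minimal sketch: parallel fractional Hamming-distance iris matching on a multicore CPU.
import numpy as np
from multiprocessing import Pool

CODE_BITS = 2048
rng = np.random.default_rng(4)
gallery = rng.integers(0, 2, size=(10000, CODE_BITS), dtype=np.uint8)  # placeholder codes
probe = rng.integers(0, 2, size=CODE_BITS, dtype=np.uint8)

def hamming_chunk(chunk):
    """Fractional Hamming distance of the probe against one gallery chunk."""
    return (chunk != probe).mean(axis=1)

if __name__ == "__main__":
    n_workers = 4
    chunks = np.array_split(gallery, n_workers)
    with Pool(n_workers) as pool:
        distances = np.concatenate(pool.map(hamming_chunk, chunks))
    best = int(np.argmin(distances))
    print(f"best match: gallery index {best}, distance {distances[best]:.3f}")
```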

  20. Demonstration of array eddy current technology for real-time monitoring of laser powder bed fusion additive manufacturing process

    Science.gov (United States)

    Todorov, Evgueni; Boulware, Paul; Gaah, Kingsley

    2018-03-01

    Nondestructive evaluation (NDE) at various fabrication stages is required to assure the quality of feedstock and solid builds. Industry efforts are shifting towards solutions that can provide real-time, layer-by-layer monitoring of the additive manufacturing (AM) fabrication process while the component is being built, to reduce or eliminate dependence on post-process inspection. Array eddy current (AEC), an electromagnetic NDE technique, was developed and implemented to directly scan the component without physical contact with the powder and fused layer surfaces at elevated temperatures inside an L-PBF chamber. The technique can detect discontinuities, surface irregularities, and undesirable metallurgical phase transformations in magnetic and nonmagnetic conductive materials used for laser fusion. The AEC hardware and software were integrated with the L-PBF test bed. Two layer-by-layer tests of Inconel 625 coupons with AM-built discontinuities and lack of fusion were conducted inside the L-PBF chamber. The AEC technology demonstrated excellent sensitivity to seeded, natural surface, and near-surface-embedded discontinuities, while also detecting surface topography. The data were acquired and imaged in a layer-by-layer sequence, demonstrating the real-time monitoring capabilities of this new technology.

  1. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    Science.gov (United States)

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative measures. The proposed framework also aims to standardize the image processing and to compute quantitative measures useful for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments to the image processing parameter estimation.
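
    The pipeline described above (thresholding plus watershed for the fluorescence channel, circular Hough transform for the transmitted-light channel) can be sketched with standard scikit-image building blocks. This is not the authors' code: the input file names, the electrode radius range and the peak-detection parameters are assumptions.

```python
# Minimal sketch of the described two-channel analysis using scikit-image.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, feature, measure, segmentation, transform

fluo = io.imread("fluorescence_channel.tif", as_gray=True)       # hypothetical inputs
trans = io.imread("transmitted_light_channel.tif", as_gray=True)

# --- neuron segmentation: Otsu threshold -> distance transform -> watershed ---
mask = fluo > filters.threshold_otsu(fluo)
distance = ndi.distance_transform_edt(mask)
peaks = feature.peak_local_max(distance, min_distance=10, labels=measure.label(mask))
markers = np.zeros_like(mask, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = segmentation.watershed(-distance, markers, mask=mask)
print("neuron count:", len(measure.regionprops(labels)))

# --- microelectrode positions: circular Hough transform on edges ---
edges = feature.canny(trans, sigma=2.0)
radii = np.arange(8, 16)                                          # assumed radius range (px)
hspaces = transform.hough_circle(edges, radii)
_, cx, cy, r = transform.hough_circle_peaks(hspaces, radii, total_num_peaks=60)
print("electrodes found:", len(cx))
```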

  2. Silver front electrode grids for ITO-free all printed polymer solar cells with embedded and raised topographies, prepared by thermal imprint, flexographic and inkjet roll-to-roll processes.

    Science.gov (United States)

    Yu, Jong-Su; Kim, Inyoung; Kim, Jung-Su; Jo, Jeongdai; Larsen-Olsen, Thue T; Søndergaard, Roar R; Hösel, Markus; Angmo, Dechan; Jørgensen, Mikkel; Krebs, Frederik C

    2012-09-28

    Semitransparent front electrodes for polymer solar cells that are printable and roll-to-roll processable under ambient conditions using different approaches are explored in this report. The excellent smoothness of indium-tin-oxide (ITO) electrodes has traditionally been believed to be difficult to achieve using printed front grids, as surface topographies accumulate when processing subsequent layers, leading to shunts between the top and bottom printed metallic electrodes. Here we demonstrate how aqueous nanoparticle-based silver inks can be employed as printed front electrodes using several different roll-to-roll techniques. We thus compare hexagonal silver grids prepared using either roll-to-roll inkjet or roll-to-roll flexographic printing. Both inkjet and flexo grids present a raised topography and were found to perform differently due only to the conductivity of the obtained silver grid. The raised topographies were compared with a roll-to-roll thermally imprinted grid that was filled with silver in a roll-to-roll process, thus presenting an embedded topography. The embedded grid and the flexo grid were found to perform equally well, with the flexographic technique currently presenting the fastest processing and the lowest silver use, whereas the embedded grid presents the maximally achievable optical transparency and conductivity. Polymer solar cells were prepared in the same step, using roll-to-roll slot-die coating of zinc oxide as the electron transport layer, poly-3-hexylthiophene:phenyl-C(61)-butyric acid methyl ester (P3HT:PCBM) as the active layer and poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) as the top electrode, along with a flat-bed screen-printed silver grid. The power conversion efficiency (PCE) obtained for large-area devices (6 cm(2)) was 1.84%, 0.79% and 1.72%, respectively, for thermally imprinted, inkjet and flexographic silver grids, tested outside under the real sun. Central to all three approaches was that they

  3. Holey carbon micro-arrays for transmission electron microscopy: A microcontact printing approach

    International Nuclear Information System (INIS)

    Chester, David W.; Klemic, James F.; Stern, Eric; Sigworth, Fred J.; Klemic, Kathryn G.

    2007-01-01

    We have used a microcontact printing approach to produce high quality and inexpensive holey carbon micro-arrays. Fabrication involves: (1) micromolding a poly(dimethylsiloxane) (PDMS) elastomer stamp from a microfabricated master that contains the desired array pattern; (2) using the PDMS stamp for microcontact printing a thin sacrificial plastic film that contains an array of holes; (3) floating the plastic film onto TEM grids; (4) evaporating carbon onto the plastic film and (5) removing the sacrificial plastic film. The final holey carbon micro-arrays are ready for use as support films in TEM applications with the fidelity of the original microfabricated pattern. This approach is cost effective as both the master and the stamps have long-term reusability. Arbitrary array patterns can be made with microfabricated masters made through a single-step photolithographic process

  4. The Design of Distributed Micro Grid Energy Storage System

    Science.gov (United States)

    Liang, Ya-feng; Wang, Yan-ping

    2018-03-01

    When a distributed micro-grid runs in island mode, the energy storage system is the core component maintaining stable operation. The existing fixed-connection energy storage structure is difficult to adjust during operation and can easily cause volatility in the micro-grid. In this paper, an array-type energy storage structure is proposed, and the structure and working principle of the array-type energy storage system are analyzed. Finally, an array-type energy storage model is established in MATLAB; the simulation results show that the array-type energy storage system has great flexibility, maximizes the utilization of the energy storage system, guarantees reliable operation of the distributed micro-grid, and achieves the function of peak clipping and valley filling.

  5. Nimbus-5 ESMR Polar Gridded Brightness Temperatures, Version 2

    Data.gov (United States)

    National Aeronautics and Space Administration — The Nimbus-5 Electrically Scanning Microwave Radiometer (ESMR) data set consists of gridded brightness temperature arrays for the Arctic and Antarctic, spanning 11...

  6. Probabilistic Learning by Rodent Grid Cells.

    Science.gov (United States)

    Cheung, Allen

    2016-10-01

    Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition but their diverse response properties still defy explanation. No plausible model exists which explains stable grids in darkness for twenty minutes or longer, despite being one of the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties such as attractor dynamics and grid anisotropy seem to be at odds with one another unless additional properties are assumed such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments are reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single cell level with attractor dynamics at the cell ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning. These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population

  7. Virtual Machine Lifecycle Management in Grid and Cloud Computing

    OpenAIRE

    Schwarzkopf, Roland

    2015-01-01

    Virtualization is the foundation for two important technologies: Virtualized Grid and Cloud Computing. Virtualized Grid Computing is an extension of the Grid Computing concept introduced to satisfy the security and isolation requirements of commercial Grid users. Applications are confined in virtual machines to isolate them from each other and the data they process from other users. Apart from these important requirements, Virtual...

  8. Support grid for fuel elements in a nuclear reactor

    International Nuclear Information System (INIS)

    Finch, L.M.

    1977-01-01

    A support grid is provided for holding nuclear fuel rods in a rectangular array. Intersecting sheet metal strips are interconnected using opposing slots in the strips to form a rectangular cellular grid structure for engaging the sides of a multiplicity of fuel rods. Spring and dimple supports for engaging fuel and guide rods extending through each cell in the support grid are formed in the metal strips with the springs thus formed being characterized by nonlinear spring rates

  9. Nuclear reactor spring strip grid spacer

    International Nuclear Information System (INIS)

    Patterson, J.F.; Flora, B.S.

    1980-01-01

    An improved and novel grid spacer for maintaining the fuel rods of a nuclear reactor fuel assembly in substantially parallel array is described. The invention provides for spring strips to maintain the fuel elements in their desired orientation which have more positive alignment than previous types while allowing greater flexibility to counterbalance the effects of differential thermal expansion. (UK)

  10. 78 FR 9678 - Multi-stakeholder Process To Develop a Voluntary Code of Conduct for Smart Grid Data Privacy

    Science.gov (United States)

    2013-02-11

    ... providing consumer energy use services. DATES: Tuesday, February 26, 2013 (9:30 a.m. to 4:30 p.m., Eastern) ... Privacy and Promoting Innovation in the Global Digital Economy (Privacy Blueprint). The Privacy Blueprint outlines a multi-stakeholder process for developing voluntary codes of conduct that, if adopted by...

  11. Grid Integration Research | Wind | NREL

    Science.gov (United States)

    Researchers study grid integration of wind. Capabilities: NREL's grid integration [...] electric power system operators to more efficiently manage wind grid system integration. [Page images: three wind turbines with transmission lines in the background; a photo of ...]

  12. Unpacking Big Systems -- Natural Language Processing Meets Network Analysis. A Study of Smart Grid Development in Denmark

    DEFF Research Database (Denmark)

    Jurowetzki, Roman

    ... and contained technological trajectories on a national level using a combination of methods from statistical natural language processing, vector space modelling and network analysis. The proposed approach does not aim at replacing the researcher or expert but rather offers the possibility to algorithmically ... in Denmark. Results show that in the explored case it is not mainly new technologies and applications that are driving change but innovative re-combinations of old and new technologies.

  13. Parallel and convergent processing in grid cell, head-direction cell, boundary cell, and place cell networks.

    Science.gov (United States)

    Brandon, Mark P; Koenig, Julie; Leutgeb, Stefan

    2014-03-01

    The brain is able to construct internal representations that correspond to external spatial coordinates. Such brain maps of the external spatial topography may support a number of cognitive functions, including navigation and memory. The neuronal building block of brain maps are place cells, which are found throughout the hippocampus of rodents and, in a lower proportion, primates. Place cells typically fire in one or few restricted areas of space, and each area where a cell fires can range, along the dorsoventral axis of the hippocampus, from 30 cm to at least several meters. The sensory processing streams that give rise to hippocampal place cells are not fully understood, but substantial progress has been made in characterizing the entorhinal cortex, which is the gateway between neocortical areas and the hippocampus. Entorhinal neurons have diverse spatial firing characteristics, and the different entorhinal cell types converge in the hippocampus to give rise to a single, spatially modulated cell type-the place cell. We therefore suggest that parallel information processing in different classes of cells-as is typically observed at lower levels of sensory processing-continues up into higher level association cortices, including those that provide the inputs to hippocampus. WIREs Cogn Sci 2014, 5:207-219. doi: 10.1002/wcs.1272

  14. Comparison tomography relocation hypocenter grid search and guided grid search method in Java island

    International Nuclear Information System (INIS)

    Nurdian, S. W.; Adu, N.; Palupi, I. R.; Raharjo, W.

    2016-01-01

    The main data in this research are earthquake records from 1952 to 2012, comprising 9162 P-wave arrivals from 2426 events recorded by 30 stations located around Java island. Hypocenter relocation was carried out using the grid search and guided grid search methods. The relocated hypocenters then become input for the pseudo-bending tomographic inversion process, which can be used to identify the velocity distribution in the subsurface. The relocation results from the grid search and guided grid search methods after tomography are shown both locally and globally. Locally, the grid search result is better than the guided grid search result, according to the geological research area. Over a broad (global) area, however, the guided grid search result is better, because the velocity variation is more diverse and in accordance with local geological conditions. (paper)
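
    The grid search named above can be illustrated with a very small example: scan a 3-D grid of candidate hypocenters and keep the node with the smallest travel-time residual. The sketch below assumes a constant P velocity, synthetic picks and a demeaned residual to absorb the unknown origin time; real relocations (and the guided variant, which refines around a prior location) use layered or 3-D velocity models.

```python
# Minimal grid-search hypocenter relocation sketch under a constant-velocity assumption.
import numpy as np

vp = 6.0                                                   # assumed P velocity [km/s]
stations = np.array([[0, 0, 0], [50, 10, 0], [20, 60, 0], [80, 70, 0]], float)  # x,y,z [km]
true_hypo = np.array([40.0, 30.0, 15.0])
t_obs = np.linalg.norm(stations - true_hypo, axis=1) / vp  # synthetic arrival times

xs = np.arange(0, 100, 2.0)
ys = np.arange(0, 100, 2.0)
zs = np.arange(0, 40, 2.0)
best, best_rms = None, np.inf
for x in xs:
    for y in ys:
        for z in zs:
            t_calc = np.linalg.norm(stations - np.array([x, y, z]), axis=1) / vp
            resid = t_calc - t_obs
            resid -= resid.mean()                          # absorb unknown origin time
            rms = np.sqrt((resid ** 2).mean())
            if rms < best_rms:
                best, best_rms = (x, y, z), rms
print("best grid node:", best, "RMS residual:", round(best_rms, 4), "s")
```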

  15. Mapping of grid faults and grid codes[Wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Iov, F. [Aalborg Univ., Inst. of Energy Technology (Denmark); Hansen, Anca D.; Soerensen, Poul; Cutululis, N.A. [Risoe National Lab. - DTU, Wind Enegy Dept., Roskilde (Denmark)

    2007-06-15

    The objective of this project is to investigate the consequences of the new grid connection requirements for the fatigue and extreme loads of wind turbines. The goal is also to clarify and define possible new directions in the certification process of power plant wind turbines, namely wind turbines which participate actively in the stabilisation of power systems. Practical experience shows that there is a need for such investigations. The grid connection requirements for wind turbines have increased significantly during the last 5-10 years. Especially the requirements for wind turbines to stay connected to the grid during and after voltage sags imply potential challenges in the design of wind turbines. These requirements pose challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview of the frequency of grid faults and the grid connection requirements in different relevant countries is given in this report. The goal of this report is to present a mapping of different grid fault types and their frequency in different countries. The report also provides a detailed overview of the Low Voltage Ride-Through Capabilities for wind turbines in different relevant countries. The most relevant study cases for the quantification of the loads' impact on the wind turbines' lifetime are defined. (au)

  16. Radar techniques using array antennas

    CERN Document Server

    Wirth, Wulf-Dieter

    2013-01-01

    Radar Techniques Using Array Antennas is a thorough introduction to the possibilities of radar technology based on electronic steerable and active array antennas. Topics covered include array signal processing, array calibration, adaptive digital beamforming, adaptive monopulse, superresolution, pulse compression, sequential detection, target detection with long pulse series, space-time adaptive processing (STAP), moving target detection using synthetic aperture radar (SAR), target imaging, energy management and system parameter relations. The discussed methods are confirmed by simulation studies.

  17. ATLAS Grid Workflow Performance Optimization

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment grid workflow system routinely manages 250,000 to 500,000 concurrently running production and analysis jobs to process simulation and detector data. In total, more than 300 PB of data are distributed over more than 150 sites in the WLCG. At this scale, small improvements in software and computing performance and in the workflows can lead to significant resource usage gains. ATLAS is reviewing, together with CERN IT experts, several typical simulation and data processing workloads for potential performance improvements in terms of memory and CPU usage, disk and network I/O. All ATLAS production and analysis grid jobs are instrumented to collect many performance metrics for detailed statistical studies using modern data analytics tools like ElasticSearch and Kibana. This presentation will review and explain the performance gains of several ATLAS simulation and data processing workflows and present analytics studies of the ATLAS grid workflows.

  18. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
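
    The two-pass scheme in the abstract can be sketched as follows; this is only a minimal illustration assuming axis-aligned boxes as the objects, a 1-D decomposition of a 2-D cell grid into column slabs, and hypothetical helper names (it is not the patented implementation).

        from multiprocessing import Pool
        import numpy as np

        N, n_procs = 64, 4                        # 64x64 cell grid, 4 worker processes
        rng = np.random.default_rng(0)
        mins = rng.random((1000, 2)) * 0.95
        objects = np.hstack([mins, mins + 0.05])  # boxes (xmin, ymin, xmax, ymax) in [0, 1)

        def portions_touched(obj):
            """Pass 1: which grid portions (column slabs) at least partially bound this object?"""
            first = int(obj[0] * n_procs)
            last = int(min(obj[2], 0.999) * n_procs)
            return list(range(first, last + 1))

        def populate_portion(args):
            """Pass 2: one worker populates its own slab of cells with its candidate objects."""
            portion, objs = args
            cells = {}
            x0, x1 = portion / n_procs, (portion + 1) / n_procs
            for obj in objs:
                ci0 = int(max(obj[0], x0) * N)
                ci1 = int(min(obj[2], x1 - 1e-9) * N)
                cj0 = int(obj[1] * N)
                cj1 = int(min(obj[3], 0.999) * N)
                for ci in range(ci0, ci1 + 1):
                    for cj in range(cj0, cj1 + 1):
                        cells.setdefault((ci, cj), []).append(tuple(obj))
            return portion, cells

        if __name__ == "__main__":
            with Pool(n_procs) as pool:
                # pass 1: every object is tagged with the grid portion(s) that bound it
                tags = pool.map(portions_touched, objects)
                work = [(p, [o for o, t in zip(objects, tags) if p in t]) for p in range(n_procs)]
                # pass 2: each portion is then populated independently
                results = dict(pool.map(populate_portion, work))
            print(sum(len(c) for c in results.values()), "occupied (portion, cell) entries")

    Because each worker only writes cells inside its own slab, the second pass needs no locking between portions.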

  19. Three-Phase Grid-Connected of Photovoltaic Generator Using Nonlinear Control

    DEFF Research Database (Denmark)

    Yahya, A.; El Fadil, H.; Guerrero, Josep M.

    2014-01-01

    This paper proposes a nonlinear control methodology for a three-phase grid-connected PV generator. The system consists of a PV array, a voltage source inverter, a grid filter and the electric grid. The controller objectives are threefold: i) ensuring maximum power point tracking (MPPT) in the side...... stability analysis and simulation results that the proposed controller meets all the objectives....
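
    The paper's nonlinear controller itself is not reproduced here; purely as an illustration of the MPPT objective listed under i), below is a generic perturb-and-observe sketch driving a hypothetical PV model (both the model and its parameters are assumptions).

        import math

        def perturb_and_observe(measure_pv, v_ref=30.0, dv=0.5, steps=200):
            """Generic P&O MPPT: nudge the array voltage reference toward higher power."""
            p_prev = 0.0
            for _ in range(steps):
                v, i = measure_pv(v_ref)   # hypothetical plant interface: returns (V, I)
                p = v * i
                if p < p_prev:             # the last perturbation reduced the power,
                    dv = -dv               # so reverse the perturbation direction
                v_ref += dv
                p_prev = p
            return v_ref

        def toy_pv(v_ref, i_sc=8.0, v_oc=44.0):
            """Toy PV string model (assumption): current collapses near open circuit."""
            v = max(0.0, min(v_ref, v_oc))
            return v, i_sc * (1.0 - math.exp((v - v_oc) / 3.0))

        print("approximate MPP voltage (V):", round(perturb_and_observe(toy_pv), 1))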

  20. Smart grid security

    CERN Document Server

    Goel, Sanjay; Papakonstantinou, Vagelis; Kloza, Dariusz

    2015-01-01

    This book on smart grid security is meant for a broad audience from managers to technical experts. It highlights security challenges that are faced in the smart grid as we widely deploy it across the landscape. It starts with a brief overview of the smart grid and then discusses some of the reported attacks on the grid. It covers network threats, cyber physical threats, smart metering threats, as well as privacy issues in the smart grid. Along with the threats the book discusses the means to improve smart grid security and the standards that are emerging in the field. The second part of the b

  1. Signal Processing Algorithms for Down-Stream Traffic in Next Generation 10 Gbit/s Fixed-Grid Passive Optical Networks

    Directory of Open Access Journals (Sweden)

    Rameez Asif

    2014-01-01

    Full Text Available We have analyzed the impact of digital and optical signal processing algorithms, that is, Volterra equalization (VE), digital backpropagation (BP), and optical phase conjugation with nonlinearity module (OPC-NM), in next-generation 10 Gbit/s (also referred to as XG) DP-QPSK long-haul WDM (fixed-grid) passive optical networks (PON) without midspan repeaters over a 120 km standard single-mode fiber (SMF) link for downstream signals. Due to the compensation of optical Kerr effects, the sensitivity penalty is improved by 2 dB with the BP algorithm, 1.5 dB with the VE algorithm, and 2.69 dB with OPC-NM. Moreover, with the implementation of the nonlinear equalization techniques, we are able to reach a transmission distance of 126.6 km of SMF for a 1:1024 split ratio at 5 GHz channel spacing in the nonlinear region.
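
    Of the three equalizers compared, digital backpropagation is the most straightforward to sketch: the received field is propagated through a virtual fiber whose dispersion, loss and Kerr nonlinearity carry the opposite sign. The split-step sketch below is only indicative; the step count, the fiber parameters and the sign conventions are assumptions, and the paper's XG-PON implementation is more elaborate.

        import numpy as np

        def digital_backpropagation(rx, fs, span_km, n_steps=20,
                                    beta2=-21.7e-27, gamma=1.3e-3, alpha_db_km=0.2):
            """Single-channel DBP over one fiber span (split-step Fourier, inverted fiber)."""
            h = span_km * 1e3 / n_steps                          # step length, m
            alpha = alpha_db_km * np.log(10.0) / 10.0 / 1e3      # power attenuation, 1/m
            w = 2.0 * np.pi * np.fft.fftfreq(rx.size, d=1.0 / fs)
            inv_linear = np.exp((alpha / 2.0 + 1j * beta2 / 2.0 * w ** 2) * h)
            a = rx.astype(complex)
            for _ in range(n_steps):
                a = np.fft.ifft(np.fft.fft(a) * inv_linear)       # undo dispersion and loss
                a = a * np.exp(-1j * gamma * np.abs(a) ** 2 * h)  # undo the Kerr phase rotation
            return a

        # Hypothetical usage on a dummy received waveform (2 samples per symbol at 10 Gbaud)
        rx = np.exp(1j * np.random.default_rng(1).normal(size=4096))
        equalized = digital_backpropagation(rx, fs=20e9, span_km=120)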

  2. Negotiation time table and realization timetable in the grid connection process according to KraftNAV; Verhandlungs- und Realisierungsfahrplan im Netzanschlussverfahren nach der KraftNAV

    Energy Technology Data Exchange (ETDEWEB)

    Buchmann, Felix [Sozietaet Alber Buchmann Stefan Rechtsanwaelte, Reutlingen (Germany)

    2010-04-15

    With the ordinance on the regulation of grid connection for electricity generating plants (KraftNAV), the conditions of grid connection were made concrete for power station projects with a nominal output of at least 100 MW connecting at a voltage level of at least 110 kV. Against this background, the author reports on the negotiation timetable and realization timetable in the grid connection procedure according to the KraftNAV. The following aspects are treated: overview of the grid connection procedure according to KraftNAV; promise of the connection and reservation fee; negotiation timetable; realization timetable; requirement timetable; duties to supply information.

  3. Applying the sequential neural-network approximation and orthogonal array algorithm to optimize the axial-flow cooling system for rapid thermal processes

    International Nuclear Information System (INIS)

    Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin

    2009-01-01

    The sequential neural-network approximation and orthogonal array (SNAOA) approach was used in this study to shorten the cooling time of the rapid cooling process such that the normalized maximum resolved stress in the silicon wafer was always below one. An orthogonal array experiment was first conducted to obtain the initial solution set, which was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain and obtain the optimal parameter setting. The size of the training sample was greatly reduced due to the use of the orthogonal array. In addition, a restart strategy was incorporated into the SNAOA so that the search process has a better opportunity to reach a near-global optimum. In this work, three different cooling control schemes during the rapid thermal process were considered: (1) a downward axial gas flow cooling scheme; (2) an upward axial gas flow cooling scheme; (3) a dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, the other control factors, such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter, were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach.
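
    In outline, the SNAOA loop can be mimicked as below: an orthogonal initial design seeds a back-propagation network surrogate, whose predicted optimum is evaluated and fed back into the training sample. The two-factor objective, the factor levels, the candidate pool and the network size are placeholders, not the paper's cooling model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def cooling_time(x):
            """Hypothetical objective: cooling time vs. (flow rate, inlet diameter), scaled to [0, 1]."""
            return (x[0] - 0.7) ** 2 + 2.0 * (x[1] - 0.3) ** 2 + 0.1 * np.sin(8.0 * x[0])

        levels = np.array([0.1, 0.5, 0.9])
        X = np.array([[a, b] for a in levels for b in levels])   # 3x3 orthogonal initial design
        y = np.array([cooling_time(x) for x in X])

        candidates = np.random.default_rng(0).random((2000, 2))
        for _ in range(5):                                       # sequential refinement loop
            net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                               random_state=0).fit(X, y)
            x_new = candidates[np.argmin(net.predict(candidates))]   # optimum of the surrogate
            X = np.vstack([X, x_new])
            y = np.append(y, cooling_time(x_new))                # evaluate it and enlarge the sample

        print("best setting found:", np.round(X[np.argmin(y)], 2), " cooling time:", round(y.min(), 4))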

  4. Online Variable Topology-Type Photovoltaic Grid-Connected Inverter

    DEFF Research Database (Denmark)

    Wu, Fengjiang; Sun, Bo; Duan, Jiandong

    2015-01-01

    In a photovoltaic (PV) grid-connected generation system, the key focus is how to expand the generation range of the PV array and enhance the total efficiency of the system. This paper originally derives expressions for the total loss and the grid-current total harmonic distortion of the cascaded inverter...... and H-bridge inverter under the conditions of variable output voltage and power of the PV array. It is proved that, compared with the H-bridge inverter, the operation range of the cascaded inverter is wider, whereas the total loss is larger. Furthermore, a novel online variable topology-type grid......-connected inverter is proposed. A bidirectional power switch is introduced into the conventional cascaded inverter to connect the negative terminals of the PV arrays. When the output voltages of the PV arrays are lower, the proposed inverter works in cascaded inverter mode to obtain a wider generation range. When...

  5. Grid generation methods

    CERN Document Server

    Liseikin, Vladimir D

    2010-01-01

    This book is an introduction to structured and unstructured grid methods in scientific computing, addressing graduate students, scientists as well as practitioners. Basic local and integral grid quality measures are formulated and new approaches to mesh generation are reviewed. In addition to the content of the successful first edition, a more detailed and practice oriented description of monitor metrics in Beltrami and diffusion equations is given for generating adaptive numerical grids. Also, new techniques developed by the author are presented, in particular a technique based on the inverted form of Beltrami’s partial differential equations with respect to control metrics. This technique allows the generation of adaptive grids for a wide variety of computational physics problems, including grid clustering to given function values and gradients, grid alignment with given vector fields, and combinations thereof. Applications of geometric methods to the analysis of numerical grid behavior as well as grid ge...

  6. Smart grid business case for private homes

    DEFF Research Database (Denmark)

    Villefrance, Rasmus; Brandt, Jonas; Eriksen, Poul Svante

    2013-01-01

    We describe and consider how the potential of energy savings may drive the penetration of smart grid technology into private homes. We assess the sociological processes which lead to energy savings when the residents have access to smart grid technology. We propose a way to establish a cash flow...... from consumers via electrical distribution companies to smart grid technology providers on the Danish market. Finally, we assess the impact of such a business development on the society, as well as relating the penetration of smart grid technology in private homes to the societal goal of 100% renewable...

  7. Toward a Grid Work flow Formal Composition

    International Nuclear Information System (INIS)

    Hlaoui, Y. B.; BenAyed, L. J.

    2007-01-01

    This paper presents a new approach for the composition of grid workflow models. The approach proposes an abstract syntax for UML Activity Diagrams (UML-AD) and a formal foundation for grid workflow composition in the form of a workflow algebra based on UML-AD. This composition fulfils the need for collaborative model development, in particular the specification of grid workflow models and the reduction of the complexity of their verification. This complexity has arisen with the increase in scale of grid workflow applications such as science and e-business applications, since large amounts of computational resources are required and multiple parties may be involved in the development process and in the use of grid workflows. Furthermore, the proposed algebra allows the definition of workflow views, which are useful for limiting access to predefined users in order to ensure the security of grid workflow applications. (Author)

  8. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which r...

  9. Smart grid in China

    DEFF Research Database (Denmark)

    Sommer, Simon; Ma, Zheng; Jørgensen, Bo Nørregaard

    2015-01-01

    China is planning to transform its traditional power grid in favour of a smart grid, since it allows a more economically efficient and a more environmentally friendly transmission and distribution of electricity. Thus, a nationwide smart grid is likely to save tremendous amounts of resources...

  10. Optimization of processing parameters on the controlled growth of c-axis oriented ZnO nanorod arrays

    Energy Technology Data Exchange (ETDEWEB)

    Malek, M. F., E-mail: mfmalek07@gmail.com; Rusop, M., E-mail: rusop@salam.uitm.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); NANO-SciTech Centre (NST), Institute of Science (IOS), Universiti Teknologi MARA - UiTM, 40450 Shah Alam, Selangor (Malaysia); Mamat, M. H., E-mail: hafiz-030@yahoo.com [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); Musa, M. Z., E-mail: musa948@gmail.com [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM) Pulau Pinang, Jalan Permatang Pauh, 13500 Permatang Pauh, Pulau Pinang (Malaysia); Saurdi, I., E-mail: saurdy788@gmail.com; Ishak, A., E-mail: ishak@sarawak.uitm.edu.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM) Sarawak, Kampus Kota Samarahan, Jalan Meranek, 94300 Kota Samarahan, Sarawak (Malaysia); Alrokayan, Salman A. H., E-mail: dr.salman@alrokayan.com; Khan, Haseeb A., E-mail: khan-haseeb@yahoo.com [Chair of Targeting and Treatment of Cancer Using Nanoparticles, Deanship of Scientific Research, King Saud University (KSU), Riyadh 11451 (Saudi Arabia)

    2016-07-06

    Optimization of the growth time parameter was conducted to synthesize high-quality c-axis-oriented ZnO nanorod arrays. The effects of this parameter on crystal growth and properties were systematically investigated. Our studies confirmed that the growth time influences the properties of the ZnO nanorods, with the crystallite size of the structures increasing at longer deposition times. Field emission scanning electron microscope analysis confirmed the morphology of the ZnO nanorods. The ZnO nanostructures prepared under the optimized growth conditions showed an intense XRD peak, which reveals highly c-axis-oriented ZnO nanorod arrays and thus demonstrates the formation of a defect-free structure.

  11. Biological Inspired Stochastic Optimization Technique (PSO for DOA and Amplitude Estimation of Antenna Arrays Signal Processing in RADAR Communication System

    Directory of Open Access Journals (Sweden)

    Khurram Hammed

    2016-01-01

    Full Text Available This paper presents a stochastic global optimization technique known as Particle Swarm Optimization (PSO) for the joint estimation of amplitude and direction of arrival (DOA) of targets in a RADAR communication system. The proposed scheme is an excellent optimization methodology and a promising approach for solving DOA problems in communication systems. Moreover, PSO is quite suitable for real-time scenarios and easy to implement in hardware. In this study, a uniform linear array is used and the targets are assumed to be in the far field of the array. The fitness function is based on the mean square error, and it requires only a single snapshot to obtain the best possible solution. To check the accuracy of the algorithm, all results are taken by varying the number of antenna elements and targets. Finally, these results are compared with existing heuristic techniques to show the accuracy of PSO.
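
    A minimal global-best PSO sketch of the single-snapshot, mean-square-error fitness described above is given below; the array size, the number of targets, the noise level and the swarm parameters are assumptions, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(3)
        M, K, d = 10, 2, 0.5                          # elements, targets, spacing in wavelengths
        true_theta = np.deg2rad([20.0, -35.0])
        true_amp = np.array([1.0, 0.7])

        def steering(theta):
            n = np.arange(M)[:, None]
            return np.exp(2j * np.pi * d * n * np.sin(theta)[None, :])   # M x K steering matrix

        snapshot = steering(true_theta) @ true_amp + 0.05 * (rng.normal(size=M) + 1j * rng.normal(size=M))

        def fitness(p):
            """Mean-square error between the observed snapshot and the modelled one."""
            amp, theta = p[:K], p[K:]
            return np.mean(np.abs(snapshot - steering(theta) @ amp) ** 2)

        # Plain global-best PSO over the parameter vector [amplitudes, angles]
        n_part, iters, w, c1, c2 = 40, 300, 0.7, 1.5, 1.5
        lo = np.array([0.0] * K + [-np.pi / 2] * K)
        hi = np.array([2.0] * K + [np.pi / 2] * K)
        x = rng.uniform(lo, hi, size=(n_part, 2 * K))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()

        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([fitness(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmin(pbest_f)].copy()

        print("estimated angles (deg):", np.round(np.rad2deg(gbest[K:]), 1),
              " amplitudes:", np.round(gbest[:K], 2))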

  12. Imaging RF Phased Array Receivers using Optically-Coherent Up-conversion for High Beam-Bandwidth Processing

    Science.gov (United States)

    2017-03-01

    It does so by using an optical lens to perform an inverse spatial Fourier Transform on the up-converted RF signals, thereby rendering a real-time... simultaneous beams or other engineered beam patterns. There are two general approaches to array-based beam forming: digital and analog. In digital beam...of significantly limiting the number of beams that can be formed simultaneously and narrowing the operational bandwidth. An alternate approach that

  13. Array capabilities and future arrays

    International Nuclear Information System (INIS)

    Radford, D.

    1993-01-01

    Early results from the new third-generation instruments GAMMASPHERE and EUROGAM are confirming the expectation that such arrays will have a revolutionary effect on the field of high-spin nuclear structure. When completed, GAMMASPHERE will have a resolving power an order of magnitude greater than that of the best second-generation arrays. When combined with other instruments such as particle-detector arrays and fragment mass analysers, the capabilities of the arrays for the study of more exotic nuclei will be further enhanced. In order to better understand the limitations of these instruments, and to design improved future detector systems, it is important to have an intelligible and reliable calculation of the relative resolving power of different instrument designs. The derivation of such a figure of merit will be briefly presented, and the relative sensitivities of arrays currently proposed or under construction presented. The design of TRIGAM, a new third-generation array proposed for Chalk River, will also be discussed. It is instructive to consider how far arrays of Compton-suppressed Ge detectors could be taken. For example, it will be shown that an idealised 'perfect' third-generation array of 1000 detectors has a sensitivity an order of magnitude higher again than that of GAMMASPHERE. Less conventional options for new arrays will also be explored

  14. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    Directory of Open Access Journals (Sweden)

    Peter Kampmann

    2014-04-01

    Full Text Available With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, which combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach.

  15. Demand Response in Smart Grids

    DEFF Research Database (Denmark)

    Hansen, Jacob; Knudsen, Jesper Viese; Annaswamy, Anuradha M.

    2014-01-01

    In recent decades, moves toward higher integration of Renewable Energy Resources have called for fundamental changes in both the planning and operation of the overall power grid. One such change is the incorporation of Demand Response (DR), the process by which consumers can adjust their demand...

  16. Grid Architecture 2

    Energy Technology Data Exchange (ETDEWEB)

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholders with insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture.

  17. A new automatic synthetic aperture radar-based flood mapping application hosted on the European Space Agency's Grid Processing on Demand Fast Access to Imagery environment

    Science.gov (United States)

    Matgen, Patrick; Giustarini, Laura; Hostache, Renaud

    2012-10-01

    This paper introduces an automatic flood mapping application that is hosted on the Grid Processing on Demand (GPOD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to operationally deliver maps of flooded areas using both recent and historical acquisitions of SAR data. Having as a short-term target the flooding-related exploitation of data generated by the upcoming ESA SENTINEL-1 SAR mission, the flood mapping application consists of two building blocks: i) a set of query tools for selecting the "crisis image" and the optimal corresponding "reference image" from the G-POD archive and ii) an algorithm for extracting flooded areas via change detection using the previously selected "crisis image" and "reference image". Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate reference image. Potential users will also be able to apply the implemented flood delineation algorithm. The latter combines histogram thresholding, region growing and change detection, enabling automatic, objective and reliable flood extent extraction from SAR images. Both algorithms are computationally efficient and operate with minimum data requirements. The case study of the high-magnitude flooding event that occurred in July 2007 on the Severn River, UK, and that was observed with a moderate-resolution SAR sensor as well as airborne photography highlights the performance of the proposed online application. The flood mapping application on G-POD can be used sporadically, i.e. whenever a major flood event occurs and there is a demand for SAR-based flood extent maps. In the long term, a potential extension of the application could consist in systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis.
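
    In essence, the flood delineation step combines a backscatter threshold (which the application calibrates from a statistical distribution of open-water backscatter) with change detection against the reference image. The toy sketch below uses fixed thresholds and omits the region-growing refinement, so it only illustrates the principle, not the implemented algorithm.

        import numpy as np

        def map_flood(crisis_db, reference_db, water_thresh_db=-15.0, change_db=-3.0):
            """Toy flood mask: low backscatter in the crisis image AND a clear drop
            relative to the pre-flood reference image (change detection)."""
            open_water = crisis_db < water_thresh_db
            decreased = (crisis_db - reference_db) < change_db
            return open_water & decreased

        # Hypothetical 5x5 backscatter images in dB
        reference = np.full((5, 5), -8.0)
        crisis = reference.copy()
        crisis[1:4, 1:4] = -18.0          # a patch that turned into smooth open water
        print(map_flood(crisis, reference).astype(int))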

  18. Smart grid technologies in local electric grids

    Science.gov (United States)

    Lezhniuk, Petro D.; Pijarski, Paweł; Buslavets, Olga A.

    2017-08-01

    The research is devoted to creating favorable conditions for the integration of renewable energy sources into electric grids, which were originally designed to be supplied from centralized generation at large electric power stations. The development of distributed generation in electric grids influences the conditions of their operation, and a conflict of interests arises. The possibility of optimal functioning of electric grids and renewable energy sources is considered, where the complex optimality criterion is the balance reliability of electric energy in the local electric system together with minimum losses of electric energy in it. A multilevel automated system for power flow control in electric grids by means of changing the output of distributed generation is developed. Optimization of power flows is performed by local systems of automatic control of small hydropower stations and, where possible, solar power plants.

  19. SNP Arrays

    Directory of Open Access Journals (Sweden)

    Jari Louhelainen

    2016-10-01

    Full Text Available The papers published in this Special Issue "SNP arrays" (Single Nucleotide Polymorphism Arrays) focus on several perspectives associated with arrays of this type. The range of papers varies from a case report to reviews, thereby targeting wider audiences working in this field. The research focus of SNP arrays is often human cancers but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying the SNP arrays slightly differently. Finally, two other papers evaluate the use of the SNP arrays in the context of genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also to give guidelines for future applications of SNP arrays.

  20. Characterization of Slosh Damping for Ortho-Grid and Iso-Grid Internal Tank Structures

    Science.gov (United States)

    Westra, Douglas G.; Sansone, Marco D.; Eberhart, Chad J.; West, Jeffrey S.

    2016-01-01

    Grid stiffened tank structures such as Ortho-Grid and Iso-Grid are widely used in cryogenic tanks for providing stiffening to the tank while reducing mass, compared to tank walls of constant cross-section. If the structure is internal to the tank, it will positively affect the fluid dynamic behavior of the liquid propellant in regard to fluid slosh damping. As NASA and commercial companies endeavor to explore the solar system, vehicles will by necessity become more mass efficient, and design margin will be reduced where possible. Therefore, if the damping characteristics of the Ortho-Grid and Iso-Grid structures are understood, their positive damping effect can be taken into account in the systems design process. Historically, damping by internal structures has been characterized by rules of thumb and, for Ortho-Grid, empirical design tools intended for slosh baffles of much larger cross-section have been used. There is little or no information available to characterize the slosh behavior of Iso-Grid internal structure. Therefore, to take advantage of these structures for their positive damping effects, there is much need for obtaining additional data and tools to characterize them. Recently, the NASA Marshall Space Flight Center conducted both sub-scale testing and computational fluid dynamics (CFD) simulations of slosh damping for cylindrical Ortho-Grid and Iso-Grid tanks containing water. Enhanced grid meshing techniques were applied to the geometrically detailed and complex Ortho-Grid and Iso-Grid structures. The Loci-STREAM CFD program with the Volume of Fluid Method module for tracking and locating the water-air fluid interface was used to conduct the simulations. The CFD simulations were validated against the test data, and new empirical models for predicting the damping and frequency of Ortho-Grid and Iso-Grid structures were generated.

  1. A new automatic SAR-based flood mapping application hosted on the European Space Agency's grid processing on demand fast access to imagery environment

    Science.gov (United States)

    Hostache, Renaud; Chini, Marco; Matgen, Patrick; Giustarini, Laura

    2013-04-01

    There is a clear need for developing innovative processing chains based on earth observation (EO) data to generate products supporting emergency response and flood management at a global scale. Here an automatic flood mapping application is introduced. The latter is currently hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to deliver flooded areas using both recent and historical acquisitions of SAR data in an operational framework. It is worth mentioning that the method can be applied to both medium and high resolution SAR images. The flood mapping application consists of two main blocks: 1) A set of query tools for selecting the "crisis image" and the optimal corresponding pre-flood "reference image" from the G-POD archive. 2) An algorithm for extracting flooded areas using the previously selected "crisis image" and "reference image". The proposed method is a hybrid methodology, which combines histogram thresholding, region growing and change detection as an approach enabling the automatic, objective and reliable flood extent extraction from SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. Change detection with respect to a pre-flood reference image helps reducing over-detection of inundated areas. The algorithms are computationally efficient and operate with minimum data requirements, considering as input data a flood image and a reference image. Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate pre-flood reference image. Potential users will also be able to apply the implemented flood delineation algorithm. Case studies of several recent high magnitude flooding events (e.g. July 2007 Severn River flood

  2. Smart grid security

    Energy Technology Data Exchange (ETDEWEB)

    Cuellar, Jorge (ed.) [Siemens AG, Muenchen (Germany). Corporate Technology

    2013-11-01

    The engineering, deployment and security of the future smart grid will be an enormous project requiring the consensus of many stakeholders with different views on the security and privacy requirements, not to mention methods and solutions. The fragmentation of research agendas and proposed approaches or solutions for securing the future smart grid becomes apparent observing the results from different projects, standards, committees, etc, in different countries. The different approaches and views of the papers in this collection also witness this fragmentation. This book contains the following papers: 1. IT Security Architecture Approaches for Smart Metering and Smart Grid. 2. Smart Grid Information Exchange - Securing the Smart Grid from the Ground. 3. A Tool Set for the Evaluation of Security and Reliability in Smart Grids. 4. A Holistic View of Security and Privacy Issues in Smart Grids. 5. Hardware Security for Device Authentication in the Smart Grid. 6. Maintaining Privacy in Data Rich Demand Response Applications. 7. Data Protection in a Cloud-Enabled Smart Grid. 8. Formal Analysis of a Privacy-Preserving Billing Protocol. 9. Privacy in Smart Metering Ecosystems. 10. Energy rate at home Leveraging ZigBee to Enable Smart Grid in Residential Environment.

  3. electrode array

    African Journals Online (AJOL)

    PROF EKWUEME

    A geoelectric investigation employing vertical electrical soundings (VES) using the Ajayi - Makinde Two-Electrode array and the ... arrangements used in electrical D.C. resistivity survey. These include ..... Refraction Tomography to Study the.

  4. Fully transparent conformal organic thin-film transistor array and its application as LED front driving.

    Science.gov (United States)

    Cui, Nan; Ren, Hang; Tang, Qingxin; Zhao, Xiaoli; Tong, Yanhong; Hu, Wenping; Liu, Yichun

    2018-02-22

    A fully transparent conformal organic thin-film field-effect transistor array is demonstrated based on a photolithography-compatible ultrathin metallic grid gate electrode and a solution-processed C8-BTBT film. The resulting organic field-effect transistor array exhibits a high optical transparency of >80% over the visible spectrum, mobility up to 2 cm^2 V^-1 s^-1, on/off ratio of 10^5-10^6, switching current of >0.1 mA, and excellent light stability. The transparent conformal transistor array is demonstrated to adhere well to flat and curved LEDs as front driving. These results present promising applications of the solution-processed wide-bandgap organic semiconductor thin films in future large-scale transparent conformal active-matrix displays.

  5. Low-cost Solar Array Project. Feasibility of the Silane Process for Producing Semiconductor-grade Silicon

    Science.gov (United States)

    1979-01-01

    The feasibility of Union Carbide's silane process for commercial application was established. An integrated process design for an experimental process system development unit and a commercial facility were developed. The corresponding commercial plant economic performance was then estimated.

  6. Ian Bird, head of Grid development at CERN

    CERN Multimedia

    Patrice Loïez

    2003-01-01

    "The Grid enables us to harness the power of scientific computing centres wherever they may be to provide the most powerful computing resource the world has to offer," said Ian Bird, head of Grid development at CERN. The Grid is a new method of sharing processing power between computers in centres around the world.

  7. Economics of Wind Power when National Grids are Unreliable

    NARCIS (Netherlands)

    Kooten, van G.C.; Wong, L.

    2010-01-01

    Power interruptions are a typical characteristic of national grids in developing countries. Manufacturing, processing, refrigeration and other facilities that require a dependable supply of power, and might be considered a small grid within the larger national grid, employ diesel generators for

  8. LHC computing grid

    International Nuclear Information System (INIS)

    Novaes, Sergio

    2011-01-01

    Full text: We give an overview of the grid computing initiatives in the Americas. High-Energy Physics has played a very important role in the development of grid computing in the world and in Latin America it has not been different. Lately, the grid concept has expanded its reach across all branches of e-Science, and we have witnessed the birth of the first nationwide infrastructures and its use in the private sector. (author)

  9. Urban micro-grids

    International Nuclear Information System (INIS)

    Faure, Maeva; Salmon, Martin; El Fadili, Safae; Payen, Luc; Kerlero, Guillaume; Banner, Arnaud; Ehinger, Andreas; Illouz, Sebastien; Picot, Roland; Jolivet, Veronique; Michon Savarit, Jeanne; Strang, Karl Axel

    2017-02-01

    ENEA Consulting published the results of a study on urban micro-grids conducted in partnership with the Group ADP, the Group Caisse des Depots, ENEDIS, Omexom, Total and the Tuck Foundation. This study offers a vision of the definition of an urban micro-grid, the value brought by a micro-grid in different contexts based on real case studies, and the upcoming challenges that micro-grid stakeholders will face (regulation, business models, technology). The electric production and distribution system, as the backbone of an increasingly urbanized and energy dependent society, is urged to shift towards a more resilient, efficient and environment-friendly infrastructure. Decentralisation of electricity production into densely populated areas is a promising opportunity to achieve this transition. A micro-grid enhances local production through clustering electricity producers and consumers within a delimited electricity network; it has the ability to disconnect from the main grid for a limited period of time, offering an energy security service to its customers during grid outages, for example. However: the islanding capability is an inherent feature of the micro-grid concept that leads to a significant premium on electricity cost, especially in a system highly reliant on intermittent electricity production; in this case, a smart grid with local energy production and no islanding capability can be customized to meet the relevant sustainability and cost-savings goals at lower cost. For industrials, urban micro-grids can be economically profitable in the presence of a high share of reliable energy production and thermal energy demand. Micro-grids face strong regulatory challenges that should be overcome for further development. Whether or not islanding is implemented into the system, end-user demand for greener, more local, cheaper and more reliable energy, as well as for additional services to the grid, is a strong driver for local production and consumption. In some specific cases

  10. High density grids

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Aina E.; Baxter, Elizabeth L.

    2018-01-16

    An X-ray data collection grid device is provided that includes a magnetic base that is compatible with robotic sample mounting systems used at synchrotron beamlines, a grid element fixedly attached to the magnetic base, where the grid element includes at least one sealable sample window disposed through a planar synchrotron-compatible material, where the planar synchrotron-compatible material includes at least one automated X-ray positioning and fluid handling robot fiducial mark.

  11. Micro grids toward the smart grid

    International Nuclear Information System (INIS)

    Guerrero, J.

    2011-01-01

    Worldwide, electrical grids are expected to become smarter in the near future, and interest in microgrids is likely to grow. A microgrid can be defined as a part of the grid comprising prime energy movers, power electronics converters, distributed energy storage systems and local loads, which can operate autonomously but also interact with the main grid. Thus, the ability of intelligent microgrids to operate in island mode or connected to the grid will be a key point for coping with new functionalities and the integration of renewable energy resources. The functionalities expected from these small grids are: black start operation, frequency and voltage stability, active and reactive power flow control, active power filter capabilities, and storage energy management. In this presentation, a review of the main concepts related to flexible microgrids is introduced, with examples of real microgrids. AC and DC microgrids for integrating renewable and distributed energy resources are also presented, as well as distributed energy storage systems and standardization issues of these microgrids. Finally, microgrid hierarchical control is analyzed at three different levels: i) a primary control based on the droop method, including a virtual output impedance loop; ii) a secondary control, which enables restoring any deviations produced by the primary control; and iii) a tertiary control to manage the power flow between the microgrid and the external electrical distribution system.
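
    For reference, the primary droop control mentioned under i) is commonly written in its textbook form (a general form, not specific to this presentation) as

        f = f^{*} - k_p (P - P^{*}),        E = E^{*} - k_q (Q - Q^{*}),

    where f^{*} and E^{*} are the nominal frequency and voltage amplitude, P^{*} and Q^{*} the active and reactive power set-points, and k_p, k_q the droop gains; the virtual output impedance loop additionally subtracts a voltage drop Z_v(s) i_o from the voltage reference.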

  12. Process development for automated solar cell and module production. Task 4. Automated array assembly. Quarterly report No. 1

    Energy Technology Data Exchange (ETDEWEB)

    Hagerty, J. J.

    1980-10-15

    Work has been divided into five phases. The first phase is to modify existing hardware and controlling computer software to: (1) improve cell-to-cell placement accuracy, (2) improve the solder joint while reducing the amount of solder and flux smear on the cell's surface, and (3) reduce the system cycle time to 10 seconds. The second phase involves expanding the existing system's capabilities to be able to reject broken cells and make post-solder electrical tests. Phase 3 involves developing new hardware to allow for the automated encapsulation of solar modules. This involves three discrete pieces of hardware: (1) a vacuum platen end effector for the robot which allows it to pick up the 1' x 4' array of 35 inter-connected cells. With this, it can also pick up the cover glass and completed module, (2) a lamination preparation station which cuts the various encapsulation components from roll storage and positions them for encapsulation, and (3) an automated encapsulation chamber which interfaces with the above two and applies the heat and vacuum to cure the encapsulants. Phase 4 involves the final assembly of the encapsulated array into a framed, edge-sealed module completed for installation. For this we are using MBA's Glass Reinforced Concrete (GRC) in panels such as those developed by MBA for JPL under contract No. 955281. The GRC panel plays the multiple role of edge frame, substrate and mounting structure. An automated method of applying the edge seal will also be developed. The final phase (5) is the fabrication of six 1' x 4' electrically active solar modules using the above developed equipment. Progress is reported. (WHK)

  13. Determination of Rayleigh wave ellipticity using single-station and array-based processing of ambient seismic noise

    Science.gov (United States)

    Workman, Eli Joseph

    We present a single-station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 sec the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 sec, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (about 2%) and significantly higher (>20%), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e., Love waves, body waves, and tilt noise) in the single-station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single-station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves.
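
    A single-window, single-band version of the FDPA measurement can be sketched as follows; the dominance threshold, the phase tolerance and the synthetic test signal are assumptions, not the values used in the study.

        import numpy as np

        def rayleigh_hv(z, n, e, fs, f_lo, f_hi, phase_tol_deg=20.0):
            """Sketch of one FDPA measurement: H/V of the dominant polarization in one
            frequency band of one time window, accepted only when the motion looks
            Rayleigh-like (dominant singular value, ~90 deg vertical-horizontal phase)."""
            spec = np.fft.rfft(np.vstack([z, n, e]), axis=1)
            freqs = np.fft.rfftfreq(z.size, d=1.0 / fs)
            X = spec[:, (freqs >= f_lo) & (freqs < f_hi)]
            C = X @ X.conj().T / X.shape[1]                   # 3x3 spectral covariance matrix
            U, s, _ = np.linalg.svd(C)
            v = U[:, 0]                                       # principal polarization vector
            h = np.hypot(np.abs(v[1]), np.abs(v[2]))          # amplitude along the major horizontal axis
            dphi = np.degrees(np.angle(v[1] * np.conj(v[0]))) # N-Z phase as a crude proxy for H-Z phase
            if s[0] < 5.0 * s[1] or abs(abs(dphi) - 90.0) > phase_tol_deg:
                return None                                   # not Rayleigh-dominated in this bin
            return h / np.abs(v[0])                           # H/V ratio

        # Hypothetical test: synthetic Rayleigh-like motion at 0.125 Hz with H/V = 0.8
        fs = 20.0
        t = np.arange(0.0, 3600.0, 1.0 / fs)
        z = np.cos(2 * np.pi * 0.125 * t)
        n = 0.8 * np.sin(2 * np.pi * 0.125 * t)               # horizontal 90 deg out of phase with Z
        e = np.zeros_like(t)
        print(rayleigh_hv(z, n, e, fs, 0.1, 0.15))            # ~0.8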

  14. Modeling and Grid Generation of Iced Airfoils

    Science.gov (United States)

    Vickerman, Mary B.; Baez, Marivell; Braun, Donald C.; Hackenberg, Anthony W.; Pennline, James A.; Schilling, Herbert W.

    2007-01-01

    SmaggIce Version 2.0 is a software toolkit for geometric modeling and grid generation for two-dimensional, single- and multi-element, clean and iced airfoils. A previous version of SmaggIce was described in Preparing and Analyzing Iced Airfoils, NASA Tech Briefs, Vol. 28, No. 8 (August 2004), page 32. To recapitulate: Ice shapes make it difficult to generate quality grids around airfoils, yet these grids are essential for predicting ice-induced complex flow. This software efficiently creates high-quality structured grids with tools that are uniquely tailored for various ice shapes. SmaggIce Version 2.0 significantly enhances the previous version primarily by adding the capability to generate grids for multi-element airfoils. This version of the software is an important step in streamlining the aeronautical analysis of iced airfoils using computational fluid dynamics (CFD) tools. The user may prepare the ice shape, define the flow domain, decompose it into blocks, generate grids, modify/divide/merge blocks, and control grid density and smoothness. All these steps may be performed efficiently even for the difficult glaze and rime ice shapes. Providing the means to generate highly controlled grids near rough ice, the software includes the creation of a wrap-around block (called the "viscous sublayer block"), which is a thin, C-type block around the wake line and iced airfoil. For multi-element airfoils, the software makes use of grids that wrap around and fill in the areas between the viscous sublayer blocks for all elements that make up the airfoil. A scripting feature records the history of interactive steps, which can be edited and replayed later to produce other grids. Using this version of SmaggIce, ice shape handling and grid generation can become a practical engineering process, rather than a laborious research effort.

  15. Macedonian transmission grid capability and development

    International Nuclear Information System (INIS)

    Naumoski, K.; Achkoska, E.; Paunoski, A.

    2015-01-01

    The main task of the transmission grid is to guarantee the evacuation of electricity from production facilities and, at the same time, to supply electricity to all customers in a secure, reliable and high-quality manner. During the last years, the transmission grid has gone through a period of fast and important development, as a result of the implementation of renewable and new technologies and the creation of the internal European electricity market. For these reasons, the capacity of the existing grid needs to be upgraded, either by optimizing the existing infrastructure or by constructing new transmission projects. Among the various solutions for strengthening the grid, the one with the minimal investment expenses for construction is selected. While planning the national transmission grid, MEPSO planners apply multi-scenario analyses in order to handle all uncertainties, particularly in the forecasts of loads, production and exchange of electricity, the location and size of new power plants, hydrological conditions, the integration of renewable sources and the evolution of the electricity market. Visions for the development of the European transmission grid are also considered. Special attention in the development plan is paid to modelling the power systems in the region of South-Eastern Europe and covering a wider area of the regional transmission grid with simulations of various market transactions. The Macedonian transmission grid is developed to satisfy all requirements for electricity production/supply and transits, irrespective of which scenario is realized in the long term. The transmission development plan gives the road map for grid evolution from the short- and mid-term period towards long-term horizons (15-20 years ahead). While creating long-term visions, a big challenge facing transmission planners is the implementation of an NPP. The paper gives an overview of the planning process for the Macedonian transmission grid, comprising: definition of scenarios, planning methodology and assessment of

  16. Grid for Earth Science Applications

    Science.gov (United States)

    Petitdidier, Monique; Schwichtenberg, Horst

    2013-04-01

    Civil society at large has addressed many strong requirements to the Earth Science community, related in particular to natural and industrial risks, climate change and new energies. The main critical point is that, on the one hand, civil society and the public ask for certainties, i.e. precise values with small error ranges, concerning predictions at short, medium and long term in all domains; on the other hand, science can mainly answer only in terms of probability of occurrence. To improve the answers and/or decrease the uncertainties, (1) new observational networks have been deployed in order to obtain a better geographical coverage, and more accurate measurements have been carried out in key locations and aboard satellites. Following the OECD recommendations on the openness of research and public sector data, more and more data are available to academic organisations and SMEs; (2) new algorithms and methodologies have been developed to face the huge task of data processing and assimilation into simulations using new technologies and compute resources. Finally, our total knowledge about the complex Earth system is contained in models and measurements; how we put them together has to be managed cleverly. The technical challenge is to put together databases and computing resources to answer the ES challenges. However, all these applications are computationally very intensive. Different compute solutions are available and depend on the characteristics of the applications. One of them is the Grid, which is especially efficient for independent or embarrassingly parallel jobs related to statistical and parametric studies. Numerous applications in atmospheric chemistry, meteorology, seismology, hydrology, pollution, climate and biodiversity have been deployed successfully on the Grid. In order to fulfill the requirements of risk management, several prototype applications have been deployed using OGC (Open Geospatial Consortium) components with Grid middleware. The Grid has permitted via a huge number of runs to

  17. Analysis of turbine-grid interaction of grid-connected wind turbine using HHT

    Science.gov (United States)

    Chen, A.; Wu, W.; Miao, J.; Xie, D.

    2018-05-01

    This paper processes the output power of a grid-connected wind turbine with a denoising and extraction method based on the Hilbert-Huang transform (HHT) in order to discuss the turbine-grid interaction. First, the Empirical Mode Decomposition (EMD) and the Hilbert Transform (HT) are introduced in detail. Then, after decomposing the output power of the grid-connected wind turbine into a series of Intrinsic Mode Functions (IMFs), the energy ratio and power volatility are calculated to detect the unessential components. Meanwhile, combined with the vibration function of the turbine-grid interaction, data fitting of the instantaneous amplitude and phase of each IMF is implemented to extract characteristic parameters of the different interactions. Finally, utilizing measured data from actual parallel-operated wind turbines in China, this work accurately obtains the characteristic parameters of the turbine-grid interaction of the grid-connected wind turbine.
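
    In sketch form, the EMD/Hilbert processing chain described above looks like the following; it assumes the PyEMD package (EMD-signal) for the decomposition, and the synthetic one-minute power record merely stands in for the measured turbine output.

        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD            # assumption: the EMD-signal (PyEMD) package is installed

        # Hypothetical power record: slow trend, a 3 Hz oscillatory interaction, and noise
        fs = 50.0
        t = np.arange(0.0, 60.0, 1.0 / fs)
        power = (2.0 + 0.2 * np.sin(2 * np.pi * 0.05 * t)
                 + 0.05 * np.sin(2 * np.pi * 3.0 * t)
                 + 0.01 * np.random.default_rng(0).normal(size=t.size))

        imfs = EMD().emd(power)                                        # decompose into IMFs
        energy_ratio = np.sum(imfs ** 2, axis=1) / np.sum(imfs ** 2)   # used to flag unessential IMFs

        for k, imf in enumerate(imfs):
            analytic = hilbert(imf)                       # Hilbert transform of each IMF
            amp = np.abs(analytic)                        # instantaneous amplitude
            freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)   # instantaneous frequency
            print(f"IMF {k}: energy ratio {energy_ratio[k]:.2f}, "
                  f"mean freq {freq.mean():.2f} Hz, mean amplitude {amp.mean():.3f}")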

  18. Multigrid on unstructured grids using an auxiliary set of structured grids

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, C.C.; Malhotra, S.; Schultz, M.H. [Yale Univ., New Haven, CT (United States)

    1996-12-31

    Unstructured grids do not have a convenient and natural multigrid framework for actually computing and maintaining a high floating point rate on standard computers. In fact, just the coarsening process is expensive for many applications. Since unstructured grids play a vital role in many scientific computing applications, many modifications have been proposed to solve this problem. One suggested solution is to map the original unstructured grid onto a structured grid. This can be used as a fine grid in a standard multigrid algorithm to precondition the original problem on the unstructured grid. We show that unless extreme care is taken, this mapping can lead to a system with a high condition number which eliminates the usefulness of the multigrid method. Theorems with lower and upper bounds are provided. Simple examples show that the upper bounds are sharp.

  19. Replica consistency in a Data Grid

    International Nuclear Information System (INIS)

    Domenici, Andrea; Donno, Flavia; Pucciani, Gianni; Stockinger, Heinz; Stockinger, Kurt

    2004-01-01

    A Data Grid is a wide area computing infrastructure that employs Grid technologies to provide storage capacity and processing power to applications that handle very large quantities of data. Data Grids rely on data replication to achieve better performance and reliability by storing copies of data sets on different Grid nodes. When a data set can be modified by applications, the problem of maintaining consistency among existing copies arises. The consistency problem also concerns metadata, i.e., additional information about application data sets such as indices, directories, or catalogues. This kind of metadata is used both by the applications and by the Grid middleware to manage the data. For instance, the Replica Management Service (the Grid middleware component that controls data replication) uses catalogues to find the replicas of each data set. Such catalogues can also be replicated and their consistency is crucial to the correct operation of the Grid. Therefore, metadata consistency generally poses stricter requirements than data consistency. In this paper we report on the development of a Replica Consistency Service based on the middleware mainly developed by the European Data Grid Project. The paper summarises the main issues in the replica consistency problem, and lays out a high-level architectural design for a Replica Consistency Service. Finally, results from simulations of different consistency models are presented

  20. Fabrication of micro-dot arrays and micro-walls of acrylic acid/melamine resin on aluminum by AFM probe processing and electrophoretic coating

    International Nuclear Information System (INIS)

    Kurokawa, S.; Kikuchi, T.; Sakairi, M.; Takahashi, H.

    2008-01-01

    Micro-dot arrays and micro-walls of acrylic acid/melamine resin were fabricated on aluminum by anodizing, atomic force microscope (AFM) probe processing, and electrophoretic deposition. Barrier type anodic oxide films of 15 nm thickness were formed on aluminum and then the specimen was scratched with an AFM probe in a solution containing acrylic acid/melamine resin nano-particles to remove the anodic oxide film locally. After scratching, the specimen was anodically polarized to deposit acrylic acid/melamine resin electrophoretically at the film-removed area. The resin deposited on the specimen was finally cured by heating. It was found that scratching with the AFM probe on open circuit leads to the contamination of the probe with resin, due to positive shifts in the potential during scratching. Scratching of the specimen under potentiostatic conditions at -1.0 V, however, resulted in successful resin deposition at the film-removed area without probe contamination. The rate of resin deposition increased as the specimen potential becomes more positive during electrophoretic deposition. Arrays of resin dots with a few to several tens μm diameter and 100-1000 nm height, and resin walls with 100-1000 nm height and 1 μm width were obtained on specimens by successive anodizing, probe processing, and electrophoretic deposition

  1. Fabrication of micro-dot arrays and micro-walls of acrylic acid/melamine resin on aluminum by AFM probe processing and electrophoretic coating

    Energy Technology Data Exchange (ETDEWEB)

    Kurokawa, S.; Kikuchi, T.; Sakairi, M. [Graduate School of Engineering, Hokkaido University, N-13, W-8, Kita-Ku, Sapporo 060-8628 (Japan); Takahashi, H. [Graduate School of Engineering, Hokkaido University, N-13, W-8, Kita-Ku, Sapporo 060-8628 (Japan)], E-mail: takahasi@elechem1-mc.eng.hokudai.ac.jp

    2008-11-30

    Micro-dot arrays and micro-walls of acrylic acid/melamine resin were fabricated on aluminum by anodizing, atomic force microscope (AFM) probe processing, and electrophoretic deposition. Barrier type anodic oxide films of 15 nm thickness were formed on aluminum and then the specimen was scratched with an AFM probe in a solution containing acrylic acid/melamine resin nano-particles to remove the anodic oxide film locally. After scratching, the specimen was anodically polarized to deposit acrylic acid/melamine resin electrophoretically at the film-removed area. The resin deposited on the specimen was finally cured by heating. It was found that scratching with the AFM probe on open circuit leads to the contamination of the probe with resin, due to positive shifts in the potential during scratching. Scratching of the specimen under potentiostatic conditions at -1.0 V, however, resulted in successful resin deposition at the film-removed area without probe contamination. The rate of resin deposition increased as the specimen potential becomes more positive during electrophoretic deposition. Arrays of resin dots with a few to several tens μm diameter and 100-1000 nm height, and resin walls with 100-1000 nm height and 1 μm width were obtained on specimens by successive anodizing, probe processing, and electrophoretic deposition.

  2. Interoperability and HealthGRID.

    Science.gov (United States)

    Bescos, C; Schmitt, D; Kass, J; García-Barbero, M; Kantchev, P

    2005-01-01

    GRID technology, with initiatives like the GGF, will have the potential to allow both competition and interoperability not only among applications and toolkits, but also among implementations of key services. The pyramid of eHealth interoperability should be achieved from standards in communication and data security, storage and processing, to the policy initiatives, including organizational protocols, financing procedures, and legal framework. The open challenges for GRID use in clinical fields illustrate the potential of combining grid technologies with medical routine into a wider interoperable framework. The Telemedicine Alliance is a consortium (ESA, WHO and ITU), initiated in 2002, to build a vision for the provision of eHealth to European citizens by 2010. After a survey with more than 50 expert interviews, interoperability was identified as the main showstopper to eHealth implementation. There are already several groups and organizations contributing to standardization. TM-Alliance is supporting the "e-Health Standardization Coordination Group" (eHSCG). Now, in the design and development phase of GRID technology in Health, is the right moment to act with the aim of achieving an interoperable and open framework. The Health area should benefit from the initiatives started at the GGF in terms of global architecture and service definitions, as well as from the security and other web services applications developed under the Internet umbrella. There is a risk that existing important results of the standardization efforts in this area are not taken up simply because they are not always known.

  3. Filter arrays

    Science.gov (United States)

    Page, Ralph H.; Doty, Patrick F.

    2017-08-01

    The various technologies presented herein relate to a tiled filter array that can be used in connection with performance of spatial sampling of optical signals. The filter array comprises filter tiles, wherein a first plurality of filter tiles are formed from a first material, the first material being configured such that only photons having wavelengths in a first wavelength band pass therethrough. A second plurality of filter tiles is formed from a second material, the second material being configured such that only photons having wavelengths in a second wavelength band pass therethrough. The first plurality of filter tiles and the second plurality of filter tiles can be interspersed to form the filter array comprising an alternating arrangement of first filter tiles and second filter tiles.

  4. Concomitant GRID boost for Gamma Knife radiosurgery

    International Nuclear Information System (INIS)

    Ma Lijun; Kwok, Young; Chin, Lawrence S.; Simard, J. Marc; Regine, William F.

    2005-01-01

    We developed an integrated GRID boost technique for Gamma Knife radiosurgery. The technique generates an array of high dose spots within the target volume via a grid of 4-mm shots. These high dose areas were placed over a conventional Gamma Knife plan where a peripheral dose covers the full target volume. The beam weights of the 4-mm shots were optimized iteratively to maximize the integral dose inside the target volume. To investigate the target volume coverage and the dose to the adjacent normal brain tissue for the technique, we compared the GRID boosted treatment plans with conventional Gamma Knife treatment plans using physical and biological indices such as dose-volume histogram (DVH), DVH-derived indices, equivalent uniform dose (EUD), tumor control probabilities (TCP), and normal tissue complication probabilities (NTCP). We found significant increase in the target volume indices such as mean dose (5%-34%; average 14%), TCP (4%-45%; average 21%), and EUD (2%-22%; average 11%) for the GRID boost technique. No significant change in the peripheral dose coverage for the target volume was found per RTOG protocol. In addition, the EUD and the NTCP for the normal brain adjacent to the target (i.e., the near region) were decreased for the GRID boost technique. In conclusion, we demonstrated a new technique for Gamma Knife radiosurgery that can escalate the dose to the target while sparing the adjacent normal brain tissue
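
    The equivalent uniform dose (EUD) index used above can be computed from a differential dose-volume histogram with the standard generalized-EUD formula, EUD = (sum_i v_i D_i^a)^(1/a). The sketch below illustrates the calculation; the volume-effect parameter a and the toy DVH values are assumptions for illustration, not values from the study.

```python
# Generalized equivalent uniform dose from a differential DVH:
#   EUD = (sum_i v_i * D_i**a) ** (1/a),  v_i = fractional volume in dose bin D_i.
# The volume-effect parameter a and the toy DVH below are illustrative only.
import numpy as np

def eud(dose_bins, volumes, a):
    v = np.asarray(volumes, dtype=float)
    v /= v.sum()                                  # normalise to fractional volumes
    d = np.asarray(dose_bins, dtype=float)
    return float(np.sum(v * d ** a) ** (1.0 / a))

dose = [10, 12, 14, 16, 18, 20]                   # Gy, bin centres (toy values)
vol  = [0.05, 0.15, 0.30, 0.30, 0.15, 0.05]

# Negative a weights cold spots (tumour-like); large positive a weights hot spots.
print(f"EUD, tumour-like (a = -10):         {eud(dose, vol, -10):.2f} Gy")
print(f"EUD, serial normal tissue (a = 10): {eud(dose, vol, 10):.2f} Gy")
```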

  5. Migration of Monte Carlo simulation of high energy atmospheric showers to GRID infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez, Adolfo; Contreras, Jose Luis [Grupo de Altas EnergIas Departamento de Fisica Atomica, Molecular y Nuclear Universidad Complutense de Madrid Avenida Complutense s/n, 28040 Madrid - Spain (Spain); Calle, Ignacio de la; Ibarra, Aitor; Tapiador, Daniel, E-mail: avazquez@gae.ucm.e [INSA. IngenierIa y Servicios Aeroespaciales S.A. Paseo Pintor Rosales 34, 28008 Madrid - Spain (Spain)

    2010-04-01

    A system to run Monte Carlo simulations on a Grid environment is presented. The architectural design proposed uses the current resources of the MAGIC Virtual Organization on EGEE and can be easily generalized to support the simulation of any similar experiment, such as that of the future European planned project, the Cherenkov Telescope Array. The proposed system is based on a Client/Server architecture, and provides the user with a single access point to the simulation environment through a remote graphical user interface, the Client. The Client can be accessed via web browser, using web service technology, with no additional software installation on the user side required. The Server processes the user request and uses a database for both data catalogue and job management inside the Grid. The design, first production tests and lessons learned from the system will be discussed here.

  6. Migration of Monte Carlo simulation of high energy atmospheric showers to GRID infrastructure

    International Nuclear Information System (INIS)

    Vazquez, Adolfo; Contreras, Jose Luis; Calle, Ignacio de la; Ibarra, Aitor; Tapiador, Daniel

    2010-01-01

    A system to run Monte Carlo simulations on a Grid environment is presented. The architectural design proposed uses the current resources of the MAGIC Virtual Organization on EGEE and can be easily generalized to support the simulation of any similar experiment, such as that of the future European planned project, the Cherenkov Telescope Array. The proposed system is based on a Client/Server architecture, and provides the user with a single access point to the simulation environment through a remote graphical user interface, the Client. The Client can be accessed via web browser, using web service technology, with no additional software installation on the user side required. The Server processes the user request and uses a database for both data catalogue and job management inside the Grid. The design, first production tests and lessons learned from the system will be discussed here.

  7. GridOrbit public display

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie; Tabard, Aurélien; Bardram, Jakob

    2010-01-01

    We introduce GridOrbit, a public awareness display that visualizes the activity of a community grid used in a biology laboratory. This community grid executes bioinformatics algorithms and relies on users to donate CPU cycles to the grid. The goal of GridOrbit is to create a shared awareness about...

  8. Techniques for grid manipulation and adaptation. [computational fluid dynamics

    Science.gov (United States)

    Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.

    1992-01-01

    Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
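
    As a concrete example of the algebraic approach discussed above, the sketch below builds a structured 2D grid by transfinite interpolation from four boundary curves. This is the basic algebraic mapping that methods such as the control point form build upon (the control-point machinery itself is not shown), and the boundary curves are arbitrary illustrative choices.

```python
# Algebraic grid generation by transfinite interpolation (TFI) on a 2D block.
# TFI is the basic building block behind algebraic methods such as the control
# point form; the boundary curves used here are illustrative only.
import numpy as np

def tfi(bottom, top, left, right):
    """bottom/top: (ni,2) arrays; left/right: (nj,2) arrays; corners must match."""
    ni, nj = bottom.shape[0], left.shape[0]
    xi  = np.linspace(0.0, 1.0, ni)[:, None, None]   # shape (ni,1,1)
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]   # shape (1,nj,1)
    grid = ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
            + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
            - ((1 - xi) * (1 - eta) * bottom[0] + xi * (1 - eta) * bottom[-1]
               + (1 - xi) * eta * top[0] + xi * eta * top[-1]))
    return grid                                       # shape (ni, nj, 2)

ni, nj = 21, 11
s = np.linspace(0.0, 1.0, ni)
t = np.linspace(0.0, 1.0, nj)
bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)   # curved lower wall
top    = np.stack([s, np.ones_like(s)], axis=1)
left   = np.stack([np.zeros_like(t), t], axis=1)
right  = np.stack([np.ones_like(t), t], axis=1)
print(tfi(bottom, top, left, right).shape)                # (21, 11, 2)
```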

  9. The Smart Grid Impact on the Danish DSOs’ Business Model

    DEFF Research Database (Denmark)

    Ma, Zheng; Sommer, Simon; Jørgensen, Bo Nørregaard

    2016-01-01

    The transformation progress of the smart grid challenges the market players' business models. Among those market players are the Distribution System Operators (DSOs). This paper aims to elaborate how the smart grid influences the DSOs' business models with case studies of two Danish DSOs, Energi Fyn and TREFOR. The main findings indicate that the Danish smart grid transformation process influences the Danish DSOs' business models via four smart grid related factors: (1) smart meters, (2) Distributed Energy Resources (DERs), (3) bidirectional electricity flow, and (4) R&D. The results therefore show that the smart grid influences the Danish DSOs' business models incrementally rather than revolutionarily, and that the smart grid transformation of the Danish electricity grid is slower than the agenda of the official Danish smart grid development strategy.

  10. The LHCb Grid Simulation

    CERN Multimedia

    Baranov, Alexander

    2016-01-01

    The LHCb Grid access is based on the LHCbDirac system. It provides access to data and computational resources to researchers at different geographical locations. The Grid has a hierarchical topology with multiple sites distributed over the world. The sites differ from each other in their number of CPUs, amount of disk storage and connection bandwidth. These parameters are essential for the Grid work. Moreover, the job scheduling and data distribution strategy have a great impact on the grid performance. However, it is hard to choose an appropriate algorithm and strategies as they need a lot of time to be tested on the real grid. In this study, we describe the LHCb Grid simulator. The simulator reproduces the LHCb Grid structure with its sites and their number of CPUs, amount of disk storage and bandwidth connection. We demonstrate how well the simulator reproduces the grid work, and show its advantages and limitations. We show how well the simulator reproduces job scheduling and network anomalies, consider methods ...

  11. The play grid

    DEFF Research Database (Denmark)

    Fogh, Rune; Johansen, Asger

    2013-01-01

    In this paper we propose The Play Grid, a model for systemizing different play types. The approach is psychological by nature and the actual Play Grid is based, therefore, on two pairs of fundamental and widely acknowledged distinguishing characteristics of the ego, namely: extraversion vs. intro...

  12. Planning in Smart Grids

    NARCIS (Netherlands)

    Bosman, M.G.C.

    2012-01-01

    The electricity supply chain is changing, due to increasing awareness for sustainability and an improved energy efficiency. The traditional infrastructure where demand is supplied by centralized generation is subject to a transition towards a Smart Grid. In this Smart Grid, sustainable generation

  13. Gridded Species Distribution, Version 1: Global Amphibians Presence Grids

    Data.gov (United States)

    National Aeronautics and Space Administration — The Global Amphibians Presence Grids of the Gridded Species Distribution, Version 1 is a reclassified version of the original grids of amphibian species distribution...

  14. Functional response of osteoblasts in functionally gradient titanium alloy mesh arrays processed by 3D additive manufacturing.

    Science.gov (United States)

    Nune, K C; Kumar, A; Misra, R D K; Li, S J; Hao, Y L; Yang, R

    2017-02-01

    We elucidate here the osteoblast functions and cellular activity in the 3D printed interconnected porous architecture of functionally gradient Ti-6Al-4V alloy mesh structures in terms of cell proliferation and growth, distribution of cell nuclei, synthesis of proteins (actin, vinculin, and fibronectin), and calcium deposition. Cell culture studies with pre-osteoblasts indicated that the interconnected porous architecture of the functionally gradient mesh arrays was conducive to osteoblast functions. However, there were statistically significant differences in the cellular response depending on the pore size in the functionally gradient structure. The interconnected porous architecture contributed to the distribution of cells from the large pore size (G1) to the small pore size (G3), with consequent synthesis of extracellular matrix and calcium precipitation. The gradient mesh structure significantly impacted cell adhesion and influenced the proliferation stage, such that there was a high distribution of cells on the struts of the gradient mesh structure. Actin and vinculin showed a significant difference in the normalized expression level of protein per cell, which was absent in the case of fibronectin. Osteoblasts present on mesh struts formed a confluent sheet, bridging the pores through numerous cytoplasmic extensions. The gradient mesh structure fabricated by electron beam melting was explored to obtain fundamental insights into cellular activity with respect to osteoblast functions. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Supercritical CO2 extraction of candlenut oil: process optimization using Taguchi orthogonal array and physicochemical properties of the oil.

    Science.gov (United States)

    Subroto, Erna; Widjojokusumo, Edward; Veriansyah, Bambang; Tjandrawinata, Raymond R

    2017-04-01

    A series of experiments was conducted to determine optimum conditions for the supercritical carbon dioxide extraction of candlenut oil. A Taguchi experimental design with an L9 orthogonal array (four factors at three levels) was employed to evaluate the effects of pressure of 25-35 MPa, temperature of 40-60 °C, CO2 flow rate of 10-20 g/min and particle size of 0.3-0.8 mm on oil solubility. The results showed that increases in particle size, pressure and temperature improved the oil solubility. Supercritical carbon dioxide extraction at the optimized parameters resulted in an oil extraction yield of 61.4% at a solubility of 9.6 g oil/kg CO2. The candlenut oil obtained by supercritical carbon dioxide extraction has better quality than oil extracted by Soxhlet extraction using n-hexane. The oil contains a high proportion of unsaturated fatty acids (linoleic acid and linolenic acid), which have many beneficial effects on human health.
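
    A Taguchi L9 analysis of the kind described above reduces to averaging a response, or its signal-to-noise ratio, over the runs at each factor level. The sketch below shows this on the standard L9(3^4) layout; the factor names follow the abstract, but the response values are made-up placeholders rather than the study's measurements.

```python
# Sketch of a Taguchi L9 (3^4) analysis: larger-is-better signal-to-noise
# ratio per factor level. The L9 layout is the standard orthogonal array;
# the solubility values below are invented for illustration only.
import numpy as np

# Standard L9 orthogonal array: 9 runs x 4 factors, levels coded 0/1/2.
L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
               [1,0,1,2],[1,1,2,0],[1,2,0,1],
               [2,0,2,1],[2,1,0,2],[2,2,1,0]])
factors = ["pressure", "temperature", "CO2 flow", "particle size"]
y = np.array([5.1, 6.0, 6.8, 6.2, 7.4, 5.9, 7.0, 6.5, 8.1])  # toy responses

sn = 10 * np.log10(y ** 2)        # larger-is-better S/N for a single replicate
for j, name in enumerate(factors):
    level_sn = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(level_sn))
    print(f"{name:>13}: S/N per level = {np.round(level_sn, 2)}, best level = {best}")
```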

  16. Global Inventory and Analysis of Smart Grid Demonstration Projects

    Energy Technology Data Exchange (ETDEWEB)

    Mulder, W.; Kumpavat, K.; Faasen, C.; Verheij, F.; Vaessen, P [DNV KEMA Energy and Sustainability, Arnhem (Netherlands)

    2012-10-15

    As the key enabler of a more sustainable, economical and reliable energy system, the development of smart grids has received a great deal of attention in recent times. In many countries around the world the benefits of such a system have begun to be investigated through a number of demonstration projects. With such a vast array of projects it can be difficult to keep track of changes, and to understand which best practices are currently available with regard to smart grids. This report aims to address these issues through providing a comprehensive outlook on the current status of smart grid projects worldwide.

  17. Grid generation methods

    CERN Document Server

    Liseikin, Vladimir D

    2017-01-01

    This new edition provides a description of current developments relating to grid methods, grid codes, and their applications to actual problems. Grid generation methods are indispensable for the numerical solution of differential equations. Adaptive grid-mapping techniques, in particular, are the main focus and represent a promising tool to deal with systems with singularities. This 3rd edition includes three new chapters on numerical implementations (10), control of grid properties (11), and applications to mechanical, fluid, and plasma related problems (13). Also the other chapters have been updated including new topics, such as curvatures of discrete surfaces (3). Concise descriptions of hybrid mesh generation, drag and sweeping methods, parallel algorithms for mesh generation have been included too. This new edition addresses a broad range of readers: students, researchers, and practitioners in applied mathematics, mechanics, engineering, physics and other areas of applications.

  18. Design optimization of grid-connected PV inverters

    DEFF Research Database (Denmark)

    Koutroulis, Eftichios; Blaabjerg, Frede

    2011-01-01

    The DC/AC inverters are the key elements in grid-connected PV energy production systems. In this paper, new design optimization techniques focused on transformerless (very high efficiency) PV inverters are proposed. They have been developed based on an analysis of the deficiencies of the current, state-of-the-art PV inverter design technology, which limits the amount of PV energy supplied into the electric grid. The influences of the electric grid regulations and standards and of the PV array operational characteristics on the design of grid-connected PV inverters have also been considered. The simulation results verify that the proposed optimization techniques enable the maximization of the PV energy injected into the electric grid by the optimized PV installation.

  19. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    Science.gov (United States)

    Goldman, H.; Wolf, M.

    1979-01-01

    Analyses of slicing processes and junction formation processes are presented. A simple method for evaluation of the relative economic merits of competing process options with respect to the cost of energy produced by the system is described. An energy consumption analysis was developed and applied to determine the energy consumption in the solar module fabrication process sequence, from the mining of the SiO2 to shipping. The analysis shows that current technology practice involves inordinate energy use in the purification step and large wastage of the invested energy through losses, particularly poor conversion in slicing, as well as inadequate yields throughout. The cell process energy expenditures already show a downward trend based on increased throughput rates. Larger improvements, however, depend on the introduction of a more efficient purification process and of acceptable ribbon growing techniques.

  20. Array Detector Modules for Spent Fuel Verification

    Energy Technology Data Exchange (ETDEWEB)

    Bolotnikov, Aleksey

    2018-05-07

    Brookhaven National Laboratory (BNL) proposes to evaluate arrays of position-sensitive virtual Frisch-grid (VFG) detectors for passive gamma-ray emission tomography (ET) to verify the spent fuel in storage casks before storing them in geo-repositories. Our primary objective is to conduct a preliminary analysis of the arrays' capabilities and to perform field measurements to validate the effectiveness of the proposed array modules. The outcome of this proposal will consist of baseline designs for the future ET system, which can ultimately be used together with neutron detectors. This will demonstrate the use of this technology for spent fuel storage casks.

  1. Proposal for grid computing for nuclear applications

    International Nuclear Information System (INIS)

    Faridah Mohamad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim; Zukhaimira Zolkapli

    2013-01-01

    Full-text: The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process. (author)

  2. Programmable cellular arrays. Faults testing and correcting in cellular arrays

    International Nuclear Information System (INIS)

    Cercel, L.

    1978-03-01

    A review of recent research on programmable cellular arrays in computing and digital information processing systems is presented. It covers both combinational and sequential arrays, either with fully arbitrary behaviour or providing better implementations of specialized blocks such as arithmetic units, counters, comparators, control systems, and memory blocks. The paper also presents applications of cellular arrays in microprogramming, in the implementation of a specialized computer for matrix operations, and in the modeling of universal computing systems. The last section deals with problems of fault testing and correction in cellular arrays. (author)

  3. Decentral Smart Grid Control

    Science.gov (United States)

    Schäfer, Benjamin; Matthiae, Moritz; Timme, Marc; Witthaut, Dirk

    2015-01-01

    Stable operation of complex flow and transportation networks requires balanced supply and demand. For the operation of electric power grids—due to their increasing fraction of renewable energy sources—a pressing challenge is to fit the fluctuations in decentralized supply to the distributed and temporally varying demands. To achieve this goal, common smart grid concepts suggest to collect consumer demand data, centrally evaluate them given current supply and send price information back to customers for them to decide about usage. Besides restrictions regarding cyber security, privacy protection and large required investments, it remains unclear how such central smart grid options guarantee overall stability. Here we propose a Decentral Smart Grid Control, where the price is directly linked to the local grid frequency at each customer. The grid frequency provides all necessary information about the current power balance such that it is sufficient to match supply and demand without the need for a centralized IT infrastructure. We analyze the performance and the dynamical stability of the power grid with such a control system. Our results suggest that the proposed Decentral Smart Grid Control is feasible independent of effective measurement delays, if frequencies are averaged over sufficiently large time intervals.
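
    The core feedback loop of such a decentral scheme can be illustrated with a toy single-bus model in which the price signal tracks the local frequency deviation and elastic demand responds to it, effectively adding damping. This is a deliberately simplified first-order sketch with assumed parameters, not the networked swing-equation model analysed in the paper.

```python
# Toy single-bus sketch of decentral smart grid control: the price signal is
# tied to the local frequency deviation and elastic demand follows the price,
# which acts as extra damping on the frequency. First-order dynamics and all
# parameter values are assumptions, not the paper's network model.

M, D = 10.0, 0.5           # inertia and intrinsic damping (arbitrary units)
gamma = 2.0                # demand response gain (price sensitivity)
P_gen, P_base = 1.0, 1.2   # fixed generation vs. baseline demand (a shortage)

dt, steps = 0.01, 3000
omega = 0.0                # frequency deviation from nominal (50/60 Hz)
for _ in range(steps):
    # Shortage -> omega < 0 -> high price -> demand drops below its baseline.
    P_demand = P_base + gamma * omega
    omega += dt * (P_gen - P_demand - D * omega) / M

print(f"steady-state deviation with control:    {omega:+.3f}")
print(f"steady-state deviation without control: {(P_gen - P_base) / D:+.3f}")
```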

  4. Decentral Smart Grid Control

    International Nuclear Information System (INIS)

    Schäfer, Benjamin; Matthiae, Moritz; Timme, Marc; Witthaut, Dirk

    2015-01-01

    Stable operation of complex flow and transportation networks requires balanced supply and demand. For the operation of electric power grids—due to their increasing fraction of renewable energy sources—a pressing challenge is to fit the fluctuations in decentralized supply to the distributed and temporally varying demands. To achieve this goal, common smart grid concepts suggest to collect consumer demand data, centrally evaluate them given current supply and send price information back to customers for them to decide about usage. Besides restrictions regarding cyber security, privacy protection and large required investments, it remains unclear how such central smart grid options guarantee overall stability. Here we propose a Decentral Smart Grid Control, where the price is directly linked to the local grid frequency at each customer. The grid frequency provides all necessary information about the current power balance such that it is sufficient to match supply and demand without the need for a centralized IT infrastructure. We analyze the performance and the dynamical stability of the power grid with such a control system. Our results suggest that the proposed Decentral Smart Grid Control is feasible independent of effective measurement delays, if frequencies are averaged over sufficiently large time intervals. (paper)

  5. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    Science.gov (United States)

    Wolf, M.

    1981-01-01

    The effect of solar cell metallization pattern design on solar cell performance and the costs and performance effects of different metallization processes are discussed. Definitive design rules for the front metallization pattern for large area solar cells are presented. Chemical and physical deposition processes for metallization are described and compared. An economic evaluation of the 6 principal metallization options is presented. Instructions for preparing Format A cost data for solar cell manufacturing processes from UPPC forms for input into the SAMIC computer program are presented.

  6. Comprehensive Smart Grid Planning in a Regulated Utility Environment

    Science.gov (United States)

    Turner, Matthew; Liao, Yuan; Du, Yan

    2015-06-01

    This paper presents the tools and exercises used during the Kentucky Smart Grid Roadmap Initiative in a collaborative electric grid planning process involving state regulators, public utilities, academic institutions, and private interest groups. The mandate of the initiative was to assess the existing condition of smart grid deployments in Kentucky, to enhance understanding of smart grid concepts by stakeholders, and to develop a roadmap for the deployment of smart grid technologies by the jurisdictional utilities of Kentucky. Through involvement of many important stakeholder groups, the resultant Smart Grid Deployment Roadmap proposes an aggressive yet achievable strategy and timetable designed to promote enhanced availability, security, efficiency, reliability, affordability, sustainability and safety of the electricity supply throughout the state while maintaining Kentucky's nationally competitive electricity rates. The models and methods developed for this exercise can be utilized as a systematic process for the planning of coordinated smart grid deployments.

  7. Array Databases: Agile Analytics (not just) for the Earth Sciences

    Science.gov (United States)

    Baumann, P.; Misev, D.

    2015-12-01

    Gridded data, such as images, image timeseries, and climate datacubes, are today managed separately from the metadata, and with different, restricted retrieval capabilities. While databases are good at metadata modelled in tables, XML hierarchies, or RDF graphs, they traditionally do not support multi-dimensional arrays. This gap is being closed by Array Databases, pioneered by the scalable rasdaman ("raster data manager") array engine. Its declarative query language, rasql, extends SQL with array operators which are optimized and parallelized on the server side. Installations can easily be mashed up securely, thereby enabling large-scale location-transparent query processing in federations. Domain experts value the integration with their commonly used tools, leading to a quick learning curve. Earth, Space, and Life sciences, but also Social sciences as well as business, have massive amounts of data and complex analysis challenges that are answered by rasdaman. As of today, rasdaman is mature and in operational use on hundreds of Terabytes of timeseries datacubes, with transparent query distribution across more than 1,000 nodes. Additionally, its concepts have shaped international Big Data standards in the field, including the forthcoming array extension to ISO SQL, many of which are in the meantime supported by both open-source and commercial systems. In the geo field, rasdaman is the reference implementation for the Open Geospatial Consortium (OGC) Big Data standard, WCS, now also under adoption by ISO. Further, rasdaman is in the final stage of OSGeo incubation. In this contribution we present array queries à la rasdaman, describe the architecture and novel optimization and parallelization techniques introduced in 2015, and put this in the context of the intercontinental EarthServer initiative which utilizes rasdaman for enabling agile analytics on Petascale datacubes.
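
    The style of operation such an array engine evaluates on the server side, trimming a datacube to a region and condensing it along one axis, can be sketched in plain NumPy for illustration; the datacube below is synthetic and rasql's actual SQL-style syntax is not reproduced here.

```python
# The style of datacube operation an array engine evaluates server-side:
# trim a gridded timeseries to a spatial window, then condense (aggregate)
# along the time axis. Plain NumPy is used purely for illustration; the
# datacube is synthetic and rasql syntax is not reproduced.
import numpy as np

rng = np.random.default_rng(42)
cube = rng.standard_normal((120, 180, 360)).astype(np.float32)  # time x lat x lon

region = cube[:, 60:120, 100:200]     # "trim": subset along the spatial axes
climatology = region.mean(axis=0)     # "condense": aggregate over the time axis
anomaly = region[-1] - climatology    # band arithmetic on the last time slice

print(climatology.shape, anomaly.shape)   # (60, 100) (60, 100)
```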

  8. Low Cost Solar Array Project. Feasibility of the silane process for producing semiconductor-grade silicon. Final report, October 1975-March 1979

    Energy Technology Data Exchange (ETDEWEB)

    1979-06-01

    The commercial production of low-cost semiconductor-grade silicon is an essential requirement of the JPL/DOE (Department of Energy) Low-Cost Solar Array (LSA) Project. A 1000-metric-ton-per-year commercial facility using the Union Carbide Silane Process will produce molten silicon for an estimated price of $7.56/kg (1975 dollars, private financing), meeting the DOE goal of less than $10/kg. Conclusions and technology status are reported for both contract phases, which had the following objectives: (1) establish the feasibility of Union Carbide's Silane Process for commercial application, and (2) develop an integrated process design for an Experimental Process System Development Unit (EPSDU) and a commercial facility, and estimate the corresponding commercial plant economic performance. To assemble the facility design, the following work was performed: (a) collection of Union Carbide's applicable background technology; (b) design, assembly, and operation of a small integrated silane-producing Process Development Unit (PDU); (c) analysis, testing, and comparison of two high-temperature methods for converting pure silane to silicon metal; and (d) determination of chemical reaction equilibria and kinetics, and vapor-liquid equilibria for chlorosilanes.

  9. The open science grid

    International Nuclear Information System (INIS)

    Pordes, R.

    2004-01-01

    The U.S. LHC Tier-1 and Tier-2 laboratories and universities are developing production Grids to support LHC applications running across a worldwide Grid computing system. Together with partners in computer science, physics grid projects and active experiments, we will build a common national production grid infrastructure which is open in its architecture, implementation and use. The Open Science Grid (OSG) model builds upon the successful approach of last year's joint Grid2003 project. The Grid3 shared infrastructure has for over eight months provided significant computational resources and throughput to a range of applications, including ATLAS and CMS data challenges, SDSS, LIGO, and biology analyses, and computer science demonstrators and experiments. To move towards LHC-scale data management, access and analysis capabilities, we must increase the scale, services, and sustainability of the current infrastructure by an order of magnitude or more. Thus, we must achieve a significant upgrade in its functionalities and technologies. The initial OSG partners will build upon a fully usable, sustainable and robust grid. Initial partners include the US LHC collaborations, DOE and NSF Laboratories and Universities and Trillium Grid projects. The approach is to federate with other application communities in the U.S. to build a shared infrastructure open to other sciences and capable of being modified and improved to respond to needs of other applications, including CDF, D0, BaBar, and RHIC experiments. We describe the application-driven, engineered services of the OSG, short term plans and status, and the roadmap for a consortium, its partnerships and national focus

  10. Analysis and evaluation of process and equipment in tasks 2 and 4 of the Low Cost Solar Array project

    Science.gov (United States)

    Goldman, H.; Wolf, M.

    1978-01-01

    Several experimental and projected Czochralski crystal growing process methods were studied and compared to available operations and cost-data of recent production Cz-pulling, in order to elucidate the role of the dominant cost contributing factors. From this analysis, it becomes apparent that substantial cost reductions can be realized from technical advancements which fall into four categories: an increase in furnace productivity; the reduction of crucible cost through use of the crucible for the equivalent of multiple state-of-the-art crystals; the combined effect of several smaller technical improvements; and a carry over effect of the expected availability of semiconductor grade polysilicon at greatly reduced prices. A format for techno-economic analysis of solar cell production processes was developed, called the University of Pennsylvania Process Characterization (UPPC) format. The accumulated Cz process data are presented.

  11. Process research of non-cz silicon material. Low cost solar array project, cell and module formation research area

    Science.gov (United States)

    1982-01-01

    Liquid diffusion masks and liquid applied dopants to replace the CVD Silox masking and gaseous diffusion operations specified for forming junctions in the Westinghouse baseline process sequence for producing solar cells from dendritic web silicon were investigated.

  12. Physicists Get INSPIREd: INSPIRE Project and Grid Applications

    International Nuclear Information System (INIS)

    Klem, Jukka; Iwaszkiewicz, Jan

    2011-01-01

    INSPIRE is the new high-energy physics scientific information system developed by CERN, DESY, Fermilab and SLAC. INSPIRE combines the curated and trusted contents of SPIRES database with Invenio digital library technology. INSPIRE contains the entire HEP literature with about one million records and in addition to becoming the reference HEP scientific information platform, it aims to provide new kinds of data mining services and metrics to assess the impact of articles and authors. Grid and cloud computing provide new opportunities to offer better services in areas that require large CPU and storage resources including document Optical Character Recognition (OCR) processing, full-text indexing of articles and improved metrics. D4Science-II is a European project that develops and operates an e-Infrastructure supporting Virtual Research Environments (VREs). It develops an enabling technology (gCube) which implements a mechanism for facilitating the interoperation of its e-Infrastructure with other autonomously running data e-Infrastructures. As a result, this creates the core of an e-Infrastructure ecosystem. INSPIRE is one of the e-Infrastructures participating in D4Science-II project. In the context of the D4Science-II project, the INSPIRE e-Infrastructure makes available some of its resources and services to other members of the resulting ecosystem. Moreover, it benefits from the ecosystem via a dedicated Virtual Organization giving access to an array of resources ranging from computing and storage resources of grid infrastructures to data and services.

  13. Performance testing framework for smart grid communication network

    International Nuclear Information System (INIS)

    Quang, D N; See, O H; Chee, L L; Xuen, C Y; Karuppiah, S

    2013-01-01

    A smart grid communication network is comprised of different communication media and technologies. Performance evaluation is one of the main concerns in a smart grid communication system. In any smart grid communication implementation, testing of the end-to-end process flow is required to determine the performance of the network. Therefore, an effective and coordinated testing procedure plays a crucial role in evaluating the performance of smart grid communications. In this paper, a testing framework is proposed as a guideline to analyze and assess the performance of a smart grid communication network.

  14. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  15. Transmission grid security

    CERN Document Server

    Haarla, Liisa; Hirvonen, Ritva; Labeau, Pierre-Etienne

    2011-01-01

    In response to the growing importance of power system security and reliability, "Transmission Grid Security" proposes a systematic and probabilistic approach for transmission grid security analysis. The analysis presented uses probabilistic safety assessment (PSA) and takes into account the power system dynamics after severe faults. In the method shown in this book the power system states (stable, not stable, system breakdown, etc.) are connected with the substation reliability model. In this way it is possible to: estimate the system-wide consequences of grid faults; identify a chain of eve

  16. Direction-of-Arrival Estimation with Coarray ESPRIT for Coprime Array.

    Science.gov (United States)

    Zhou, Chengwei; Zhou, Jinfang

    2017-08-03

    A coprime array is capable of achieving more degrees-of-freedom for direction-of-arrival (DOA) estimation than a uniform linear array when utilizing the same number of sensors. However, existing algorithms exploiting coprime array usually adopt predefined spatial sampling grids for optimization problem design or include spectrum peak search process for DOA estimation, resulting in the contradiction between estimation performance and computational complexity. To address this problem, we introduce the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) to the coprime coarray domain, and propose a novel coarray ESPRIT-based DOA estimation algorithm to efficiently retrieve the off-grid DOAs. Specifically, the coprime coarray statistics are derived according to the received signals from a coprime array to ensure the degrees-of-freedom (DOF) superiority, where a pair of shift invariant uniform linear subarrays is extracted. The rotational invariance of the signal subspaces corresponding to the underlying subarrays is then investigated based on the coprime coarray covariance matrix, and the incorporation of ESPRIT in the coarray domain makes it feasible to formulate the closed-form solution for DOA estimation. Theoretical analyses and simulation results verify the efficiency and the effectiveness of the proposed DOA estimation algorithm.
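
    The rotational-invariance step at the heart of ESPRIT can be illustrated on an ordinary uniform linear array, as in the sketch below. The coprime coarray construction that the paper adds on top is omitted, and the array size, source directions and SNR are arbitrary demonstration values.

```python
# Standard ESPRIT on a uniform linear array, illustrating the rotational-
# invariance step that the paper transfers to the coprime coarray domain
# (the coarray construction itself is omitted). All scenario parameters
# below are assumptions for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
M, d = 8, 0.5                              # sensors, spacing in wavelengths
doas = np.deg2rad([-20.0, 35.0])
K, snapshots, snr_db = len(doas), 500, 10

# Simulate narrowband snapshots: X = A S + N
A = np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(doas)[None, :])
S = rng.standard_normal((K, snapshots)) + 1j * rng.standard_normal((K, snapshots))
N = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
X = A @ S + N * 10 ** (-snr_db / 20)

# Signal subspace from the sample covariance matrix
R = X @ X.conj().T / snapshots
eigval, eigvec = np.linalg.eigh(R)
Es = eigvec[:, -K:]                        # K dominant eigenvectors

# Rotational invariance between the two shifted subarrays
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
phases = np.angle(np.linalg.eigvals(Phi))
est = np.rad2deg(np.arcsin(phases / (2 * np.pi * d)))
print("estimated DOAs (deg):", np.sort(est))   # close to [-20, 35]
```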

  17. FAULT TOLERANCE IN MOBILE GRID COMPUTING

    OpenAIRE

    Aghila Rajagopal; M.A. Maluk Mohamed

    2014-01-01

    This paper proposes a novel model for a Surrogate Object based paradigm in a mobile grid environment for achieving fault tolerance. The Mobile Grid Computing Model focuses on the service composition and resource sharing processes. Fault recovery plays a vital role in increasing the performance of the system. For the recovery point in our proposed system, a Surrogate Object based Checkpoint Recovery Model is introduced. This Checkpoint Recovery Model depends on the Surrogate Object and the Fau...

  18. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Background: Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the computing capacity of a single institution. Results: This survey reviews the latest grid technologies from the viewpoints of the computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput, real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation to share tacit knowledge within a community. Conclusion: Extending the concept of the grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  19. Interactive volume visualization of general polyhedral grids

    KAUST Repository

    Muigg, Philipp

    2011-12-01

    This paper presents a novel framework for visualizing volumetric data specified on complex polyhedral grids, without the need to perform any kind of a priori tetrahedralization. These grids are composed of polyhedra that often are non-convex and have an arbitrary number of faces, where the faces can be non-planar with an arbitrary number of vertices. The importance of such grids in state-of-the-art simulation packages is increasing rapidly. We propose a very compact, face-based data structure for representing such meshes for visualization, called two-sided face sequence lists (TSFSL), as well as an algorithm for direct GPU-based ray-casting using this representation. The TSFSL data structure is able to represent the entire mesh topology in a 1D TSFSL data array of face records, which facilitates the use of efficient 1D texture accesses for visualization. In order to scale to large data sizes, we employ a mesh decomposition into bricks that can be handled independently, where each brick is then composed of its own TSFSL array. This bricking enables memory savings and performance improvements for large meshes. We illustrate the feasibility of our approach with real-world application results, by visualizing highly complex polyhedral data from commercial state-of-the-art simulation packages. © 2011 IEEE.
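
    The two-sided, face-based idea can be pictured with a much-simplified sketch in which each face record stores its vertex loop and the cell on either side, so that marching from cell to cell only touches face records. This mirrors the concept only; the paper's actual 1D TSFSL array layout and GPU ray-caster are not reproduced.

```python
# Highly simplified illustration of a two-sided, face-based mesh representation:
# every face record stores its (possibly non-planar) vertex loop plus the cell
# on each side, so a ray marching from cell to cell only touches face records.
# This mirrors the idea behind the paper's TSFSL, not its 1D GPU texture layout.
from dataclasses import dataclass

@dataclass
class Face:
    vertices: tuple        # indices into a shared vertex array
    cell_front: int        # cell id on the front side (-1 = boundary)
    cell_back: int         # cell id on the back side  (-1 = boundary)

    def other_side(self, cell: int) -> int:
        return self.cell_back if cell == self.cell_front else self.cell_front

# Two polyhedral cells (0 and 1) sharing face 2; -1 marks the outer boundary.
faces = [
    Face((0, 1, 2, 3), cell_front=0, cell_back=-1),
    Face((4, 5, 6, 7), cell_front=1, cell_back=-1),
    Face((1, 2, 6, 5), cell_front=0, cell_back=1),    # shared interior face
]
cell_faces = {0: [0, 2], 1: [1, 2]}   # per-cell list of face indices

# A ray currently inside cell 0 exits through face 2 and enters the next cell.
exit_face = 2
print("next cell:", faces[exit_face].other_side(0))   # -> 1
```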

  20. Grid for Mesoamerican Archaeology

    International Nuclear Information System (INIS)

    Lucet, G.

    2007-01-01

    Mesoamerican archaeology works with large amounts of dispersed and diverse information, hence the importance of adopting new methods that optimise the acquisition, conservation, retrieval, and analysis of data to generate knowledge more efficiently and create a better understanding of history. Further, this information --which includes texts, coordinates, raster graphics, and vector graphics-- comes from a considerable geographical area --parts of Mexico, Nicaragua, Honduras and Costa Rica as well as Guatemala, El Salvador and Belize-- and is constantly expanding. This information includes elements like shards, buildings, mural paintings, high and low reliefs, topography, maps, and information about the fauna and soil. Grid computing offers a solution to handle all this information: it respects researchers' need for independence while supplying a platform to share, process and compare the data obtained. Additionally, the Grid can enhance space-time analyses with remote visualisation techniques that can, in turn, incorporate geographical information systems and virtual reality. (Author)

  1. Tomographic array

    International Nuclear Information System (INIS)

    1976-01-01

    The configuration of a tomographic array in which the object can rotate about its axis is described. The X-ray detector is a cylindrical screen perpendicular to the axis of rotation. The X-ray source has a line-shaped focus coinciding with the axis of rotation. The beam is fan-shaped with one side of this fan lying along the axis of rotation. The detector screen is placed inside an X-ray image multiplier tube

  2. Tomographic array

    International Nuclear Information System (INIS)

    1976-01-01

    A tomographic array with the following characteristics is described. An X-ray screen serving as the detector is placed before a photomultiplier tube, which itself is placed in front of a television camera connected to a set of image processors. The detector is concave towards the source and is replaceable. Different images of the object are obtained simultaneously. Optical fibers and lenses are used for transmission within the system.

  3. Low cost solar array project cell and module formation research area: Process research of non-CZ silicon material

    Science.gov (United States)

    1981-01-01

    Liquid diffusion masks and liquid applied dopants to replace the CVD Silox masking and gaseous diffusion operations specified for forming junctions in the Westinghouse baseline process sequence for producing solar cells from dendritic web silicon were investigated. The baseline diffusion masking and drive processes were compared with those involving direct liquid applications to the dendritic web silicon strips. Attempts were made to control the number of variables by subjecting dendritic web strips cut from a single web crystal to both types of operations. Data generated reinforced earlier conclusions that efficiency levels at least as high as those achieved with the baseline back junction formation process can be achieved using liquid diffusion masks and liquid dopants. The deliveries of dendritic web sheet material and solar cells specified by the current contract were made as scheduled.

  4. The Adoption of Grid Computing Technology by Organizations: A Quantitative Study Using Technology Acceptance Model

    Science.gov (United States)

    Udoh, Emmanuel E.

    2010-01-01

    Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…

  5. Effect of thermal implying during ageing process of nanorods growth on the properties of zinc oxide nanorod arrays

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, A. S., E-mail: kyrin-samaxi@yahoo.com; Mamat, M. H., E-mail: mhmamat@salam.uitm.edu.my; Rusop, M., E-mail: rusop@salam.uitm.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia); NANO-SciTech Centre (NST), Institute of Science (IOS), Universiti Teknologi MARA - UiTM, 40450 Shah Alam, Selangor (Malaysia); Malek, M. F., E-mail: firz-solarzelle@yahoo.com; Abdullah, M. A. R., E-mail: ameerridhwan89@gmail.com; Sin, M. D., E-mail: diyana0366@johor.uitm.edu.my [NANO-ElecTronic Centre (NET), Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor (Malaysia)

    2016-07-06

    Undoped and Sn-doped zinc oxide (ZnO) nanostructures have been fabricated using a simple sol-gel immersion method at a growth temperature of 95°C. Heat from a hot plate stirrer was supplied to the solution during the ageing stage of nanorod growth. The results showed a significant decrease in the quality of the layers produced after the immersion process, with the conductivity and porosity of the samples reduced significantly due to the applied heat. The structural properties of the samples were characterized using field emission scanning electron microscopy (FESEM), and the electrical properties were characterized using current-voltage (I-V) measurements.

  6. National transmission grid study

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, Spencer [USDOE Office of the Secretary of Energy, Washington, DC (United States)

    2003-05-31

    The National Energy Policy Plan directed the U.S. Department of Energy (DOE) to conduct a study to examine the benefits of establishing a national electricity transmission grid and to identify transmission bottlenecks and measures to address them. DOE began by conducting an independent analysis of U.S. electricity markets and identifying transmission system bottlenecks using DOE’s Policy Office Electricity Modeling System (POEMS). DOE’s analysis, presented in Section 2, confirms the central role of the nation’s transmission system in lowering costs to consumers through increased trade. More importantly, DOE’s analysis also confirms the results of previous studies, which show that transmission bottlenecks and related transmission system market practices are adding hundreds of millions of dollars to consumers’ electricity bills each year. A more detailed technical overview of the use of POEMS is provided in Appendix A. DOE led an extensive, open, public input process and heard a wide range of comments and recommendations that have all been considered.1 More than 150 participants registered for three public workshops held in Detroit, MI (September 24, 2001); Atlanta, GA (September 26, 2001); and Phoenix, AZ (September 28, 2001).

  7. Lincoln Laboratory Grid

    Data.gov (United States)

    Federal Laboratory Consortium — The Lincoln Laboratory Grid (LLGrid) is an interactive, on-demand parallel computing system that uses a large computing cluster to enable Laboratory researchers to...

  8. CMS computing on grid

    International Nuclear Information System (INIS)

    Guan Wen; Sun Gongxing

    2007-01-01

    CMS has adopted a distributed system of services which implements the CMS application view on top of Grid services. An overview of CMS services is given, with emphasis on CMS data management and workload management. (authors)

  9. Technology Roadmaps: Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    The development of Technology Roadmaps: Smart Grids -- which the IEA defines as an electricity network that uses digital and other advanced technologies to monitor and manage the transport of electricity from all generation sources to meet the varying electricity demands of end users -- is essential if the global community is to achieve shared goals for energy security, economic development and climate change mitigation. Unfortunately, existing misunderstandings of exactly what smart grids are and the physical and institutional complexity of electricity systems make it difficult to implement smart grids on the scale that is needed. This roadmap sets out specific steps needed over the coming years to achieve milestones that will allow smart grids to deliver a clean energy future.

  10. World Wide Grid

    CERN Multimedia

    Grätzel von Grätz, Philipp

    2007-01-01

    Whether for genetic risk analysis or 3D reconstruction of the cerebral vessels: modern medicine requires ever more computing power. With a grid infrastructure, this power can be called up from the network as needed. (4 pages)

  11. Spacer grid corner gusset

    International Nuclear Information System (INIS)

    Larson, J.G.

    1984-01-01

    There is provided a spacer grid for a bundle of longitudinally extending rods in spaced generally parallel relationship comprising spacing means for holding the rods in spaced generally parallel relationship; the spacing means includes at least one exterior grid strip circumscribing the bundle of rods along the periphery thereof; with at least one exterior grid strip having a first edge defining the boundary of the strip in one longitudinal direction and a second edge defining the boundary of the strip in the other longitudinal direction; with at least one exterior grid strip having at least one band formed therein parallel to the longitudinal direction; a plurality of corner gussets truncating each of a plurality of corners formed by at least one band and the first edge and the second edge

  12. Smart grids - French Expertise

    International Nuclear Information System (INIS)

    2015-11-01

    The adaptation of electrical systems is the focus of major work worldwide. Bringing electricity to new territories, modernizing existing electricity grids, implementing energy efficiency policies and deploying renewable energies, developing new uses for electricity, introducing electric vehicles - these are the challenges facing a multitude of regions and countries. Smart Grids are the result of the convergence of electrical systems technologies with information and communications technologies. They play a key role in addressing the above challenges. Smart Grid development is a major priority for both public and private-sector actors in France. The experience of French companies has grown with the current French electricity system, a system that already shows extensive levels of 'intelligence', efficiency and competitiveness. French expertise also leverages substantial competence in terms of 'systems engineering', and can provide a tailored response to meet all sorts of needs. French products and services span all the technical and commercial building blocks that make up the Smart Grid value chain. They address the following issues: Improving the use and valuation of renewable energies and decentralized means of production, by optimizing the balance between generation and consumption. Strengthening the intelligence of the transmission and distribution grids: developing 'Supergrid', digitizing substations in transmission networks, and automating the distribution grids are the focus of a great many projects designed to reinforce the 'self-healing' capacity of the grid. Improving the valuation of decentralized flexibilities: this involves, among others, deploying smart meters, reinforcing active energy efficiency measures, and boosting consumers' contribution to grid balancing, via practices such as demand response which implies the aggregation of flexibility among residential, business, and/or industrial sites. Addressing current technological challenges, in

  13. US National Grid

    Data.gov (United States)

    Kansas Data Access and Support Center — This is a polygon feature data layer of United States National Grid (1000m x 1000m polygons ) constructed by the Center for Interdisciplinary Geospatial Information...

  14. Controlling smart grid adaptivity

    NARCIS (Netherlands)

    Toersche, Hermen; Nykamp, Stefan; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2012-01-01

    Methods are discussed for planning oriented smart grid control to cope with scenarios with limited predictability, supporting an increasing penetration of stochastic renewable resources. The performance of these methods is evaluated with simulations using measured wind generation and consumption

  15. Grid Computing Education Support

    Energy Technology Data Exchange (ETDEWEB)

    Steven Crumb

    2008-01-15

    The GGF Student Scholar program gave GGF the opportunity to bring over sixty qualified graduate and undergraduate students with interests in grid technologies to its three annual events over the three-year program.

  16. Development of a Post-Processing Algorithm for Accurate Human Skull Profile Extraction via Ultrasonic Phased Arrays

    Science.gov (United States)

    Al-Ansary, Mariam Luay Y.

    Ultrasound Imaging has been favored by clinicians for its safety, affordability, accessibility, and speed compared to other imaging modalities. However, the trade-offs to these benefits are a relatively lower image quality and interpretability, which can be addressed by, for example, post-processing methods. One particularly difficult imaging case is associated with the presence of a barrier, such as a human skull, with significantly different acoustical properties than the brain tissue as the target medium. Some methods were proposed in the literature to account for this structure if the skull's geometry is known. Measuring the skull's geometry is therefore an important task that requires attention. In this work, a new edge detection method for accurate human skull profile extraction via post-processing of ultrasonic A-Scans is introduced. This method, referred to as the Selective Echo Extraction algorithm, SEE, processes each A-Scan separately and determines the outermost and innermost boundaries of the skull by means of adaptive filtering. The method can also be used to determine the average attenuation coefficient of the skull. When applied to simulated B-Mode images of the skull profile, promising results were obtained. The profiles obtained from the proposed process in simulations were found to be within 0.15λ ± 0.11λ or 0.09 ± 0.07 mm from the actual profiles. Experiments were also performed to test SEE on skull mimicking phantoms with major acoustical properties similar to those of the actual human skull. With experimental data, the profiles obtained with the proposed process were within 0.32λ ± 0.25λ or 0.19 ± 0.15 mm from the actual profile.
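
    A drastically simplified stand-in for this kind of echo picking is shown below: a synthetic A-scan is reduced to its Hilbert envelope, and the first and last threshold crossings are taken as the outer and inner skull echoes. The adaptive filtering of the actual SEE algorithm is not reproduced, and every signal parameter (pulse shape, noise level, sound speed, threshold) is an assumption.

```python
# Illustrative echo picking on a synthetic ultrasonic A-scan: take the Hilbert
# envelope and keep the first and last samples above a threshold as the outer
# and inner boundary echoes. This is only a simplified stand-in for the SEE
# algorithm's adaptive filtering; all signal parameters are assumptions.
import numpy as np
from scipy.signal import hilbert

fs, f0 = 50e6, 2.5e6                      # sampling rate and pulse centre frequency
t = np.arange(0, 40e-6, 1 / fs)

def echo(t0, amp):                        # Gaussian-windowed tone burst at time t0
    return amp * np.exp(-((t - t0) / 0.4e-6) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

rng = np.random.default_rng(1)
ascan = echo(12e-6, 1.0) + echo(18e-6, 0.4) + 0.03 * rng.standard_normal(t.size)

envelope = np.abs(hilbert(ascan))
above = np.flatnonzero(envelope > 0.2 * envelope.max())
t_outer, t_inner = t[above[0]], t[above[-1]]

c_bone = 2800.0                           # assumed sound speed in skull bone, m/s
print(f"outer echo at {t_outer*1e6:.2f} us, inner echo at {t_inner*1e6:.2f} us")
print(f"implied thickness ≈ {c_bone * (t_inner - t_outer) / 2 * 1e3:.2f} mm")
```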

  17. ONIX results: Comparison of grid geometry (BATMAN - ELISE - flat grid)

    Science.gov (United States)

    Revel, Adrien; Mochalskyy, Serhiy; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu

    2017-08-01

    The 3D PIC-MCC code ONIX is dedicated to the modelling of negative hydrogen or deuterium ion extraction and the co-extracted electrons from the plasma in radio-frequency driven sources. The extraction process depends strongly on the plasma characteristics close to the plasma grid, where it is difficult to obtain experimental data, and ONIX brings valuable insights into the plasma behavior in this area. In the code, the numerical treatment of the boundaries has been improved in order to describe with more accuracy the potential and the electric field in this vicinity. The computation time has been reduced by a factor of 2 and the parallelization efficiency has been greatly improved. The influence of the magnetic field in BATMAN on the plasma behavior has been investigated by comparing two different configurations of the magnet bars producing the filter field (internal magnets: x = 3 cm; external magnets: x = 9 cm). A flat grid geometry for the PG, instead of the usual conical grid geometry, has been studied to evaluate its impact on the extracted current, especially for the negative ions emitted from the surface of the PG. Finally, the ONIX code has been used for the first 3D PIC calculations ever performed for the ELISE experiment.

  18. Smart grid for comfort; Smart grid voor comfort

    Energy Technology Data Exchange (ETDEWEB)

    Zeiler, W.; Van der Velden, J.A.J. [Kropman, Rijswijk (Netherlands); Vissers, D.R.; Maaijen, H.N. [Faculteit Bouwkunde, Technische Universiteit Eindhoven TUE, Eindhoven (Netherlands); Kling, W.L. [Faculteit Electrical Engineering, Technische Universiteit Eindhoven TUE, Eindhoven (Netherlands); Larsen, J.P. [Sense Observation Systems, Rotterdam (Netherlands)

    2012-04-15

    A new control strategy was developed based on the application of a wireless sensor network connected to a smart grid, to investigate whether it is possible to save energy at the user level while maintaining, or even improving, individual comfort. Using different scenarios for individual comfort and energy consumption, agents provide the steering of the process control. This forms the basis of a new approach to optimizing energy consumption, whose effect can then be used at the level of the residential building to optimize the interaction with the electrical infrastructure, the smart grid. [Dutch] Research is being conducted into a new control strategy based on the application of a wireless sensor network coupled to the smart grid. The aim of this control strategy is to save energy at the user level while maintaining or even improving individual comfort. Different scenarios exist for individual comfort and the energy use of equipment, with agents providing the control. In this way, the core of the energy demand is optimized. Its effect down to the level of the residential building and the coupling with the external electricity grid can subsequently be optimized.

  19. Summary of the fuel rod support system (grids) design for LWBR (LWBR development program)

    International Nuclear Information System (INIS)

    Richardson, K.D.

    1979-02-01

    Design features of the fuel rod support system (grids) for the Light Water Breeder Reactor (LWBR) installed in the Shippingport Atomic Power Station, Shippingport, Pennsylvania, are described. The grids are fabricated from AM-350 stainless steel and provide lateral support of the fuel rods in the three regions (seed, blanket, and reflector) of the reactor. A comparison is made of the LWBR grids, whose cells are arranged in triangular-pitched arrays, with rod support systems employed in commercial light water reactors

  20. Beyond grid security

    International Nuclear Information System (INIS)

    Hoeft, B; Epting, U; Koenig, T

    2008-01-01

    While many fields relevant to Grid security are already covered by existing working groups, their remit rarely goes beyond the scope of the Grid infrastructure itself. However, security issues pertaining to the internal set-up of compute centres have at least as much impact on Grid security. Thus, this talk will briefly present the EU ISSeG project (Integrated Site Security for Grids). In contrast to groups such as OSCT (Operational Security Coordination Team) and JSPG (Joint Security Policy Group), the purpose of ISSeG is to provide a holistic approach to security for Grid computer centres, from strategic considerations to an implementation plan and its deployment. The generalised methodology of Integrated Site Security (ISS) is based on the knowledge gained during its implementation at several sites as well as through security audits, and this will be briefly discussed. Several examples of ISS implementation tasks at the Forschungszentrum Karlsruhe will be presented, including segregation of the network for administration and maintenance and the implementation of Application Gateways. Furthermore, the web-based ISSeG training material will be introduced. This aims to offer ISS implementation guidance to other Grid installations in order to help avoid common pitfalls.

  1. Buckling behavior analysis of spacer grid by lateral impact load

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Kang, Heung Seok; Kim, Hyung Kyu; Song, Kee Nam

    2000-05-01

    The spacer grid is one of the main structural components in the fuel assembly, which supports the fuel rods, guides cooling water, and protects the system from external impact loads such as earthquakes. Therefore, the mechanical and structural properties of the spacer grids must be extensively examined while designing them. In this report, free-fall shock tests on several kinds of spacer grid specimens were carried out in order to compare the results among the candidate grids; the test is accomplished by dropping a free-fall carriage onto the specimen. In addition, a finite element method for predicting the critical impact strength of the spacer grids is described. FE analyses of the buckling behavior of the spacer grids are performed for various array sizes of the grids, considering that the spacer grid is an assembled structure of thin-walled plates and imposing proper boundary conditions, by nonlinear dynamic impact analysis using the ABAQUS/Explicit code. The simulated results similarly predicted the local buckling phenomena and were found to give good correspondence with the shock test results.

  2. Low cost solar array project. Cell and module formation research area. Process research of non-CZ silicon material

    Science.gov (United States)

    1983-01-01

    Liquid diffusion masks and liquid dopants were investigated to replace the more expensive CVD SiO2 mask and gaseous diffusion processes. Silicon pellets were prepared in the silicon shot tower, and solar cells were fabricated from web growth in which the pellets were used as replenishment material. Verification runs were made using the boron dopant and liquid diffusion mask materials; the average efficiency of cells produced in these runs was 13%. The relationship of sheet resistivity, temperature, gas flows, and gas composition for the diffusion of the P-8 liquid phosphorus solution was investigated. Solar cells processed from web grown from Si shot material were evaluated, and the results qualified the use of the material produced in the shot tower as web furnace feedstock.

  3. Silicon materials task of the Low Cost Solar Array Project: Effect of impurities and processing on silicon solar cells

    Science.gov (United States)

    Hopkins, R. H.; Davis, J. R.; Rohatgi, A.; Hanes, M. H.; Rai-Choudhury, P.; Mollenkopf, H. C.

    1982-01-01

    The effects of impurities and processing on the characteristics of silicon and terrestrial silicon solar cells were defined in order to develop cost-benefit relationships for the use of cheaper, less pure solar grades of silicon. The concentrations of commonly encountered impurities that can be tolerated in typical p- or n-base solar cells were established, and then a preliminary analytical model was developed from which the cell performance could be projected depending on the kinds and amounts of contaminants in the silicon base material. The impurity data base was expanded to include construction materials, and the impurity performance model was refined to account for additional effects such as base resistivity, grain boundary interactions, thermal processing, synergic behavior, and nonuniform impurity distributions. A preliminary assessment of the long-term (aging) behavior of impurities was also undertaken.

  4. The Benefits of Grid Networks

    Science.gov (United States)

    Tennant, Roy

    2005-01-01

    In the article, the author talks about the benefits of grid networks. In speaking of grid networks the author is referring to both networks of computers and networks of humans connected together in a grid topology. Examples are provided of how grid networks are beneficial today and the ways in which they have been used.

  5. Chemical analysis of raw and processed Fructus arctii by high-performance liquid chromatography/diode array detection-electrospray ionization-mass spectrometry

    Science.gov (United States)

    Qin, Kunming; Liu, Qidi; Cai, Hao; Cao, Gang; Lu, Tulin; Shen, Baojia; Shu, Yachun; Cai, Baochang

    2014-01-01

    Background: In traditional Chinese medicine (TCM), raw and processed herbs are used to treat different diseases. Fructus Arctii, the dried fruits of Arctium lappa L. (Compositae), is widely used in TCM. Stir-frying is the most common processing method, which might modify the chemical composition of Fructus Arctii. Materials and Methods: To test this hypothesis, we focused on the analysis and identification of the main chemical constituents in raw and processed Fructus Arctii (PFA) by high-performance liquid chromatography/diode array detection-electrospray ionization-mass spectrometry. Results: The results indicated that there was less arctiin in stir-fried materials than in raw materials; however, there were higher levels of arctigenin in stir-fried materials than in raw materials. Conclusion: We suggest that the arctiin content decreased significantly owing to its thermal conversion to arctigenin. In conclusion, this finding may shed some light on understanding the differences in the therapeutic values of raw versus PFA in TCM. PMID:25422559

  6. Smart Grid Integration Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Troxell, Wade [Colorado State Univ., Fort Collins, CO (United States)

    2011-12-22

    The initial federal funding for the Colorado State University Smart Grid Integration Laboratory is through a Congressionally Directed Project (CDP), DE-OE0000070 Smart Grid Integration Laboratory. The original program requested funding in three one-year increments for staff acquisition, curriculum development, and instrumentation, all of which will benefit the Laboratory. This report focuses on the initial phase of staff acquisition, which was directed and administered by DOE NETL/West Virginia under Project Officer Tom George. Using this CDP funding, we have developed the leadership and intellectual capacity for the SGIC. This was accomplished by investing in (hiring) a core team of Smart Grid Systems engineering faculty focused on education, research, and innovation of a secure and smart grid infrastructure. The Smart Grid Integration Laboratory will be housed with the separately funded Integrid Laboratory as part of CSU's overall Smart Grid Integration Center (SGIC). The period of performance of this grant was 10/1/2009 to 9/30/2011, which included one no-cost extension due to time delays in faculty hiring. The Smart Grid Integration Laboratory's focus is to build foundations to help graduate and undergraduate students acquire systems engineering knowledge; conduct innovative research; and team externally with smart grid organizations. The results of the separately funded Smart Grid Workforce Education Workshop (May 2009), sponsored by the City of Fort Collins, Northern Colorado Clean Energy Cluster, Colorado State University Continuing Education, Spirae, and Siemens, have been used to guide the hiring of faculty and the program curriculum and education plan. This project develops faculty leaders with the intellectual capacity to inspire its students to become leaders that substantially contribute to the development and maintenance of Smart Grid infrastructure through topics such as: (1) Distributed energy systems modeling and control; (2) Energy and power conversion; (3

  7. Modular and efficient ozone systems based on massively parallel chemical processing in microchannel plasma arrays: performance and commercialization

    Science.gov (United States)

    Kim, M.-H.; Cho, J. H.; Park, S.-J.; Eden, J. G.

    2017-08-01

    Plasmachemical systems based on the production of a specific molecule (O3) in literally thousands of microchannel plasmas simultaneously have been demonstrated, developed and engineered over the past seven years, and commercialized. At the heart of this new plasma technology is the plasma chip, a flat aluminum strip fabricated by photolithographic and wet chemical processes and comprising 24-48 channels, micromachined into nanoporous aluminum oxide, with embedded electrodes. By integrating 4-6 chips into a module, the mass output of an ozone microplasma system is scaled linearly with the number of modules operating in parallel. A 115 g/hr (2.7 kg/day) ozone system, for example, is realized by the combined output of 18 modules comprising 72 chips and 1,800 microchannels. The implications of this plasma processing architecture for scaling ozone production capability, and reducing capital and service costs when introducing redundancy into the system, are profound. In contrast to conventional ozone generator technology, microplasma systems operate reliably (albeit with reduced output) in ambient air and humidity levels up to 90%, a characteristic attributable to the water adsorption/desorption properties and electrical breakdown strength of nanoporous alumina. Extensive testing has documented chip and system lifetimes (MTBF) beyond 5,000 hours, and efficiencies >130 g/kWh when oxygen is the feedstock gas. Furthermore, the weight and volume of microplasma systems are a factor of 3-10 lower than those for conventional ozone systems of comparable output. Massively-parallel plasmachemical processing offers functionality, performance, and commercial value beyond that afforded by conventional technology, and is currently in operation in more than 30 countries worldwide.
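
    The scaling figures quoted above can be checked with simple arithmetic. The short script below derives per-module, per-chip and per-channel throughput and the implied electrical power draw from the system numbers in the abstract; the per-unit values are computed here for illustration and are not reported in the record.

```python
# Back-of-the-envelope check of the scaling figures quoted above.
modules, chips, channels = 18, 72, 1800   # quoted system composition
output_g_per_hr = 115.0                   # quoted ozone output (2.7 kg/day)
efficiency_g_per_kwh = 130.0              # quoted lower bound with O2 feedstock

print(f"per module : {output_g_per_hr / modules:.1f} g/h")            # ~6.4 g/h
print(f"per chip   : {output_g_per_hr / chips:.2f} g/h")              # ~1.6 g/h
print(f"per channel: {1000 * output_g_per_hr / channels:.0f} mg/h")   # ~64 mg/h
print(f"power at 130 g/kWh: {output_g_per_hr / efficiency_g_per_kwh:.2f} kW")  # ~0.9 kW
```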

  8. The CMS integration grid testbed

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Gregory E.

    2004-08-26

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Grid-wide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent-based MonALISA. Domain-specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two-month span in the fall of 2002, over 1 million official CMS GEANT-based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.

  9. Home Area Networks and the Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Clements, Samuel L.; Carroll, Thomas E.; Hadley, Mark D.

    2011-04-01

    With the wide array of home area network (HAN) options being presented as solutions to smart grid challenges for the home, it is time to compare and contrast their strengths and weaknesses. This white paper examines leading and emerging HAN technologies. The emergence of the smart grid is bringing more networking players into the field. The need for low, consistent bandwidth usage differs enough from the traditional information technology world to open the door to new technologies. The predominant players currently consist of a blend of the old and new. Within the wired world, Ethernet and HomePlug Green PHY are leading the way, with an advantage to HomePlug because it doesn't require installing new wires. In the wireless realm there are many more competitors, but WiFi and ZigBee seem to have the most momentum.

  10. Grid today, clouds on the horizon

    Science.gov (United States)

    Shiers, Jamie

    2009-04-01

    By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 K, and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, like the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of "Cloud Computing", are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements from production application communities, in terms of stability and continuity in the medium to long term.

  11. Mini-grid Policy Tool-kit. Policy and business frameworks for successful mini-grid roll-outs

    International Nuclear Information System (INIS)

    Franz, Michael; Hayek, Niklas; Peterschmidt, Nico; Rohrer, Michael; Kondev, Bozhil; Adib, Rana; Cader, Catherina; Carter, Andrew; George, Peter; Gichungi, Henry; Hankins, Mark; Kappiah, Mahama; Mangwengwende, Simbarashe E.

    2014-01-01

    The Mini-grid Policy Tool-kit is for policy makers to navigate the mini-grid policy design process. It contains information on mini-grid operator models, the economics of mini-grids, and the necessary policy and regulation that must be considered for successful implementation. The publication specifically focuses on Africa. Progress on extending the electricity grid in many countries has remained slow because of the high costs of grid extension and limited utility/state budgets for electrification. Mini-grids provide an affordable and cost-effective option to extend needed electricity services. Putting in place the right policy for mini-grid deployment requires considerable effort but can yield significant improvements in electricity access rates, as examples from Kenya, Senegal and Tanzania illustrate. The tool-kit is available in English, French and Portuguese.

  12. Preliminary thermal analysis of grids for twin source extraction system

    International Nuclear Information System (INIS)

    Pandey, Ravi; Bandyopadhyay, Mainak; Chakraborty, Arun K.

    2017-01-01

    The TWIN (Two driver based Indigenously built Negative ion source) source provides a bridge between the operational single-driver-based negative ion source test facility ROBIN at IPR and an ITER-type multi-driver-based ion source. The source is designed to be operated in CW mode with 180 kW, 1 MHz and a 5 s ON/600 s OFF duty cycle, and also in 5 Hz modulation mode with a 3 s ON/20 s OFF duty cycle for 3 such cycles. The TWIN source comprises an ion source sub-assembly (consisting of the driver and plasma box) and an extraction system sub-assembly. The extraction system consists of the plasma grid (PG), extraction grid (EG) and ground grid (GG) sub-assemblies. The plasma grid, at which the negative ion beams are produced and which faces the plasma side of the ion source, will receive a moderate heat flux, whereas the extraction grid and ground grid will receive the majority of the heat flux from the extracted negative ion and co-extracted electron beams. The entire co-extracted electron beam would be dumped on the extraction grid by means of an electron-deflection magnetic field, making the thermal and hydraulic design of the extraction grid critical. All three grids are made of OFHC copper and will be actively water cooled, keeping the peak temperature rise of the grid surface within the allowable limit with optimum uniformity. All the grids are to be made by a vacuum brazing process, where joint strength becomes crucial at elevated temperature; the hydraulic design must maintain the peak temperature at the brazing joints within an acceptable limit.
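
    A first-order check of the kind of hydraulic sizing mentioned above is the bulk coolant temperature rise, delta_T = P / (m_dot * c_p). The sketch below inverts this relation to estimate the water flow needed to keep the rise within a limit; the heat load and temperature limit used are illustrative placeholders, not TWIN source design values.

```python
def required_flow_kg_per_s(heat_load_w, delta_t_limit_k, cp_j_per_kg_k=4186.0):
    """Water mass flow that keeps the bulk coolant temperature rise at the given limit."""
    return heat_load_w / (cp_j_per_kg_k * delta_t_limit_k)

P_eg = 50e3   # hypothetical heat load dumped on the extraction grid [W]
dT = 10.0     # allowed bulk coolant temperature rise [K]
mdot = required_flow_kg_per_s(P_eg, dT)
print(f"required flow: {mdot:.2f} kg/s (~{mdot * 60:.0f} l/min of water)")
```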

  13. Quality Assurance Framework for Mini-Grids

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, Ian [National Renewable Energy Lab. (NREL), Golden, CO (United States); Burman, Kari [National Renewable Energy Lab. (NREL), Golden, CO (United States); Singh, Mohit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Esterly, Sean [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mutiso, Rose [US Department of Energy, Washington, DC (United States); McGregor, Caroline [US Department of Energy, Washington, DC (United States)

    2016-11-01

    Providing clean and affordable energy services to the more than 1 billion people globally who lack access to electricity is a critical driver for poverty reduction, economic development, improved health, and social outcomes. More than 84% of populations without electricity are located in rural areas where traditional grid extension may not be cost-effective; therefore, distributed energy solutions such as mini-grids are critical. To address some of the root challenges of providing safe, quality, and financially viable mini-grid power systems to remote customers, the U.S. Department of Energy (DOE) teamed with the National Renewable Energy Laboratory (NREL) to develop a Quality Assurance Framework (QAF) for isolated mini-grids. The QAF for mini-grids aims to address some root challenges of providing safe, quality, and affordable power to remote customers via financially viable mini-grids through two key components: (1) Levels of service: Defines a standard set of tiers of end-user service and links them to technical parameters of power quality, power availability, and power reliability. These levels of service span the entire energy ladder, from basic energy service to high-quality, high-reliability, and high-availability service (often considered 'grid parity'); (2) Accountability and performance reporting framework: Provides a clear process of validating power delivery by providing trusted information to customers, funders, and/or regulators. The performance reporting protocol can also serve as a robust monitoring and evaluation tool for mini-grid operators and funding organizations. The QAF will provide a flexible alternative to rigid top-down standards for mini-grids in energy access contexts, outlining tiers of end-user service and linking them to relevant technical parameters. In addition, data generated through implementation of the QAF will provide the foundation for comparisons across projects, assessment of impacts, and greater confidence that

  14. GridCom, Grid Commander: graphical interface for Grid jobs and data management

    International Nuclear Information System (INIS)

    Galaktionov, V.V.

    2011-01-01

    GridCom is a software package for automating access to the resources (jobs and data) of the distributed Grid system. The client part, executed in the form of Java applets, implements Web-interface access to the Grid through standard browsers. The executive part, Lexor (LCG Executor), is started by the user on a UI (User Interface) machine and carries out the Grid operations.

  15. Fabrication of three-dimensional ordered nanodot array structures by a thermal dewetting method

    International Nuclear Information System (INIS)

    Li Zhenxing; Yoshino, Masahiko; Yamanaka, Akinori

    2012-01-01

    A new fabrication method for three-dimensional nanodot arrays with low cost and high throughput is developed in this paper. In this process, firstly a 2D nanodot array is fabricated by combination of top-down and bottom-up approaches. A nanoplastic forming technique is utilized as the top-down approach to fabricate a groove grid pattern on an Au layer deposited on a substrate, and self-organization by thermal dewetting is employed as the bottom-up approach. On the first-layer nanodot array, SiO2 is deposited as a spacer layer. Au is then deposited on the spacer layer and thermal dewetting is conducted to fabricate a second-layer nanodot array. The effective parameters influencing dot formation on the second layer, including Au layer thickness and SiO2 layer thickness, are studied. It is demonstrated that a 3D nanodot array of good vertical alignment is obtained by repeating the SiO2 deposition, Au deposition and thermal dewetting. The mechanism of the dot agglomeration process is studied based on geometrical models. The effects of the spacer layer thickness and Au layer thickness on the morphology and alignment of the second-layer dots are discussed. (paper)

  16. Grid and Entrepreneurship Workshop

    CERN Multimedia

    2006-01-01

    The CERN openlab is organising a special workshop about Grid opportunities for entrepreneurship. This one-day event will provide an overview of what is involved in spin-off technology, with a special reference to the context of computing and data Grids. Lectures by experienced entrepreneurs will introduce the key concepts of entrepreneurship and review, in particular, the industrial potential of EGEE (the EU co-funded Enabling Grids for E-sciencE project, led by CERN). Case studies will be given by CEOs of European start-ups already active in the Grid and computing cluster area, and regional experts will provide an overview of efforts in several European regions to stimulate entrepreneurship. This workshop is designed to encourage students and researchers involved or interested in Grid technology to consider the entrepreneurial opportunities that this technology may create in the coming years. This workshop is organized as part of the CERN openlab student programme, which is co-sponsored by CERN, HP, ...

  17. For smart electric grids

    International Nuclear Information System (INIS)

    Tran Thiet, Jean-Paul; Leger, Sebastien; Bressand, Florian; Perez, Yannick; Bacha, Seddik; Laurent, Daniel; Perrin, Marion

    2012-01-01

    The authors identify and discuss the main challenges faced by the French electric grid: the management of electricity demand and the needed improvement of energy efficiency, the evolution of consumers' state of mind, and the integration of new production capacities. They notably point out that France had until recently been living with an abundance of electricity, but now faces the highest consumption peaks in Europe and is therefore facing higher risks of power cuts. They also note that the French energy mix is slowly evolving, and outline the problems raised by the fact that the renewable energies to be developed are decentralised and intermittent. They propose an overview of present developments of smart grids, outline their innovative characteristics and the challenges raised by their development, and compare international examples. They show that smart grids enable a better-adapted supply and decentralisation. A set of proposals is formulated about how to finance and organise the reconfiguration of electric grids, how to increase consumers' responsibility for peak management and demand management, how to create the conditions for the emergence of a European market for smart grids, and how to support self-consumption and the building-up of an energy storage sector.

  18. Proteolytic processing of the cilium adhesin MHJ_0194 (P123J) in Mycoplasma hyopneumoniae generates a functionally diverse array of cleavage fragments that bind multiple host molecules.

    Science.gov (United States)

    Raymond, Benjamin B A; Jenkins, Cheryl; Seymour, Lisa M; Tacchi, Jessica L; Widjaja, Michael; Jarocki, Veronica M; Deutscher, Ania T; Turnbull, Lynne; Whitchurch, Cynthia B; Padula, Matthew P; Djordjevic, Steven P

    2015-03-01

    Mycoplasma hyopneumoniae, the aetiological agent of porcine enzootic pneumonia, regulates the presentation of proteins on its cell surface via endoproteolysis, including those of the cilial adhesin P123 (MHJ_0194). These proteolytic cleavage events create functional adhesins that bind to proteoglycans and glycoproteins on the surface of ciliated and non-ciliated epithelial cells and to the circulatory host molecule plasminogen. Two dominant cleavage events of the P123 preprotein have been previously characterized; however, immunoblotting studies suggest that more complex processing events occur. These extensive processing events are characterized here. The functional significance of the P97 cleavage fragments is also poorly understood. Affinity chromatography using heparin, fibronectin and plasminogen as bait, together with peptide arrays, was used to expand our knowledge of the adhesive capabilities of P123 cleavage fragments and to characterize a novel binding motif in the C-terminus of P123. Further, we use immunohistochemistry to examine in vivo the biological significance of interactions between M. hyopneumoniae and fibronectin, and show that M. hyopneumoniae induces fibronectin deposition at the site of infection on the ciliated epithelium. Our data support the hypothesis that M. hyopneumoniae possesses the molecular machinery to influence key molecular communication pathways in host cells. © 2014 John Wiley & Sons Ltd.

  19. Image processing with cellular nonlinear networks implemented on field-programmable gate arrays for real-time applications in nuclear fusion

    International Nuclear Information System (INIS)

    Palazzo, S.; Vagliasindi, G.; Arena, P.; Murari, A.; Mazon, D.; De Maack, A.

    2010-01-01

    In the past years, cameras have become increasingly common tools in scientific applications. They are now used quite systematically in magnetic confinement fusion, to the point that infrared imaging is starting to be used systematically for real-time machine protection in major devices. However, in order to guarantee that the control system can always react rapidly in critical situations, the time required for the processing of the images must be as predictable as possible. The approach described in this paper combines the new computational paradigm of cellular nonlinear networks (CNNs) with field-programmable gate arrays and has been tested in an application for the detection of hot spots on the plasma-facing components in JET. The developed system is able to perform real-time hot spot recognition, by processing the image stream captured by JET's wide-angle infrared camera, with the guarantee that the computational time is constant and deterministic. The statistical results obtained from a quite extensive set of examples show that this solution approximates very well an ad hoc serial software algorithm, with no false or missed alarms and an almost perfect overlapping of alarm intervals. The computational time can be reduced to a millisecond time scale for 8-bit images of size 496x560. Moreover, in our implementation, the computational time, besides being deterministic, is practically independent of the number of iterations performed by the CNN - unlike software CNN implementations.

  20. Investigation of thick grid plasma switches for thermionic system output voltage

    International Nuclear Information System (INIS)

    Alekseev, N.I.; Kaplan, V.B.; Martsinovski, A.M.

    1992-01-01

    Plasma switches (Cs and Cs-Ba tacitron PS) with a thick grid have a grid whose thickness is greater than the mesh aperture size. Such grids have some advantages compared with small-scale ones [1,2], for instance a much higher electrical strength. This paper contains the results of the thick-grid investigation: the grid control efficiency, the plasma parameters, probe studies of these parameters in the conductive state, and their variation during the quenching process. The results showed that the thick-grid PS plasma differs significantly from that of the thin-grid PS, both in the stationary state and in its quenching dynamics.

  1. Future electrical distribution grids: Smart Grids

    International Nuclear Information System (INIS)

    Hadjsaid, N.; Sabonnadiere, J.C.; Angelier, J.P.

    2010-01-01

    The new energy paradigm faced by distribution networks represents a real scientific challenge. National and EU objectives in terms of environment and energy efficiency, with the resulting regulatory incentives for renewable energies, the deployment of smart meters and the need to respond to changing needs, including new uses related to electric and plug-in hybrid vehicles, introduce more complexity and favour the evolution towards a smarter grid. The economic interest group IDEA in Grenoble, in connection with the power laboratory G2ELab at the Grenoble Institute of Technology, EDF and Schneider Electric, has been conducting research on the electrical distribution grid of the future, in the presence of distributed generation, for ten years. Several innovations have emerged in terms of flexibility and intelligence of the distribution network: intelligent solutions for voltage control, tools for network optimization, self-healing techniques, innovative strategies for connecting distributed and intermittent generation, and load control possibilities for the distributor. All these innovations fall firmly within the context of the intelligent networks of tomorrow, the 'Smart Grids'. (authors)

  2. Microneedle array electrode for human EEG recording.

    NARCIS (Netherlands)

    Lüttge, Regina; van Nieuwkasteele-Bystrova, Svetlana Nikolajevna; van Putten, Michel Johannes Antonius Maria; Vander Sloten, Jos; Verdonck, Pascal; Nyssen, Marc; Haueisen, Jens

    2009-01-01

    Microneedle array electrodes for EEG significantly reduce the mounting time, particularly by circumvention of the need for skin preparation by scrubbing. We designed a new replication process for numerous types of microneedle arrays. Here, polymer microneedle array electrodes with 64 microneedles,

  3. Data Compression of Hydrocarbon Reservoir Simulation Grids

    KAUST Repository

    Chavez, Gustavo Ivan

    2015-05-28

    A dense volumetric grid coming from an oil/gas reservoir simulation output is translated into a compact representation that supports desired features such as interactive visualization, geometric continuity, color mapping and quad representation. A set of four control curves per layer results from processing the grid data, and a complete set of these 3-dimensional surfaces represents the complete volume data and can map reservoir properties of interest to analysts. The processing yields a representation of reservoir simulation results which has reduced data storage requirements and permits quick interaction between reservoir analysts and the simulation data. The degree of reservoir grid compression can be selected according to the quality required, by adjusting different thresholds, such as the approximation error and the level of detail. The processing results are of potential benefit in applications such as interactive rendering, data compression, and in-situ visualization of large-scale oil/gas reservoir simulations.
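
    The abstract states that the degree of compression is chosen by adjusting thresholds such as the approximation error. The sketch below illustrates that trade-off in the simplest possible way for a single control curve: it keeps the largest decimation stride whose linear-interpolation reconstruction error stays below a user-set threshold. It is a generic illustration of error-bounded curve simplification, not the paper's actual curve-extraction method, and the sample curve is invented.

```python
import numpy as np

def decimate_curve(x, y, max_err):
    """Keep every k-th sample of a curve, using the largest stride k whose
    linear-interpolation reconstruction error stays below max_err."""
    for k in range(len(x) - 1, 1, -1):                     # try coarse strides first
        idx = np.unique(np.r_[np.arange(0, len(x), k), len(x) - 1])
        y_rec = np.interp(x, x[idx], y[idx])               # reconstruct from kept points
        if np.max(np.abs(y_rec - y)) <= max_err:
            return x[idx], y[idx]
    return x, y                                            # no compression possible

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * x                        # a smooth "control curve"
xs, ys = decimate_curve(x, y, max_err=0.02)
print(f"kept {len(xs)} of {len(x)} points")
```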

  4. Seismicity And Accretion Processes Along The Mid-Atlantic Ridge south of the Azores using data from the MARCHE Autonomous Hydrophone Array

    Science.gov (United States)

    Perrot, Julie; Cevatoglu, Melis; Cannat, Mathilde; Escartin, Javier; Maia, Marcia; Tisseau, Chantal; Dziak, Robert; Goslin, Jean

    2013-04-01

    The seismicity of the Atlantic Ocean south of the Azores has been recorded by the MARCHE network of 4 autonomous underwater hydrophones (AUH) moored within the SOFAR channel on the flanks of the Mid-Atlantic Ridge (MAR). The instruments were deployed south of the Azores Plateau between 32° and 39°N from July 2005 to August 2008. The low attenuation properties of the SOFAR channel for earthquake T-wave propagation result in a reduction of the detection threshold from a magnitude completeness level (Mc) of ~4.3 for MAR events recorded by the land-based seismic networks to Mc=2.1 using this hydrophone array. A spatio-temporal analysis has been performed on the 5600 events recorded inside the MARCHE array. Most events are distributed along the ridge between lat. 39°N on the Azores Platform and the Rainbow segment (36°N). In the hydrophone catalogue, acoustic magnitude (Source Level, SL) is used as a measure of earthquake size. The source level above which the data set is complete is SLc=205 dB. We look for seismic swarms using the cluster software of the SEISAN package. The criteria used are a minimum SL of 210 dB to detect a possible mainshock, and a radius of 30 km and a time window of 40 days after this mainshock (Cevatoglu, 2010; Goslin et al., 2012). 7 swarms with more than 15 events are identified using this approach between 32° and 39°N of latitude. The maximum number of earthquakes in a swarm is 57 events. This result differs from the study of Simao et al. (2010), as we processed a further year of data and selected sequences with fewer events. Looking at the distribution of the SL as a function of time after the mainshock, we discuss the possible mechanism of these earthquakes: tectonic events with a "mainshock-aftershock" distribution fitting a modified Omori law, or volcanic events showing more constant SL values. We also present the geophysical setting of these 7 swarms, using gravity, bathymetry, and available local geological data. This study illustrates the potential of
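
    The swarm-selection criterion quoted above (a candidate mainshock with SL of at least 210 dB, then all events within 30 km and the following 40 days, kept only if they number more than 15) is simple enough to restate in code. The sketch below is an illustrative re-implementation of that criterion on a toy event catalogue, not the SEISAN cluster program used by the authors; the class and function names are invented.

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    time_days: float   # days since the start of the catalogue
    lat: float
    lon: float
    sl: float          # acoustic source level [dB]

def dist_km(a, b):
    """Great-circle distance between two events (haversine formula)."""
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp, dl = p2 - p1, math.radians(b.lon - a.lon)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def find_swarms(events, sl_main=210.0, radius_km=30.0, window_days=40.0, min_events=15):
    """Group events around each candidate mainshock according to the stated criterion."""
    swarms = []
    for main in events:
        if main.sl < sl_main:
            continue
        members = [e for e in events
                   if 0.0 <= e.time_days - main.time_days <= window_days
                   and dist_km(main, e) <= radius_km]
        if len(members) > min_events:
            swarms.append((main, members))
    return swarms
```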

  5. Experiences of a grid connected solar array energy production

    Science.gov (United States)

    Hagymássy, Zoltán; Vántus, András

    2015-04-01

    The solar energy potential of Hungary is higher than that of Central Europe in general. The Institute for Land Utilisation, Technology and Regional Development of the University of Debrecen installed a photovoltaic (PV) system. The PV system is structured into 3 subsystems (fields). The first subsystem has 24 Kyocera KC 120 W modules, the second has 72 Siemens ST 40 W modules, and the remaining one has 72 Dunasolar DS 40 W modules. In order for the subsystems to be operable independently of each other, three inverter modules (SB 2500) have been installed. The recorder can be connected directly to a desktop PC, and operating and meteorological data are recorded in MS Excel every 15 minutes. The power plant is connected to a weather station, which contains a combined PT 100 temperature and humidity instrument, a CM 11 pyranometer, and a wind speed instrument. The produced DC and AC power, together with the produced energy, are recorded as well, so the efficiency can be determined for each PV technology used. The measured operating and meteorological data are collected by Sunny Boy Control, produced by SMA. The energy production of the subsystems is measured continually, and each subsystem is measured separately. As expected, the energy produced by the polycrystalline-Si and monocrystalline-Si PV modules was higher than that of the amorphous-Si PV modules. It is well known that energy analysis is the more suitable basis for the energy balance when designing a system. The air temperature, the temperature of the panels and the global irradiation conditions were measured. On a sunny summer day the panel temperature reaches 60-80 degrees, while on a sunny spring day panel temperatures are approximately 30-40 degrees. It can be concluded that global irradiation is the major factor influencing the amount of energy produced. The efficiency depends on several parameters (spectral distribution of the incoming light, temperature values, etc.). The energy efficiency of a PV system in general can be defined as the ratio of the output energy of the system to the input energy received on the photovoltaic surface. As expected, the energy efficiencies of the polycrystalline-Si and monocrystalline-Si PV modules were higher than that of the amorphous-Si PV modules. Based on our study, it can be concluded that in general the energy efficiency is lower than the theoretical value.
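
    The efficiency definition given above (output energy divided by the solar energy received on the panel surface) is easy to state as a formula. The sketch below applies it to invented sample numbers, not to measurements from the Debrecen installation; the panel area, irradiation and yield values are placeholders.

```python
def pv_system_efficiency(ac_energy_kwh, irradiation_kwh_per_m2, panel_area_m2):
    """Ratio of delivered energy to the solar energy received on the PV surface."""
    return ac_energy_kwh / (irradiation_kwh_per_m2 * panel_area_m2)

eta = pv_system_efficiency(ac_energy_kwh=11.5,          # energy fed to the grid in a day
                           irradiation_kwh_per_m2=5.2,  # daily global irradiation on the plane
                           panel_area_m2=22.0)
print(f"system efficiency: {eta:.1%}")   # ~10% for these invented inputs
```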

  6. Failure probability analysis of optical grid

    Science.gov (United States)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. With optical-network-based distributed computing systems extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based analysis method for the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the effectiveness of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied according to their respective application failure probabilities. In an optical grid, when a DAG-based (directed acyclic graph) application is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement, improve network resource utilization, and realize a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
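
    The task-based failure analysis sketched in the abstract can be illustrated with a minimal model, assuming independent task failures: the application fails if any task fails, and replicating a task on k resources (a simple backup strategy) fails only if all k replicas fail. The function names and probabilities below are invented for the example; this is not the MDSA scheduling algorithm itself.

```python
from functools import reduce

def task_failure(p_single, replicas=1):
    """Failure probability of one task run on `replicas` independent resources."""
    return p_single ** replicas

def application_failure(task_probs, replicas=1):
    """Failure probability of the whole application: 1 - P(every task succeeds)."""
    p_success = reduce(lambda acc, p: acc * (1.0 - task_failure(p, replicas)),
                       task_probs, 1.0)
    return 1.0 - p_success

tasks = [0.02, 0.05, 0.01, 0.03]   # per-task failure probabilities (invented)
print(f"no backup : {application_failure(tasks, replicas=1):.4f}")
print(f"1+1 backup: {application_failure(tasks, replicas=2):.4f}")
```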

  7. Standard Procedure for Grid Interaction Analysis

    International Nuclear Information System (INIS)

    Svensson, Bertil; Lindahl, Sture; Karlsson, Daniel; Joensson, Jonas; Heyman, Fredrik

    2015-01-01

    Grid events, simultaneously affecting all safety-related auxiliary systems in a nuclear power plant, are critical and must be carefully addressed in the design, upgrading and operational processes. Up to now, the connecting grid has often been treated as either fully available or totally unavailable, and too little attention has been paid to specifying grid performance criteria. This paper deals with standard procedures for grid interaction analysis, to derive tools and criteria to handle grid events challenging the safety systems of the plant. Critical external power system events are investigated and characterised with respect to severity and rate of occurrence. These critical events are then grouped with respect to their impact on the safety systems when a disturbance propagates into the plant. It is then important to make sure that 1) the impact of the disturbance will never reach any critical system, 2) the impact of the disturbance will be eliminated before it hurts any critical system, or 3) the critical systems are proven to be designed in such a way that they can withstand the impact of the disturbance, and the associated control and protection systems can withstand the voltage and frequency transients associated with the disturbances. A number of representative disturbance profiles, reflecting connecting grid conditions, are therefore derived, to be used for equipment testing. (authors)

  8. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high-speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data-dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.
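
    The synchronization behaviour described in the patent abstract (processors running their own program copies independently, then all finishing a task before a data-dependent step) can be mimicked in software with a barrier. The sketch below is only a scheduling illustration using Python threads; it does not model the Omega connection network or the hardware itself, and all names are invented.

```python
import threading

N = 4
barrier = threading.Barrier(N)
partial = [0] * N

def worker(rank, data):
    # Independent phase: each worker runs its own copy of the "program" at its own pace.
    partial[rank] = sum(data)
    # Array-wide synchronization point before the data-dependent phase.
    barrier.wait()
    # Data-dependent phase: now it is safe to read every worker's result.
    if rank == 0:
        print("global sum:", sum(partial))

chunks = [list(range(r * 10, (r + 1) * 10)) for r in range(N)]
threads = [threading.Thread(target=worker, args=(r, chunks[r])) for r in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```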

  9. Grids, Clouds and Virtualization

    CERN Document Server

    Cafaro, Massimo

    2011-01-01

    Research into grid computing has been driven by the need to solve large-scale, increasingly complex problems for scientific applications. Yet the applications of grid computing for business and casual users did not begin to emerge until the development of the concept of cloud computing, fueled by advances in virtualization techniques, coupled with the increased availability of ever-greater Internet bandwidth. The appeal of this new paradigm is mainly based on its simplicity, and the affordable price for seamless access to both computational and storage resources. This timely text/reference int

  10. Smart Grid Architectures

    DEFF Research Database (Denmark)

    Dondossola, Giovanna; Terruggia, Roberta; Bessler, Sandford

    2014-01-01

    The scope of this paper is to address the evolution of distribution grid architectures following the widespread introduction of renewable energy sources. The increasing connection of distributed resources has a strong impact on the topology and the control functionality of the current distribution grids, requiring the development of new Information and Communication Technology (ICT) solutions with various degrees of adaptation of the monitoring, communication and control technologies. The costs of ICT-based solutions need however to be taken into account, hence it is desirable to work

  11. Instant jqGrid

    CERN Document Server

    Manricks, Gabriel

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A step-by-step, practical Starter book, Instant jqGrid embraces you while you take your first steps, and introduces you to the content in an easy-to-follow order.This book is aimed at people who have some knowledge of HTML and JavaScript. Knowledge of PHP and SQL would also prove to be beneficial. No prior knowledge of jqGrid is expected.

  12. Smart Grid, Smart Europe

    OpenAIRE

    VITIELLO SILVIA; FULLI Gianluca; MENGOLINI Anna Maria

    2013-01-01

    Smart grids, or intelligent electricity networks, open the way to new applications with far-reaching consequences for the entire electricity system, chief among them the ability to integrate more renewable energy sources (RES), electric vehicles and distributed generation sources into the existing network. Smart grids also guarantee a more efficient and reliable response to energy demand, both from a technical point of view, allowing monitoring and control...

  13. Distributed photovoltaic grid transformers

    CERN Document Server

    Shertukde, Hemchandra Madhusudan

    2014-01-01

    The demand for alternative energy sources fuels the need for electric power and controls engineers to possess a practical understanding of transformers suitable for solar energy. Meeting that need, Distributed Photovoltaic Grid Transformers begins by explaining the basic theory behind transformers in the solar power arena, and then progresses to describe the development, manufacture, and sale of distributed photovoltaic (PV) grid transformers, which help boost the electric DC voltage (generally at 30 volts) harnessed by a PV panel to a higher level (generally at 115 volts or higher) once it is

  14. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
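
    The metric optimised in the paper - the expected number of chunks a range query touches - can be evaluated for a given chunk shape under a simple assumption: for a query of edge length q along a dimension with chunk edge c and a uniformly random aligned offset, the expected number of chunks overlapped along that dimension is 1 + (q - 1)/c, and the dimensions multiply. The sketch below only evaluates candidate chunk shapes under that assumption; it does not reproduce the paper's steepest-descent or geometric-programming solutions, and the array and query sizes are invented.

```python
import math

def expected_chunks(query_shape, chunk_shape):
    """Expected number of chunks overlapped by a query with uniformly random offset."""
    e = 1.0
    for q, c in zip(query_shape, chunk_shape):
        e *= 1.0 + (q - 1) / c
    return e

def total_chunks(array_shape, chunk_shape):
    """Number of chunks needed to tile the whole array."""
    return math.prod(math.ceil(n / c) for n, c in zip(array_shape, chunk_shape))

array_shape = (10_000, 10_000)
query = (300, 40)                                 # typical query footprint (invented)
for chunk in [(100, 100), (200, 50), (50, 200)]:  # same chunk volume, different shapes
    print(chunk,
          f"expected chunks/query: {expected_chunks(query, chunk):6.2f}",
          f"total chunks: {total_chunks(array_shape, chunk)}")
```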

  15. Efficient Pseudorecursive Evaluation Schemes for Non-adaptive Sparse Grids

    KAUST Repository

    Buse, Gerrit

    2014-01-01

    In this work we propose novel algorithms for storing and evaluating sparse grid functions, operating on regular (not spatially adaptive), yet potentially dimensionally adaptive grid types. Besides regular sparse grids our approach includes truncated grids, both with and without boundary grid points. Similar to the implicit data structures proposed in Feuersänger (Dünngitterverfahren für hochdimensionale elliptische partielle Differentialgleichungen. Diploma Thesis, Institut für Numerische Simulation, Universität Bonn, 2005) and Murarasu et al. (Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming. Cambridge University Press, New York, 2011, pp. 25–34) we also define a bijective mapping from the multi-dimensional space of grid points to a contiguous index, such that the grid data can be stored in a simple array without overhead. Our approach is especially well-suited to exploit all levels of current commodity hardware, including cache-levels and vector extensions. Furthermore, this kind of data structure is extremely attractive for today’s real-time applications, as it gives direct access to the hierarchical structure of the grids, while outperforming other common sparse grid structures (hash maps, etc.) which do not match with modern compute platforms that well. For dimensionality d ≤ 10 we achieve good speedups on a 12 core Intel Westmere-EP NUMA platform compared to the results presented in Murarasu et al. (Proceedings of the International Conference on Computational Science—ICCS 2012. Procedia Computer Science, 2012). As we show, this also holds for the results obtained on Nvidia Fermi GPUs, for which we observe speedups over our own CPU implementation of up to 4.5 when dealing with moderate dimensionality. In high-dimensional settings, in the order of tens to hundreds of dimensions, our sparse grid evaluation kernels on the CPU outperform any other known implementation.
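
    The bijective point-to-index mapping described above is specific to (truncated) sparse grids and is not reproduced here. As a simpler analogue of the same idea - assigning every multi-dimensional grid point a unique contiguous index so the data can sit in a flat array with no hash map or pointer overhead - the sketch below shows the standard row-major mapping for a full regular grid; the function names are chosen for the example.

```python
import numpy as np

def flat_index(point, shape):
    """Row-major linearisation of a d-dimensional index (bijective onto 0..N-1)."""
    idx = 0
    for p, n in zip(point, shape):
        idx = idx * n + p
    return idx

def multi_index(idx, shape):
    """Inverse mapping: recover the d-dimensional index from the flat index."""
    point = []
    for n in reversed(shape):
        point.append(idx % n)
        idx //= n
    return tuple(reversed(point))

shape = (4, 5, 3)
values = np.empty(int(np.prod(shape)))                 # contiguous storage, no overhead
assert multi_index(flat_index((2, 4, 1), shape), shape) == (2, 4, 1)
print(flat_index((2, 4, 1), shape), "of", values.size)
```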

  16. Grid and Data Analyzing and Security

    Directory of Open Access Journals (Sweden)

    Fatemeh SHOKRI

    2012-12-01

    This paper examines the importance of secure structures in the process of analyzing and distributing information with the aid of Grid-based technologies. The advent of distributed networks has provided many practical opportunities for detecting and recording the time of events, and has prompted efforts to identify events and to solve the problems of storing information, such as keeping it up-to-date and documented. In this regard, the data distribution systems in a network environment should be accurate, and a series of continuous and updated data must be at hand. In this case, Grid is the best answer for using the data and resources of organizations through common processing.

  17. Semantic service integration for smart grids

    CERN Document Server

    Rohjans, S

    2012-01-01

    The scope of the research presented includes the semantic-based integration of data services in smart grids, achieved by following the proposed (S²)In-approach, developed in accordance with design science guidelines. This approach identifies standards and specifications, which are integrated in order to build the basis for the (S²)In-architecture. A process model is introduced at the beginning, which serves as a framework for developing the target architecture. The first step of the process stipulates defining requirements for smart grid ICT-architectures, derived from established studies and

  18. Secure Protocol and IP Core for Configuration of Networking Hardware IPs in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Marcelo Urbina

    2018-02-01

    Nowadays, the incorporation and constant evolution of communication networks in the electricity sector have given rise to the so-called Smart Grid, which is why it is necessary to have devices that are capable of managing new communication protocols while guaranteeing the strict processing requirements of the electricity sector. In this context, intelligent electronic devices (IEDs) with network architectures are currently available to meet the communication, real-time processing and interoperability requirements of the Smart Grid. The new generation of IEDs includes a Field Programmable Gate Array (FPGA) to support specialized networking switching architectures for the electric sector, such as the IEEE 1588-aware High-availability Seamless Redundancy/Parallel Redundancy Protocol (HSR/PRP). Another advantage of using an FPGA is the ability to update or reconfigure the design to support new requirements raised in the standards (IEC 61850). The update of the architecture implemented in the FPGA can be done remotely, but it is necessary to establish a cyber security mechanism, since the communication link creates a vulnerability in case an attacker gains physical access to the network. The research presented in this paper proposes a secure protocol and Intellectual Property (IP) core for configuring and monitoring the networking IPs implemented in an FPGA. The proposed FPGA-based implementation overcomes this issue using a light Layer-2 protocol fully implemented in hardware and protected by strong cryptographic algorithms (AES-GCM, defined in the IEC 61850-90-5 standard). The proposed secure protocol and IP core are applicable in any field where remote configuration over Ethernet is required for IP cores in FPGAs. In this paper, the proposal is validated on communications hardware for Smart Grids.
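
    As a software illustration of the protection scheme named in the abstract (AES-GCM providing both confidentiality and integrity for configuration frames), the sketch below encrypts a configuration payload and authenticates a cleartext header as associated data, using the Python cryptography library. The frame layout, field contents and key handling are invented for the example; the paper's protocol runs in FPGA hardware and is not reproduced here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)     # pre-shared key; provisioning not shown
aesgcm = AESGCM(key)

header = b"\x01\x00CONF"                      # hypothetical Layer-2 header, sent in clear
payload = b"SET hsr.mode=H; prp.lan_a=eth0"   # hypothetical configuration command

nonce = os.urandom(12)                        # 96-bit nonce, must never repeat per key
ciphertext = aesgcm.encrypt(nonce, payload, header)   # header is authenticated as AAD
frame = header + nonce + ciphertext

# Receiver side: any tampering with header, nonce or ciphertext raises InvalidTag.
rx_header, rx_nonce, rx_ct = frame[:6], frame[6:18], frame[18:]
print(AESGCM(key).decrypt(rx_nonce, rx_ct, rx_header))
```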

  19. Grid computing the European Data Grid Project

    CERN Document Server

    Segal, B; Gagliardi, F; Carminati, F

    2000-01-01

    The goal of this project is the development of a novel environment to support globally distributed scientific exploration involving multi- PetaByte datasets. The project will devise and develop middleware solutions and testbeds capable of scaling to handle many PetaBytes of distributed data, tens of thousands of resources (processors, disks, etc.), and thousands of simultaneous users. The scale of the problem and the distribution of the resources and user community preclude straightforward replication of the data at different sites, while the aim of providing a general purpose application environment precludes distributing the data using static policies. We will construct this environment by combining and extending newly emerging "Grid" technologies to manage large distributed datasets in addition to computational elements. A consequence of this project will be the emergence of fundamental new modes of scientific exploration, as access to fundamental scientific data is no longer constrained to the producer of...

  20. Enabling Campus Grids with Open Science Grid Technology

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Pordes, Ruth; Bockelman, Brian; Swanson, David

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  1. Enabling campus grids with open science grid technology

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, Derek [Nebraska U.; Bockelman, Brian [Nebraska U.; Swanson, David [Nebraska U.; Fraser, Dan [Argonne; Pordes, Ruth [Fermilab

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  2. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    Science.gov (United States)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the used grid data, which partly originates from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that the conventional grid expansion is more efficient and implies more grid relieving effects than the evaluated grid optimisation measures.

  3. Condensin suppresses recombination and regulates double-strand break processing at the repetitive ribosomal DNA array to ensure proper chromosome segregation during meiosis in budding yeast

    Science.gov (United States)

    Li, Ping; Jin, Hui; Yu, Hong-Guo

    2014-01-01

    During meiosis, homologues are linked by crossover, which is required for bipolar chromosome orientation before chromosome segregation at anaphase I. The repetitive ribosomal DNA (rDNA) array, however, undergoes little or no meiotic recombination. Hyperrecombination can cause chromosome missegregation and rDNA copy number instability. We report here that condensin, a conserved protein complex required for chromosome organization, regulates double-strand break (DSB) formation and repair at the rDNA gene cluster during meiosis in budding yeast. Condensin is highly enriched at the rDNA region during prophase I, released at the prophase I/metaphase I transition, and reassociates with rDNA before anaphase I onset. We show that condensin plays a dual role in maintaining rDNA stability: it suppresses the formation of Spo11-mediated rDNA breaks, and it promotes DSB processing to ensure proper chromosome segregation. Condensin is unnecessary for the export of rDNA breaks outside the nucleolus but required for timely repair of meiotic DSBs. Our work reveals that condensin coordinates meiotic recombination with chromosome segregation at the repetitive rDNA sequence, thereby maintaining genome integrity. PMID:25103240

  4. Evaluation of Drying Process on the Composition of Black Pepper Ethanolic Extract by High Performance Liquid Chromatography With Diode Array Detector

    Science.gov (United States)

    Namjoyan, Foroogh; Hejazi, Hoda; Ramezani, Zahra

    2012-01-01

    Background Black pepper (Piper nigrum) is one of the well-known spices extensively used worldwide, especially in India and Southeast Asia. The presence of alkaloids in pepper, namely piperine and its three stereoisomers (isopiperine, chavicine and isochavicine), is well documented. Objectives The current study evaluated the effect of lyophilization and oven drying on the stability and decomposition of constituents of black pepper ethanolic extract. Materials and Methods Ethanolic extract of black pepper obtained by maceration was dried using two methods. The effect of freeze drying and oven drying on the chemical composition of the extract, especially piperine and its three isomers, was evaluated by HPLC analysis of the ethanolic extract before and after each drying process, using a diode array detector. The UV-Vis spectra of the peaks at the piperine retention time before and after each drying method showed maximum absorbance at 341.2 nm, corresponding to standard piperine. Results The chromatogram peak intensities decreased at nearly all retention times after freeze drying, indicating a loss of a few percent of piperine and its isomers upon lyophilization. Two impurity peaks were completely removed from the extract. Conclusions In oven-dried samples, two of the piperine stereoisomers were completely removed from the extract and the intensity of the piperine peak increased. PMID:24624176

  5. Utah Bouguer Gravity Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 2.5 kilometer Bouguer anomaly grid for the state of Utah. Number of columns is 196 and number of rows is 245. The order of the data is from the lower left to the...
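
    The record gives the grid shape (196 columns by 245 rows) and states that the values run from the lower left. Assuming the grid is distributed as a plain list of values in that order (the file format itself is not described in the record), a minimal loading sketch looks like this:

        # Minimal sketch: load a 196 x 245 anomaly grid stored row by row from the
        # lower-left corner. The file name and plain-text format are assumptions;
        # only the shape and ordering come from the record.
        import numpy as np

        N_COLS, N_ROWS = 196, 245
        values = np.loadtxt("utah_bouguer.txt").ravel()   # hypothetical file of grid values
        grid = values.reshape(N_ROWS, N_COLS)             # first file row = southernmost row
        grid = np.flipud(grid)                            # row 0 now the northern edge
        print(grid.shape)                                 # (245, 196)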

  6. Modelling Chinese Smart Grid

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    In this document, we consider a specific Chinese Smart Grid implementation and try to address the verification problem for certain quantitative properties, including performance and battery consumption. We employ a stochastic model checking approach and present our modelling and analysis study using

  7. Multi-Grid Lanczos

    Science.gov (United States)

    Clark, M. A.; Jung, Chulwoo; Lehner, Christoph

    2018-03-01

    We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD's 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.

  8. Multi-Grid Lanczos

    Directory of Open Access Journals (Sweden)

    Clark M. A.

    2018-01-01

    Full Text Available We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD’s 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.
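
    The two records above describe a multi-grid variant of the Lanczos eigensolver; the abstract does not spell out the multi-grid compression, so the sketch below only restates the baseline single-grid Lanczos iteration in NumPy (no re-orthogonalisation), as a reminder of the algorithm whose memory footprint the paper reduces. The toy matrix and iteration count are illustrative.

        # Baseline (single-grid) Lanczos iteration; the multi-grid eigenvector
        # compression described in the records is not reproduced here.
        import numpy as np

        def lanczos(A, k, rng=np.random.default_rng(0)):
            """Build the k x k tridiagonal T whose eigenvalues (Ritz values)
            approximate the extreme eigenvalues of the symmetric matrix A."""
            n = A.shape[0]
            q = rng.standard_normal(n)
            q /= np.linalg.norm(q)
            q_prev, beta = np.zeros(n), 0.0
            alphas, betas = [], []
            for _ in range(k):
                w = A @ q - beta * q_prev
                alpha = q @ w
                w -= alpha * q
                beta = np.linalg.norm(w)
                alphas.append(alpha)
                betas.append(beta)
                q_prev, q = q, w / beta
            return np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)

        A = np.diag(np.arange(1.0, 201.0))      # toy symmetric operator
        T = lanczos(A, k=40)
        print(np.linalg.eigvalsh(T)[-3:])       # top Ritz values approximate the largest eigenvalues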

  9. Nevada Isostatic Gravity Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 2 kilometer Isostatic anomaly grid for the state of Nevada. Number of columns is 269 and number of rows is 394. The order of the data is from the lower left to the...

  10. Steering the Smart Grid

    NARCIS (Netherlands)

    Molderink, Albert; Bakker, Vincent; Bosman, M.G.C.; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    Increasing energy prices and the greenhouse effect have led to greater awareness of the energy efficiency of electricity supply. In recent years, many technologies and optimization methodologies have been developed to increase efficiency, maintain grid stability and support large-scale

  11. Cutback for grid operators

    International Nuclear Information System (INIS)

    Meulmeester, P.; De Laat, J.

    2006-01-01

    The Netherlands Competition Authority (NMa), which includes the Office of Energy Regulation (DTe), plans to decrease the capital cost compensation (the weighted average cost of capital, or WACC) for grid operators. This article explains how the compensation is calculated, why this measure will be taken and what the effects of the cutback are.

  12. Autonomous Energy Grids: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Kroposki, Benjamin D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bernstein, Andrey [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-04

    With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention just a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid, as well as partake in the optimal operation of a larger grid when interconnected. This paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.

  13. Bolivian Bouguer Anomaly Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 1 kilometer Bouguer anomaly grid for the country of Bolivia. Number of columns is 550 and number of rows is 900. The order of the data is from the lower left to the...

  14. Smart grid voor comfort

    NARCIS (Netherlands)

    Zeiler, W.; Vissers, D.R.; Maaijen, H.N.; Kling, W.L.; Velden, van der J.A.J.; Larsen, J.P.

    2012-01-01

    Research is being carried out into a new control strategy based on the application of a wireless sensor network coupled to the smart grid. The goal of this control strategy is to save energy at the user level while maintaining or even improving individual comfort.

  15. Kids Enjoy Grids

    CERN Multimedia

    2007-01-01

    I want to come back and work here when I'm older,' was the spontaneous reaction of one of the children invited to CERN by the Enabling Grids for E-sciencE project for a 'Grids for Kids' day at the end of January. The EGEE project is led by CERN, and the EGEE gender action team organized the day to introduce children to grid technology at an early age. The school group included both boys and girls, aged 9 to 11. All of the presenters were women. 'In general, before this visit, the children thought that scientists always wore white coats and were usually male, with wild Einstein-like hair,' said Jackie Beaver, the class's teacher at the Institut International de Lancy, a school near Geneva. 'They were surprised and pleased to see that women became scientists, and that scientists were quite 'normal'.' The half-day event included presentations about why Grids are needed, a visit of the computer centre, some online games, and plenty of time for questions. In the end, everyone agreed that it was a big success a...

  16. Reconsidering solar grid parity

    International Nuclear Information System (INIS)

    Yang, C.-J.

    2010-01-01

    Grid parity, reducing the cost of solar energy to be competitive with conventional grid-supplied electricity, has long been hailed as the tipping point for solar dominance in the energy mix. Such expectations are likely to be overly optimistic. A realistic examination of grid parity suggests that the cost-effectiveness of distributed photovoltaic (PV) systems may be further away than many are hoping for. Furthermore, cost-effectiveness may not guarantee commercial competitiveness. Solar hot water technology is currently far more cost-effective than photovoltaic technology and has already reached grid parity in many places. Nevertheless, the market penetration of solar water heaters remains limited for reasons including unfamiliarity with the technologies and high upfront costs. These same barriers will likely hinder the adoption of distributed solar photovoltaic systems as well. The rapid growth in PV deployment in recent years is largely policy-driven, and such rapid growth would not be sustainable unless governments continue to expand financial incentives and policy mandates, as well as address regulatory and market barriers.

  17. Maine Bouguer Gravity Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 2 kilometer Bouguer anomaly grid for the state of Maine. Number of columns is 197 and number of rows is 292. The order of the data is from the lower left to the...

  18. Minnesota Bouguer Anomaly Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 1.5 kilometer Bouguer anomaly grid for the state of Minnesota. Number of columns is 404 and number of rows is 463. The order of the data is from the lower left to...

  19. Molecular Grid Membranes

    National Research Council Canada - National Science Library

    Michl, Josef; Magnera, Thomas

    2008-01-01

    ...) porphyrin triply linked in the meso-meso and both beta-beta positions four times by carbon-carbon bonds to each of its neighbors to form porphite sheets, a grid-type material that would be an analog of graphene...

  20. The Grid challenge

    CERN Multimedia

    Lundquest, E

    2003-01-01

    At a customer panel discussion during OracleWorld in San Francisco, grid computing was being pushed as the next big thing - even if panellists couldn't quite agree on what it is, what it will cost or when it will appear (1 page).

  1. NSTAR Smart Grid Pilot

    Energy Technology Data Exchange (ETDEWEB)

    Rabari, Anil [NSTAR Electric, Manchester, NH (United States); Fadipe, Oloruntomi [NSTAR Electric, Manchester, NH (United States)

    2014-03-31

    NSTAR Electric & Gas Corporation (“the Company”, or “NSTAR”) developed and implemented a Smart Grid pilot program beginning in 2010 to demonstrate the viability of leveraging existing automated meter reading (“AMR”) deployments to provide much of the Smart Grid functionality of advanced metering infrastructure (“AMI”), but without the large capital investment that AMI rollouts typically entail. In particular, a central objective of the Smart Energy Pilot was to enable residential dynamic pricing (time-of-use “TOU” and critical peak rates and rebates) and two-way direct load control (“DLC”) by continually capturing AMR meter data transmissions and communicating through customer-sited broadband connections in conjunction with a standards-based home area network (“HAN”). The pilot was supported by the U.S. Department of Energy (“DOE”) through the Smart Grid Demonstration program. NSTAR was very pleased not only to receive funding support from the DOE, but also to have its guidance and support throughout the pilot. NSTAR is also pleased to report to the DOE that it was able to execute and deliver a successful pilot on time and on budget. NSTAR looks forward to opportunities to work with the DOE and others on future smart grid projects.
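
    As a toy illustration of the dynamic pricing the pilot enabled, the sketch below compares a flat tariff with a time-of-use tariff over one day of hourly AMR readings; the rate values and peak window are hypothetical, not NSTAR's actual tariff.

        # Hypothetical flat vs. time-of-use (TOU) tariff comparison; rates and
        # peak window are illustrative, only the TOU concept comes from the record.
        FLAT_RATE = 0.14                                   # $/kWh, assumed
        TOU_RATES = {"off_peak": 0.08, "on_peak": 0.24}    # $/kWh, assumed
        PEAK_HOURS = range(14, 19)                         # assumed 2 pm - 7 pm window

        def daily_cost(hourly_kwh):
            flat = sum(hourly_kwh) * FLAT_RATE
            tou = sum(kwh * (TOU_RATES["on_peak"] if hour in PEAK_HOURS
                             else TOU_RATES["off_peak"])
                      for hour, kwh in enumerate(hourly_kwh))
            return round(flat, 2), round(tou, 2)

        load = [0.4] * 14 + [1.5] * 5 + [0.6] * 5          # 24 hourly readings in kWh
        print(daily_cost(load))                            # shows how peak usage drives the TOU bill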

  2. Controlling smart grid adaptivity

    OpenAIRE

    Toersche, Hermen; Nykamp, Stefan; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2012-01-01

    Methods are discussed for planning-oriented smart grid control to cope with scenarios with limited predictability, supporting an increasing penetration of stochastic renewable resources. The performance of these methods is evaluated with simulations using measured wind generation and consumption data. Forecast errors are shown to affect worst-case behavior in particular; the severity of this effect depends on the chosen adaptivity strategy and error model.

  3. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  4. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one working with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short-period P waves recorded by a UK-type array of 20 seismometers and to edit these onto a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, the signals with magnitude above mb 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)
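
    The abstract names a linear detector and Weichert's log detector but gives no formulas. As a stand-in, the sketch below runs a generic short-term/long-term average (STA/LTA) trigger over a synthetic trace, only to illustrate the kind of automatic detection pass a SASP performs; window lengths, the threshold and the test signal are assumptions.

        # Generic STA/LTA detection pass (not the paper's linear or log detector);
        # window lengths, threshold and the synthetic trace are assumptions.
        import numpy as np

        def sta_lta_onsets(trace, fs, sta_s=1.0, lta_s=30.0, threshold=4.0):
            """Return sample indices where the STA/LTA power ratio first exceeds threshold."""
            power = trace.astype(float) ** 2
            sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
            sta = np.convolve(power, np.ones(sta_n) / sta_n, mode="same")
            lta = np.convolve(power, np.ones(lta_n) / lta_n, mode="same") + 1e-12
            above = (sta / lta) > threshold
            return np.flatnonzero(above[1:] & ~above[:-1]) + 1

        fs = 20.0                                           # 20 samples/s, a typical short-period rate
        rng = np.random.default_rng(1)
        trace = rng.standard_normal(int(120 * fs))          # two minutes of background noise
        trace[1500:1560] += 6.0 * rng.standard_normal(60)   # injected burst standing in for a P arrival
        print(sta_lta_onsets(trace, fs))                    # trigger indices near sample 1500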

  5. Grid interoperability: the interoperations cookbook

    Energy Technology Data Exchange (ETDEWEB)

    Field, L; Schulz, M [CERN (Switzerland)], E-mail: Laurence.Field@cern.ch, E-mail: Markus.Schulz@cern.ch

    2008-07-01

    Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one grid infrastructure due to the different middleware and procedures used in each grid. Grid interoperation is trying to bridge these differences and enable virtual organizations to access resources independent of the grid project affiliation. This paper gives an overview of grid interoperation and describes the current methods used to bridge the differences between grids. Actual use cases encountered during the last three years are discussed and the most important interfaces required for interoperability are highlighted. A summary of the standardisation efforts in these areas is given and we argue for moving more aggressively towards standards.

  6. Grid interoperability: the interoperations cookbook

    International Nuclear Information System (INIS)

    Field, L; Schulz, M

    2008-01-01

    Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one grid infrastructure due to the different middleware and procedures used in each grid. Grid interoperation is trying to bridge these differences and enable virtual organizations to access resources independent of the grid project affiliation. This paper gives an overview of grid interoperation and describes the current methods used to bridge the differences between grids. Actual use cases encountered during the last three years are discussed and the most important interfaces required for interoperability are highlighted. A summary of the standardisation efforts in these areas is given and we argue for moving more aggressively towards standards

  7. Allegheny County Map Index Grid

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Map Index Sheets from Block and Lot Grid of Property Assessment and based on aerial photography, showing 1983 datum with solid line and NAD 27 with 5 second grid...

  8. The Owens Valley Millimeter Array

    International Nuclear Information System (INIS)

    Padin, S.; Scott, S.L.; Woody, D.P.; Scoville, N.Z.; Seling, T.V.

    1991-01-01

    The telescopes and signal processing systems of the Owens Valley Millimeter Array are considered, and improvements in the sensitivity and stability of the instrument are characterized. The instrument can be applied to map sources in the 85 to 115 GHz and 218 to 265 GHz bands with a resolution of about 1 arcsec in the higher frequency band. The operation of the array is fully automated. The current scientific programs for the array encompass high-resolution imaging of protoplanetary/protostellar disk structures, observations of molecular cloud complexes associated with spiral structure in nearby galaxies, and observations of molecular structures in the nuclei of spiral and luminous IRAS galaxies. 9 refs

  9. Processes for design, construction and utilisation of arrays of light-emitting diodes and light-emitting diode-coupled optical fibres for multi-site brain light delivery.

    Science.gov (United States)

    Bernstein, Jacob G; Allen, Brian D; Guerra, Alexander A; Boyden, Edward S

    2015-05-01

    Optogenetics enables light to be used to control the activity of genetically targeted cells in the living brain. Optical fibers can be used to deliver light to deep targets, and LEDs can be spatially arranged to enable patterned light delivery. In combination, arrays of LED-coupled optical fibers can enable patterned light delivery to deep targets in the brain. Here we describe the process flow for making LED arrays and LED-coupled optical fiber arrays, explaining key optical, electrical, thermal, and mechanical design principles to enable the manufacturing, assembly, and testing of such multi-site targetable optical devices. We also explore accessory strategies such as surgical automation approaches as well as innovations to enable low-noise concurrent electrophysiology.

  10. A 32 x 32 capacitive micromachined ultrasonic transducer array manufactured in standard CMOS.

    Science.gov (United States)

    Lemmerhirt, David F; Cheng, Xiaoyang; White, Robert; Rich, Collin A; Zhang, Man; Fowlkes, J Brian; Kripfgans, Oliver D

    2012-07-01

    As ultrasound imagers become increasingly portable and lower cost, breakthroughs in transducer technology will be needed to provide high-resolution, real-time 3-D imaging while maintaining the affordability needed for portable systems. This paper presents a 32 x 32 ultrasound array prototype, manufactured using a CMUT-in-CMOS approach whereby ultrasonic transducer elements and readout circuits are integrated on a single chip using a standard integrated circuit manufacturing process in a commercial CMOS foundry. Only blanket wet-etch and sealing steps are added to complete the MEMS devices after the CMOS process. This process typically yields better than 99% working elements per array, with less than ±1.5 dB variation in receive sensitivity among the 1024 individually addressable elements. The CMUT pulse-echo frequency response is typically centered at 2.1 MHz with a -6 dB fractional bandwidth of 60%, and elements are arranged on a 250 μm hexagonal grid (less than half-wavelength pitch). Multiplexers and CMOS buffers within the array are used to make on-chip routing manageable, reduce the number of physical output leads, and drive the transducer cable. The array has been interfaced to a commercial imager as well as a set of custom transmit and receive electronics, and volumetric images of nylon fishing line targets have been produced.
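
    The quoted figures make the "less than half-wavelength pitch" statement easy to check, assuming the usual ~1540 m/s speed of sound in soft tissue (an assumption; the record does not state the medium):

        # Half-wavelength check for the 250 um pitch at 2.1 MHz, assuming
        # c = 1540 m/s in soft tissue (assumed; not given in the record).
        c = 1540.0            # m/s
        f = 2.1e6             # Hz, centre frequency from the record
        pitch = 250e-6        # m, element pitch from the record

        half_wavelength = c / f / 2                              # ~367 um
        print(half_wavelength * 1e6, pitch < half_wavelength)    # 366.7 um, True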

  11. Building the Support for Radar Processing across Memory Hierarchies: On the Development of an Array Class with Shapes using Expression Templates in C++̂

    National Research Council Canada - National Science Library

    Mullin, Lenore

    2004-01-01

    ...), could be used to develop software for radar and other DSP applications. This software needs to be tuned to use the levels of memory hierarchies efficiently without the materialization of array valued temporaries 3...

  12. GridCom, Grid Commander: graphical interface for Grid jobs and data management

    Energy Technology Data Exchange (ETDEWEB)

    Galaktionov, V V

    2011-07-01

    GridCom is a software package that automates access to the resources (jobs and data) of the distributed Grid system. The client part, implemented as Java applets, provides Web-interface access to the Grid through standard browsers. The executive part, Lexor (LCG Executor), is started by the user on a UI (User Interface) machine and carries out the Grid operations.

  13. BIG: a Grid Portal for Biomedical Data and Images

    Directory of Open Access Journals (Sweden)

    Giovanni Aloisio

    2004-06-01

    Full Text Available Modern management of biomedical systems involves the use of many distributed resources, such as high-performance computational resources to analyze biomedical data, mass storage systems to store them, medical instruments (microscopes, tomographs, etc.), and advanced visualization and rendering tools. Grids offer the computational power, security and availability needed by such novel applications. This paper presents BIG (Biomedical Imaging Grid), a Web-based Grid portal for the management of biomedical information (data and images) in a distributed environment. BIG is an interactive environment that deals with complex user requests, regarding the acquisition of biomedical data and the "processing" and "delivering" of biomedical images, using the power and security of Computational Grids.

  14. Impact of Smart Grid Technologies on Peak Load to 2050

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    The IEA's Smart Grids Technology Roadmap identified five global trends that could be effectively addressed by deploying smart grids. These are: increasing peak load (the maximum power that the grid delivers during peak hours), rising electricity consumption, electrification of transport, deployment of variable generation technologies (e.g. wind and solar PV) and ageing infrastructure. Along with this roadmap, a new working paper -- Impact of Smart Grid Technologies on Peak Load to 2050 -- develops a methodology to estimate the evolution of peak load until 2050. It also analyses the impact of smart grid technologies in reducing peak load for four key regions; OECD North America, OECD Europe, OECD Pacific and China. This working paper is a first IEA effort in an evolving modelling process of smart grids that is considering demand response in residential and commercial sectors as well as the integration of electric vehicles.

  15. Current Grid operation and future role of the Grid

    Science.gov (United States)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the greater the burden on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from the situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place

  16. Current Grid operation and future role of the Grid

    International Nuclear Information System (INIS)

    Smirnova, O

    2012-01-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the greater the burden on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from the situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place

  17. Communication technologies in smart grid

    Directory of Open Access Journals (Sweden)

    Miladinović Nikola

    2013-01-01

    Full Text Available The role of communication technologies in the Smart Grid lies in the integration of a large number of devices into one telecommunication system. This paper provides an overview of the technologies currently in use in the electric power grid, which are not necessarily in compliance with the Smart Grid concept. Considering that the Smart Grid is open to the flow of information in all directions, it is necessary to ensure the reliability, protection and security of that information.

  18. Systematic cloning of human minisatellites from ordered array charomid libraries.

    Science.gov (United States)

    Armour, J A; Povey, S; Jeremiah, S; Jeffreys, A J

    1990-11-01

    We present a rapid and efficient method for the isolation of minisatellite loci from human DNA. The method combines cloning a size-selected fraction of human MboI DNA fragments in a charomid vector with hybridization screening of the library in ordered array. Size-selection of large MboI fragments enriches for the longer, more variable minisatellites and reduces the size of the library required. The library was screened with a series of multi-locus probes known to detect a large number of hypervariable loci in human DNA. The gridded library allowed both the rapid processing of positive clones and the comparative evaluation of the different multi-locus probes used, in terms of both the relative success in detecting hypervariable loci and the degree of overlap between the sets of loci detected. We report 23 new human minisatellite loci isolated by this method, which map to 14 autosomes and the sex chromosomes.

  19. Grid3: An Application Grid Laboratory for Science

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    level services required by the participating experiments. The deployed infrastructure has been operating since November 2003 with 27 sites, a peak of 2800 processors, work loads from 10 different applications exceeding 1300 simultaneous jobs, and data transfers among sites of greater than 2 TB/day. The Grid3 infrastructure was deployed from grid level services provided by groups and applications within the collaboration. The services were organized into four distinct "grid level services" including: Grid3 Packaging, Monitoring and Information systems, User Authentication and the iGOC Grid Operatio...

  20. Improvements in or relating to cellular grid structures

    International Nuclear Information System (INIS)

    Jolly, R.

    1979-01-01

    In cellular grid structures for positioning an array of nuclear fuel rods by locating them individually in a ferrule joined to its neighbours to form the grid structure, the ferrules are formed in pairs from tubular members each deformed to provide a waist. A bridge piece extends across the waist to divide the tubular member into two cells and it may incorporate a resilient member which projects into the two cells to urge fuel rods in the cells towards co-planar dimples formed in the tubular member opposite the resilient member. (author)